Are there any libraries out there that can generate the JSON/YAML file by analysing static code and generating the JSON/YAML files off of that?
Right now we're producing the Swagger files once the project has finished building: we hit the URL at /api/swagger.yaml and do what we need with the file (this adds quite a bit of complexity to our automated builds).
Yes. As you're sort of doing already, you can put that step in your CI: start up a dummy Spring app, make an HTTP request to download the JSON/YAML, then shut the app down and keep the file. As far as I'm aware there are no tools that do this at compile time for Java, so that's as good as it gets.
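For what it's worth, the download step itself is small. This is only a sketch: it uses the JDK's built-in HttpServer as a stand-in for the running Spring app (the path and the spec contents are made up), then fetches the file the way a CI script would.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FetchSpec {
    public static void main(String[] args) throws Exception {
        // Stand-in for the running Spring app: a tiny server exposing the spec.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        byte[] spec = "swagger: \"2.0\"\ninfo:\n  title: demo\n".getBytes("UTF-8");
        server.createContext("/api/swagger.yaml", exchange -> {
            exchange.sendResponseHeaders(200, spec.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(spec);
            }
        });
        server.start();
        int port = server.getAddress().getPort();

        // The actual CI step: hit the endpoint and save the file.
        Path out = Paths.get("swagger.yaml");
        try (java.io.InputStream in =
                 new URL("http://localhost:" + port + "/api/swagger.yaml").openStream()) {
            Files.copy(in, out, java.nio.file.StandardCopyOption.REPLACE_EXISTING);
        } finally {
            server.stop(0);
        }
        System.out.println(new String(Files.readAllBytes(out), "UTF-8"));
    }
}
```

In a real build you would of course point the URL at the actual app instead of a fake server, but the fetch-and-save half is the same.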
The other way is to do things in the opposite order: first design your API interface in Swagger, then generate code (your interfaces) which you then implement yourself.
From the sounds of it, you want to be designing your API first. This lets you put the spec in version control, which makes it easier to manage your API's version life cycle and track its changes; otherwise it can be quite difficult to see how things have changed when you have to search through annotated methods in code. That is what Swagger is designed for.
You don't necessarily need to do codegen from your Swagger spec. I have found it very awkward at times: it is extremely dumb at generating code (it's just Mustache templating), and IMO the tool is fundamentally flawed for that reason. But, as my most hated saying goes:
it's free so you can't complain
Related
I am trying to incorporate swagger-codegen in my new greenfield project, using Java (jaxrs-jersey2).
There are a lot of resources out there already documenting various parts of the project; however, I still haven't been able to find out any high-level advice on the best workflow to use with these tools.
As I understand it, swagger-codegen will be able to generate client-side code to interact with my API, such that I don't have to write this myself. This will happen by looking at a swagger.yaml (2.0) or openapi.yaml (3.0) file. This part is clear.
However, there seems to be multiple ways of generating this specification file. As I understand it, there are two primary ways:
Write a server implementation using a combination of jaxrs and Swagger annotations - have a maven plugin run as part of the compile step, generating a swagger.yaml specification file to be used by the client-generation plugin.
Write a swagger.yaml specification first, and generate server-stub code for Jersey, implementing only business logic, separate from all server boilerplate.
Which of these two ways is the recommended workflow? It sounds like (2) means writing less code and focusing on just application logic, without worrying too much about Jersey-specific glue to make the API work. This also means that the single source of truth for the API becomes a simple yaml file, rather than a bunch of Jersey code.
However, I'm unsure how to properly set this up:
Does my build need to have a compilation phase where server stubs are constantly regenerated?
Do I simply extend from the generated server stub and never worry about annotating API paths with @Path, @GET, etc.?
Am I misunderstanding the use-case for server-stub generation? I.e. is the first approach (Jersey code-first) more appropriate?
If there is no real difference between the two approaches, when would you pick (1) over (2) and vice-versa?
Many thanks.
Most of the time, I don't like JavaScript and would prefer strict, compiled languages like Scala, Java, Haskell...
However, one thing that can be nice with JavaScript is being able to easily change the code of external dependencies. For example, if you have a bug and you think it's in one of your dependency libraries, you can easily hack around it and swap a library method with your own override to check whether things improve. You can even add methods to the Array or String prototypes, and so on. One could even go into node_modules and temporarily alter the library code there.
In the JVM world, this seems like a heavy process just to get started:
Clone the dependency sources
Hack it
Compile it
Publish it to some local maven/ivy repository
Integrate the fixed version in your project
This is a pain; I just don't want to do that more than once a year.
Today I was trying to fix a bug in my app, and the lib did not give me enough information. I would have loved to just put a Logger on one line of that lib to get better insight into what was happening, but instead I tried to hack around with the debugger, with no success (the bug was not reproducible on my machine anyway...)
Isn't there any simple alternative for rapidly altering the code of a dependency?
I would be interested in any solution for Scala, Java, Clojure or any other JVM language.
I'm not looking for a production-deployable solution, just a quick solution to use locally and eventually deployable on a test env.
Edit: I'm talking about library internals that are not intended to be modified by the library author. Please assume that the class to change is final, not replaceable through library configuration, and not injectable in any way into the library.
In Clojure you can re-bind vars, including those from other namespaces, by using intern. So as long as the code you want to alter is Clojure code, monkeypatching is possible:
(intern 'user 'inc dec)
(inc 1)
=> 0
This is not something to do lightly though, since it can and will lead to problems with other code not expecting this behavior. It can be handy to use during development to temporarily fix edge cases or bugs in other libraries, but don't use it in published libraries or production code.
Best to simply fork and fix these libraries, and send a pull request to have it fixed in the original library.
When you're writing a library yourself that you expect people will need to extend or overload, implement it using Clojure protocols, so that such changes can be restricted to the extending/overloading namespaces only.
I disagree that AspectJ is difficult to use; it, or another bytecode manipulation library, is your only realistic alternative.
Load-time weaving is a definite way around this issue. Depending on how you're using the class in question you might even be able to use a mocking library to achieve the same results, but something like AspectJ, which is specifically designed for augmentation and manipulation, would likely be the easiest.
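If the class in question happens to be used through an interface, a plain JDK dynamic proxy can get you part of the way without any weaving at all. This is only a sketch: Greeter and LibGreeter are hypothetical stand-ins for a dependency you can't edit.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyPatch {
    // Hypothetical stand-in for an interface exposed by the dependency.
    public interface Greeter {
        String greet(String name);
    }

    // The "library" implementation we cannot edit.
    public static class LibGreeter implements Greeter {
        public String greet(String name) { return "hello " + name; }
    }

    // Wrap the library object: log every call and override one method.
    public static Greeter patch(Greeter real) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("calling " + method.getName()); // the one-line logger
            if (method.getName().equals("greet")) {
                return "patched " + args[0];                   // overridden behaviour
            }
            return method.invoke(real, args);
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(), new Class<?>[]{Greeter.class}, handler);
    }

    public static void main(String[] args) {
        System.out.println(patch(new LibGreeter()).greet("world")); // prints "patched world"
    }
}
```

This only works at interface boundaries, which is exactly why final concrete classes push you toward bytecode-level tools like AspectJ.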
So here is my problem.
I'm using the wsdl2java tool to transform my web services into a Java API.
The thing is, when I generate the Java stubs, my code contains something like this:
public void function(com.xxxxx.ssssss.Myclass myclass){...}
My question is:
How can I remove the "com.xxxxx.sssss." part from the whole code and move it into the import section, automatically rather than by hand, because doing it manually would take too long?
Thank you
The vast majority of those classes shouldn't ever be edited at all; just generate them from the WSDL and leave them well alone. Yes, they'll be verbose but you'll just have to live with that (or offer to work on a better code generator for the CXF project, of course!)
The one class that you can edit is the skeleton (…Impl.java) that is generated with the -impl option. In fact, that's the source file that you should edit as it will contain the implementation logic for the service, which is your job. You generate it once and can change it however you want thereafter provided you implement the correct interface and have the right annotations. In particular, using refactoring tools to generate import declarations is perfectly fine (I find that this is easy to do in Eclipse; I'd be startled if other Java IDEs didn't also support something similar).
The only real fly in the ointment comes if you start altering the original WSDL significantly. While adding and removing methods is not too hard to deal with, the bigger the change the more work it is to support. You may have to look carefully at whether the service skeleton should be regenerated from scratch, but that will cost you all your changes; if you're expecting to be doing that a lot, it's a good idea to factor out much of the actual implementation of the service into worker classes so that you only need to rebuild the actual connection to the SOAP service. (Luckily, using Spring DI makes this sort of factorization really easy to manage, so much so that it's a good idea to use it anyway.)
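That factorization can be sketched in plain Java (the service names here are hypothetical, and the Spring wiring is omitted): the generated skeleton stays a thin shell, and the worker class holds the logic that survives regeneration.

```java
// Hypothetical generated interface (in reality produced by wsdl2java).
interface OrderService {
    String placeOrder(String item);
}

// The worker holds the real business logic; it survives skeleton regeneration.
class OrderWorker {
    public String placeOrder(String item) {
        return "ordered:" + item;
    }
}

// The generated skeleton (...Impl) stays a thin shell that only delegates,
// so regenerating it from the WSDL costs you a few lines, not the logic.
class OrderServiceImpl implements OrderService {
    private final OrderWorker worker = new OrderWorker();
    public String placeOrder(String item) {
        return worker.placeOrder(item);
    }
}

public class DelegationSketch {
    public static void main(String[] args) {
        OrderService svc = new OrderServiceImpl();
        System.out.println(svc.placeOrder("book")); // prints "ordered:book"
    }
}
```

With Spring DI the worker would simply be injected into the skeleton instead of constructed inline, but the shape is the same.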
I have developed a Java library which I consider valuable intellectual property. I want to protect it from being used by unauthorized software.
I should say that my library has a clean API, and that I will distribute the source code of the project which uses it (not the library itself) to my customer.
I mean I want to change the library somehow so that it only works properly in my company's projects, and no one else can use it in other projects.
What is the best solution to protect the library?
I must add that I can obfuscate the library (but not the customers' application).
Two possibilities:
You want to publish the source code of your app and allow clients to compile and modify it themselves. In this case, protecting your library is technically impossible, whatever the language.
You give the source only for information, and don't want them to compile or modify it. In this case I see at least two levels of security you can implement:
You compile and obfuscate your application together with the source of your library. That way, your entire public API will be obfuscated and so almost unusable (unless someone really wants to deobfuscate it, good luck...). You can also repackage your classes so that all the library classes live in the same packages as the app, making it very hard to tell which files are part of the library and what they do.
You implement a mechanism at compile time that computes a hash of your app jar, and modify your library source code to check at runtime that the app really is your app.
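The hash-check idea might look roughly like this. It's only a sketch: the app jar is faked inline, and the expected hash, which would really be injected at build time, is computed on the spot.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class JarCheck {
    // Hex-encoded SHA-256 of a byte array.
    public static String sha256(byte[] data) throws Exception {
        byte[] d = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the application jar; in the real setup the library would
        // locate the caller's jar, and the expected hash would be baked into
        // the (obfuscated) library at build time.
        Path appJar = Paths.get("app.jar");
        Files.write(appJar, "fake jar contents".getBytes("UTF-8"));
        String expected = sha256(Files.readAllBytes(appJar)); // injected by the build

        // Runtime check inside the library:
        String actual = sha256(Files.readAllBytes(appJar));
        if (!actual.equals(expected)) {
            throw new IllegalStateException("library used outside the licensed app");
        }
        System.out.println("hash check passed");
    }
}
```

Note that anyone who can read the decompiled library can also find and remove this check, which is why it only makes sense on top of obfuscation, not instead of it.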
I believe that obfuscation is enough. If someone succeeds in understanding your obfuscated code, they will crack solution 2 quite easily anyway.
Beyond that, you cannot do anything; there is no mechanism for this.
For obfuscation I strongly recommend ProGuard.
I have a scenario where I have code written against version 1 of a library but I want to ship version 2 of the library instead. The code has shipped and is therefore not changeable. I'm concerned that it might try to access classes or members of the library that existed in v1 but have been removed in v2.
I figured it would be possible to write a tool to do a simple check to see if the code will link against the newer version of the library. I appreciate that the code may still be very broken even if the code links. I am thinking about this from the other side - if the code won't link then I can be sure there is a problem.
As far as I can see, I need to run through the bytecode checking for references, method calls and field accesses to library classes, then use reflection to check whether each class/member exists.
I have a three-fold question:
(1) Does such a tool exist already?
(2) I have a niggling feeling it is much more complicated than I imagine and that I have missed something major - is that the case?
(3) Do you know of a handy library that would allow me to inspect the bytecode such that I can find the method calls, references etc.?
Thanks!
I think that Clirr - a binary compatibility checker - can help here:
Clirr is a tool that checks Java libraries for binary and source compatibility with older releases. Basically you give it two sets of jar files and Clirr dumps out a list of changes in the public api. The Clirr Ant task can be configured to break the build if it detects incompatible api changes. In a continuous integration process Clirr can automatically prevent accidental introduction of binary or source compatibility problems.
Changing the library in your IDE will result in all possible compile-time errors.
You don't need anything else, unless your code uses another library, which in turn uses the updated library.
Be especially wary of Spring configuration files. Class names are configured as text and don't show up as missing until runtime.
If you have access to the source code, you could just compile the source against the new library. If it doesn't compile, you definitely have a problem. If it does compile, you may still have a problem if the program uses reflection, some kind of IoC stuff like Spring, etc.
If you have unit tests, then you have a better chance of catching any linking errors.
If you only have the .class files of the program, then I don't know of any tools that would help, besides decompiling the class files to source and compiling that source again against the new library, but that doesn't sound too healthy.
The checks you mentioned are done by the JVM/Java class loader, see e.g. Linking of Classes and Interfaces.
So "attempting to link" can simply be achieved by trying to run the application. Of course you could hoist the checks and run them yourself on your collection of .class/.jar files. I guess a number of third-party bytecode manipulators like BCEL will also do similar checks for you.
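Hoisting the checks yourself boils down to reflection lookups: given the class and member names harvested from the bytecode, verify that each still resolves against the new jar. A minimal sketch, using java.lang classes as stand-ins for the library's classes:

```java
public class LinkCheck {
    // Returns true if the named method exists on the named class with the
    // given parameter types; false if the class or the member is missing.
    public static boolean methodExists(String className, String method, Class<?>... params) {
        try {
            Class.forName(className).getMethod(method, params);
            return true;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // java.lang classes stand in for "classes of the v2 library".
        System.out.println(methodExists("java.lang.String", "isEmpty"));     // true
        System.out.println(methodExists("java.lang.String", "removedInV2")); // false
        System.out.println(methodExists("com.example.Gone", "anything"));    // false
    }
}
```

To check against the new library version specifically, you would load the classes through a URLClassLoader pointing at the v2 jar instead of relying on the default classpath.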
I notice that you mention reflection in the tags. If you load classes/invoke methods through reflection, there's no way to analyse this in general.
Good luck!