I'm spending some time with DynaTrace.
I'm impressed by its cross-JVM instrumentation feature.
In simple terms, DynaTrace instruments Java code, creating traces with some statistical information. This is nothing new.
One feature is really interesting, though: when a call to an external JVM is executed (i.e. a remote session bean, a web service, a remote RMI call and so on), DynaTrace is able to link the new trace to the caller's one.
How is this possible?
I can't imagine how to implement this feature. Any ideas?
Thank you
Dynatrace actually doesn't rely on information from beans. As you correctly said in your question, we use byte code instrumentation, as do other tools on the market. We instrument key methods of certain frameworks, e.g. Servlet, Axis, JMS, JDBC, ...
In the scenario where you make a call from one JVM to another using, e.g., HTTP-based communication, we instrument both the sending side of the HTTP request and the receiving side on the other JVM. On the sending side we attach an additional HTTP header with the ID of the current PurePath. PurePath is our patented technology: every PurePath (= every single transaction) gets a unique ID, and this ID "travels" with the request, e.g. we put it on the HTTP request as an HTTP header. On the receiving side - your second JVM - we inspect that HTTP header and therefore know that all the data we collect belongs to that PurePath. This allows us to do real end-to-end tracing without relying on things like beans and without correlating the data based on, e.g., timestamps.
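In code, the core idea is roughly the following (a minimal sketch of header-based tag propagation, not our actual implementation; the header name "X-Trace-Id" is made up for illustration):

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.UUID;

    public class TracePropagationSketch {
        public static final String TRACE_HEADER = "X-Trace-Id"; // hypothetical header name

        // Sending side: attach the current transaction ID to the outgoing request.
        public static HttpURLConnection openTracedConnection(URL url, String traceId) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty(TRACE_HEADER, traceId);
            return conn;
        }

        // Receiving side (e.g. an instrumented servlet in the second JVM): pick the ID
        // back up, so everything recorded here can be linked to the caller's trace.
        public static String traceIdFrom(javax.servlet.http.HttpServletRequest request) {
            String id = request.getHeader(TRACE_HEADER);
            return id != null ? id : UUID.randomUUID().toString(); // no caller ID: start a new trace
        }
    }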
Makes sense?
If you have more questions, let me know. I also recorded some videos and put them on YouTube to explain the technology and the product itself: http://bit.ly/dttutorials
This information is normally extracted using MXBeans. Such beans provide a standard API for accessing standard runtime information. Similarly, such applications often scan the class loaders for specific classes and extract relevant information through hard-coded access. This is why less popular solutions are often not supported by monitoring tools.
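For example, the standard runtime information is available to any JVM process through the platform MXBeans (plain java.lang.management API, nothing vendor-specific):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.RuntimeMXBean;
    import java.lang.management.ThreadMXBean;

    public class MXBeanInfo {
        public static void main(String[] args) {
            RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();

            System.out.println("JVM: " + runtime.getVmName() + " " + runtime.getVmVersion());
            System.out.println("Uptime (ms): " + runtime.getUptime());
            System.out.println("Heap used (bytes): " + memory.getHeapMemoryUsage().getUsed());
            System.out.println("Live threads: " + threads.getThreadCount());
        }
    }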
I have to write code that automatically creates a JIRA issue based on some action performed in my workplace. The solution my manager proposed is to create a JIRA creation agent. We are using the REST architecture.
Last time I wrote a client. Now I have to write an agent. What I don't understand is the technical difference between the two. As someone with very little REST experience, I find it hard to understand the core difference.
Do I have to code them in a different style? And what are some good practices for writing this kind of code?
I tried reading different blogs and related posts but couldn't find anything satisfactory that points out the differences.
This may differ semantically based on your company's internal terminology, but typically it is as follows:
A REST server is the software which provides the API that is exposed.
A REST client is the software which uses the REST server's API to make requests and get the resulting information (usually JSON). It is more of an interface for making the requests.
A REST agent uses the REST client to make the requests, but actually consumes the resulting JSON and processes it to perform some sort of action.
Colloquially, however, people use "REST client" and "REST agent" interchangeably. The main thing is the delineation between who provides information through an API and who requests information through an API.
EDIT: To clarify, in your case the agent would be making a request through the API, most likely a PUT or POST request to create a JIRA issue.
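For illustration, a bare-bones agent could look like this (a sketch using Java 11's HttpClient; the URL, project key and credentials are placeholders, and the JSON body should be checked against your JIRA version's REST documentation):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class JiraAgent {
        public static void main(String[] args) throws Exception {
            // Minimal issue payload; "PROJ" and the summary are hypothetical.
            String body = "{\"fields\": {"
                    + "\"project\": {\"key\": \"PROJ\"},"
                    + "\"summary\": \"Created automatically by agent\","
                    + "\"issuetype\": {\"name\": \"Task\"}}}";

            String auth = Base64.getEncoder().encodeToString("user:apitoken".getBytes());
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://jira.example.com/rest/api/2/issue"))
                    .header("Authorization", "Basic " + auth)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + ": " + response.body());
        }
    }

Note that the difference from your earlier client is not the style of the HTTP code; it is that the agent runs unattended, reacts to an event, and acts on the response instead of just displaying it.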
I read somewhere about the use of web services in apps. After a lot of research I was able to create a web service which accepts both JSON and JSONP as request and response formats. I developed the web service using Java, Apache Axis2, Hibernate and MySQL as the database. There are a few problems and I don't know how to solve them:
Insert/delete operations: sometimes, if more than two users call the service at the same time to insert or delete a row, the queries go into sleep mode, and the next time someone tries to use the service they can't. According to the server log the error is an SQL lockout state. If I check the process list in MySQL, it shows the query as sleeping, and I have to kill it to resume.
The performance of the web service doesn't seem to be up to the mark; it takes more time than, in my experience, it should. In simple words, how do I obtain better performance from the services?
How do I implement a security feature such that when a user logs in, he/she is given an ID which is then validated, so that unauthorized access is prevented?
Or just guide me on the most appropriate and optimized web service methodology to use with Java.
The answer to this question is not specific to Android. Below are my investigations, which might be useful for you.
For the point about MySQL connections going into sleep mode, you can do the following:
Debug the datasource used by Hibernate, try to increase the pool size and check for any issues in it.
Define a timeout period for connections. JBoss has several configurations related to this, like blocking-timeout-millis, idle-timeout-minutes etc.
Define a mechanism to periodically validate the connection resources in the pool for activeness. You can explore OracleStaleConnectionChecker for options.
Configure a minimum number of connections in the pool. This is important because when all the stale connections are discarded, the empty pool needs to be pre-filled and ready with active connections. (A sketch of this kind of pool tuning follows this list.)
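If you manage the pool through Hibernate's c3p0 integration, the tuning above maps onto a handful of properties; a hedged sketch (the property names are standard Hibernate/c3p0 settings, the values are just examples to adjust for your load):

    import org.hibernate.cfg.Configuration;

    public class PoolConfig {
        public static Configuration tunePool(Configuration cfg) {
            cfg.setProperty("hibernate.c3p0.min_size", "5");          // pre-filled minimum connections
            cfg.setProperty("hibernate.c3p0.max_size", "20");         // upper bound on the pool
            cfg.setProperty("hibernate.c3p0.timeout", "300");         // seconds before an idle connection is discarded
            cfg.setProperty("hibernate.c3p0.idle_test_period", "60"); // validate idle connections periodically
            cfg.setProperty("hibernate.c3p0.acquire_increment", "2"); // connections fetched when the pool is exhausted
            return cfg;
        }
    }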
Coming to the performance of insert/delete operations and the SQL lockout state, try to re-order the sequence of the queries which you fire at the DB on every request. This may not be a deadlock situation, but sequencing DB queries consistently will definitely lead to less lockout time and better performance.
This answer may be of use for you: Hibernate: Deadlock found when trying to obtain lock
The web services which you have developed may require some performance optimization to make them up to the mark. Below are the first few steps you can take to bring the performance up:
Avoid nested loops. Every extra level of nesting over an iterated list raises the order of the loop by another factor.
Avoid premature initialization of objects. It can lead to long, unwanted GC cycles.
Apart from the above optimizations, there are several frameworks and tools at your service for evaluating code quality and performance. PMD, FindBugs, JMeter and Java profilers are a few of them.
Shishir
You are going to have to profile your server and see where the time is spent. I really like YourKit for doing thread profiling. VisualVM, which comes with the JDK, can help as well.
There are all sorts of reasons your web service can be slow:
Latency from client to server
Handling the HTTP request on the server
Handling the HTTP response on the client
Making the database call (sounds like you already have some kind of locking / blocking going on there)
You are going to have to place markers to tell you how long it took to go from A to B to C to D, back to C, back to B, back to A. We would be speculating heavily from here on about what exactly is going on in your program, but we can give you the ideas and tools to figure it out.
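In its crudest form, a "marker" is just a pair of nanoTime readings around each hop; something like this (the method being timed is a placeholder for your client call or DB call):

    public class TimingMarkers {
        public static void main(String[] args) {
            long start = System.nanoTime();
            doWork(); // stand-in for the web service call or the database call you want to measure
            long elapsed = System.nanoTime() - start;
            System.out.printf("elapsed: %.1f ms%n", elapsed / 1_000_000.0);
        }

        private static void doWork() { /* placeholder for the call under test */ }
    }

Measure the full round trip on the client, then the DB call on the server; the difference tells you how much is network plus framework overhead.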
If you use YourKit, connect it to your server process and have nothing else connecting to your server (for instance, your client is not sending requests). Then try it with your client requesting: you should see your accepting threads receive the HTTP request and then either delegate to a processing thread or do the processing themselves. You can use YourKit to see how much time is spent in different functions during that call.
Try it with your client making the call.
Try it using a simple HTTP request tool like wget, or maybe your IDE has a web service test tool (IntelliJ does, for instance), or you can download a simple HTTP test tool.
By testing with a simple tool that just outputs the response, you can eliminate any client-side processing issues. You can achieve a similar test in Chrome or Firefox, using the developer tools to see the time taken to fulfil the request.
In my experience, the framework handling the requests and delegating can introduce performance issues of its own. I ripped Grails out of a production environment because of its performance issues (before any Grails/Groovy flames come my way: we were operating at a much higher rate than typical web applications, and I am sure Grails has made some headway in the last couple of years... alas, it was not for my need at that time).
BTW, I doubt you are operating at a load where you will be critiquing the web service framework you chose. I have been happy with Spring MVC and Dropwizard (Jersey JAX-RS), and Grails is easy to use too.
You should make a simple static-content response in your web service and see how quickly that returns versus a request that makes a database call.
Also, what kind of table are you using in MySQL? InnoDB? MyISAM? They have different locking schemes (InnoDB locks individual rows, MyISAM locks whole tables), and that could be causing your MySQL issue.
The key to all of it: break the problem up into parts, measure each part, and eliminate parts one by one until you can say "every time I do X it is slower" (for example, every time I make a database call it is slower).
In Java, the way to find the most support online via documentation and forums is to develop the web service as a REST web service using Spring MVC.
You can base yourself on this resource and take it from there:
Spring MVC REST Hello World Web Service
Using Spring you can create a RESTful web service easily, and Spring does all the groundwork you need. As others have mentioned, you can consume the web service from any type of client, including Android.
A detailed guide available here:
https://spring.io/guides/gs/rest-service/
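In the spirit of that guide, a minimal endpoint looks roughly like this (a Spring Boot sketch; the class name and path are illustrative):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    @RestController
    public class GreetingService {
        public static void main(String[] args) {
            SpringApplication.run(GreetingService.class, args);
        }

        @GetMapping("/greeting")
        public String greeting(@RequestParam(defaultValue = "World") String name) {
            return "Hello, " + name + "!"; // Spring writes the response body for you
        }
    }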
Here are my suggestions:
Make each API either read from or write to the database. An API that combines reading and writing can cause deadlocks;
Use a lightweight HTTP server. A heavyweight HTTP server is likely to consume more resources.
Make use of threads. Having more threads can be helpful when you are facing a ton of users.
Make more things static. You can avoid unnecessary queries that way.
I think mhoglan's answer is detailed enough.
We're trying to design a new addition to our application. Basically, we need to submit very basic queries to various remote databases which are accessed over the internet and are not owned or controlled by us.
Our proposal is to install a small client app on each of the foreign systems, tiered in two basic layers: one tailored to the particular database it is talking to, which handles the actual query in SQL or whatever; the other a communication tier which handles incoming requests and sends back responses. This communication interface would be the same across all of the foreign systems, i.e. all requests and responses would have the same structure.
In terms of Java remoting, I guess this small client app would be the 'server' and our webapp (normally referred to as the server) would be the 'client'.
I've looked at various Java remoting solutions (Hessian, Burlap, RMI, SOAP/REST web services). However, am I correct in thinking that with all of these the 'server' must run in a container, i.e. in a Tomcat/Jetty etc. instance?
I was really hoping to avoid battling all the IT departments that control the foreign systems to get them to install very much. The whole idea is that it's thin/small/easy to install/pain free. Are there any solutions that do not require running in a container/web server?
The communication really is the smallest part of this design: no more than 10 string input parameters (which have no meaning other than to the DB) and one true/false output. No complex object model is required. The only complexity would come from security/encryption etc.
I warmly suggest something based on Jetty, the embeddable HTTP server. You package a simple runnable JAR with dependency JARs into a ZIP file, add a startup script, and you have your product. See for example here.
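The whole 'server' can then be as small as this (a sketch against the classic org.eclipse.jetty embedded API; the servlet body is a placeholder for your query handling):

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.servlet.ServletContextHandler;
    import org.eclipse.jetty.servlet.ServletHolder;

    public class EmbeddedQueryServer {
        public static void main(String[] args) throws Exception {
            Server server = new Server(8080);
            ServletContextHandler context = new ServletContextHandler();
            context.setContextPath("/");
            context.addServlet(new ServletHolder(new HttpServlet() {
                @Override
                protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                        throws java.io.IOException {
                    // Placeholder: run the DB-specific query here and answer true/false.
                    resp.getWriter().println("true");
                }
            }), "/query");
            server.setHandler(context);
            server.start();
            server.join(); // block until the server is stopped
        }
    }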
I often use Spring Remoting in my projects, and here you can find a description of how to use it without a container. The author starts Jetty from within his application:
http://forum.springsource.org/showthread.php?12852-HttpInvoker-without-web-container
http://static.springsource.org/spring/docs/2.0.x/reference/remoting.html
Regards,
Boskop
Yes, most of them run in a standard servlet container. But containers like Jetty have a very low footprint, and you can configure and run Jetty completely from your code while staying within the servlet standards.
Don't underestimate the initial minimal requirements, which may grow as the project is enhanced over time; having a standard container then makes things much easier.
As you have tagged this question with [rmi], RMI does not require any form of container. All you need is the appropriate TCP ports to be open.
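For instance, a container-less RMI endpoint can be as small as the following sketch (the interface and names are illustrative, and the implementation is a stub):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    public class RmiQueryServer {
        // The remote interface your webapp (the RMI 'client') would call.
        public interface QueryService extends Remote {
            boolean runQuery(String[] params) throws RemoteException;
        }

        static class QueryServiceImpl implements QueryService {
            public boolean runQuery(String[] params) {
                return true; // stand-in for the real database-backed query
            }
        }

        public static void main(String[] args) throws Exception {
            QueryService stub = (QueryService) UnicastRemoteObject.exportObject(new QueryServiceImpl(), 0);
            Registry registry = LocateRegistry.createRegistry(1099); // default RMI port
            registry.rebind("QueryService", stub);
            System.out.println("RMI query service ready");
        }
    }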
GWT RPC is proprietary but looks solid, is supported with patterns by Google, and is mentioned by every book and tutorial I've seen. Is it really the choice for GWT client/server communication? Do you use it, and if not, why not and what did you choose? Assume that I have generic server application code that can accommodate RPC, EJBs, web services/SOAP, REST, etc.
Bonus question: any security issues with GWT RPC I need to be aware of?
We primarily use three methods of communication:
GWT-RPC - This is our primary and preferred mechanism, and what we use whenever possible. It is the "GWT way" of doing things, and works very well.
XMLHttpRequest using RequestBuilder - This is typically for interaction with non-GWT back ends, and we use it mainly to pull in static web content that we need at runtime (something like server-side includes). It is especially useful when we need to integrate with a CMS. We wrap our RequestBuilder code in a custom "Panel" that takes a content URI as its constructor parameter and populates itself with the contents of the URI (see the sketch after this list).
Form submission using FormPanel - This also requires interaction with a non-GWT back end (a custom servlet), and is what we currently use to do cross-site communication. We don't really communicate "cross-site" per se, but we do sometimes need to send data over SSL from a non-SSL page, and this is the only way we've been able to do it so far (with some hacks).
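The RequestBuilder pattern from the second item looks roughly like this (a sketch; how the panel populates itself in the callbacks is illustrative):

    import com.google.gwt.http.client.Request;
    import com.google.gwt.http.client.RequestBuilder;
    import com.google.gwt.http.client.RequestCallback;
    import com.google.gwt.http.client.RequestException;
    import com.google.gwt.http.client.Response;
    import com.google.gwt.user.client.ui.HTML;

    public class ContentPanel extends HTML {
        public ContentPanel(String contentUri) {
            RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, contentUri);
            try {
                builder.sendRequest(null, new RequestCallback() {
                    public void onResponseReceived(Request request, Response response) {
                        setHTML(response.getText()); // populate the panel with the fetched content
                    }
                    public void onError(Request request, Throwable exception) {
                        setHTML("failed to load content"); // minimal error handling
                    }
                });
            } catch (RequestException e) {
                setHTML("failed to send request"); // sendRequest can throw before the call is made
            }
        }
    }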
The problem is that you are in a web browser, so any non-HTTP protocol is pretty much not guaranteed to work (it might not get through a proxy).
What you can do is isolate the GWT-RPC stuff in a single replaceable class, so you can strip it out later if need be.
Personally, I'd just rely on transferring a collection of objects with the information I need encoded inside the collection. That way there is very little RPC code, because all your RPC code ever does is "Collection commands = getCollection()", but there are a million other possibilities.
Or just use GWT-RPC as it was intended; I don't think it's going anywhere.
Our product is built on a client-server architecture, with the server implemented in Java (we are using POJOs with the Spring framework). We have two API levels on the server:
the external API, which uses REST web services - useful for external clients and for integrations with other servers;
the internal API, which uses pure Java classes - useful for the actual code inside (as the business logic often invokes an API call) and for integration with plugins developed inside our company and deployed as parts of our product. The external REST API also uses the internal API.
We implemented permission checking (using Spring Security) in the internal API because we wanted to control access at the lowest API level.
But here comes the problem: there are some operations defined at the API level that are forbidden for the currently logged-in user, but which should be performed smoothly by the server itself. For example, deleting some entity could be forbidden for the user, but the server might want to delete this entity as a side effect of some other operation performed by the user, and we want this to be allowed.
So what is the best approach for allowing the server to perform an operation (in some kind of super-user mode) that might be forbidden for the actual logged-in user?
As I see it, we have several options, each of which has its pros and cons:
Implement permission checking at the external API level (REST) - bad, because plugins would bypass the permission checks.
Turn off permission checking for the current thread after the request is granted - too dangerous; we might allow too many server actions that should be forbidden.
Explicitly ask the internal API level to perform the operation in a privileged mode (just like PrivilegedAction in the Java security framework) - too verbose. A sketch of what I mean follows this list.
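By "privileged mode" I mean something like the following sketch, built on Spring Security's context (the "system" principal and ROLE_SYSTEM authority are made-up names):

    import java.util.concurrent.Callable;
    import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
    import org.springframework.security.core.Authentication;
    import org.springframework.security.core.authority.AuthorityUtils;
    import org.springframework.security.core.context.SecurityContextHolder;

    public final class SystemPrivileges {
        public static <T> T runAsSystem(Callable<T> action) throws Exception {
            Authentication original = SecurityContextHolder.getContext().getAuthentication();
            Authentication system = new UsernamePasswordAuthenticationToken(
                    "system", null, AuthorityUtils.createAuthorityList("ROLE_SYSTEM"));
            try {
                SecurityContextHolder.getContext().setAuthentication(system);
                return action.call(); // the internal API call that needs elevation
            } finally {
                SecurityContextHolder.getContext().setAuthentication(original); // always restore the user
            }
        }
    }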
As none of the above approaches is ideal, I wonder what is the best-practice approach for this problem?
Thanks.
Security is applied at the boundaries of a module. If I understand you correctly, your system applies security at two levels of abstraction of (roughly) the same API. That sounds complex, as you have to make a double security check across the two APIs.
Consider migrating the methods REST needs from the internal API to the external one, and deleting the security stuff from the internal API:
the external API will manage security for external clients (at the boundaries of your app);
the internal API will be strictly reserved for internal app and plugin use (and you can happily hack on it, as no external clients are bound to it).
Do you really need to control the plugins' permissions on your application logic? Is there a good reason for it? The plugins are developed by your company, after all. Maybe a formal document explaining to plugin developers what should not be done, or a safety test suite validating the plugin (e.g. asserting that the plugin does not call "this" method), would do the job just as well.
If you still need to consider these plugins "untrusted", add the methods they need to your external API (on your app boundary) and create a specific security profile for each use: "restProfile", "clientProfile" and "pluginProfile". Each will have specific rights on your external API methods.
It sounds like you need two levels of internal API, one exposed to plugins and one not.
The best way of enabling that would be using OSGi (or Spring Modules). It allows you to state explicitly which packages and classes can be accessed by other modules (i.e. REST modules and plugin modules). Those would be the exposed level of your new internal API, and you would use Spring Security to further restrict access selectively. The internal packages and classes would contain the methods which do all the low-level stuff (like deleting entities), and you wouldn't be able to call them directly. Some of the exposed API would just duplicate the internal API with a security check, but that would be OK.
The problem with the best way is that Spring Modules strikes me as still a bit too immature to put even into a new webapp project. There's no way I'd want to shoehorn it into an old project.
You could probably achieve something similar using Spring Security and AspectJ, but it strikes me that the performance overhead would be prohibitive.
One solution that would be quite cool, if you could re-architect your system, would be to take tasks requiring security elevation offline, or rather make them asynchronous. Using Quartz and/or Apache Camel (or a proper ESB), you could make the "delete my account" method create an offline task that can be executed at a future date as an atomic unit of work with admin privileges. That means you can cleanly do the security checks for the user requesting account deletion in a completely separate thread from the one where the deletion actually takes place. This has the advantage of making the web thread more responsive, although you'd still want to do some things immediately to preserve the illusion that the requested action has been completed.
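Stripped of Quartz/Camel, the shape of that idea is roughly this (a sketch with an in-memory queue; a real system would persist the tasks and run the worker under a proper system identity):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class DeletionQueue {
        private final BlockingQueue<String> pendingDeletes = new LinkedBlockingQueue<>();

        // Called on the web thread, *after* the user's own permissions are checked.
        public void requestDeletion(String accountId) {
            pendingDeletes.add(accountId); // fast: the user just sees "deletion scheduled"
        }

        // Background worker running with admin privileges, detached from the web thread.
        public void startWorker() {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        String accountId = pendingDeletes.take();
                        // perform the deletion here as an atomic unit of work
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // allow clean shutdown
                }
            }, "deletion-worker");
            worker.setDaemon(true);
            worker.start();
        }
    }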
If you're using Spring, you may as well utilize it fully. Spring offers AOP, which allows you to use interceptors to perform these cross-cutting checks and, in the event of an unauthorized action, prevent the action.
You can read more about this in Spring's online documentation here.
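Such an interceptor might look roughly like this (a hedged sketch; the pointcut expression and the internal API package are assumptions):

    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Before;
    import org.springframework.security.access.AccessDeniedException;
    import org.springframework.security.core.Authentication;
    import org.springframework.security.core.context.SecurityContextHolder;
    import org.springframework.stereotype.Component;

    @Aspect
    @Component
    public class PermissionAspect {
        @Before("execution(* com.example.internal.api..*(..))") // hypothetical internal API package
        public void checkPermission() {
            Authentication auth = SecurityContextHolder.getContext().getAuthentication();
            if (auth == null || !auth.isAuthenticated()) {
                throw new AccessDeniedException("not authorized"); // prevents the intercepted action
            }
        }
    }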
Hope this helps...
Yuval =8-)