Well, I have a simple question.
We are working on a simple application-server-like system, and this server accepts clients' business logic at runtime without restarting the server.
When a user implements their business logic and deploys it to the server, the server finds the archive descriptor, loads the modules, and it works well.
Some operations need many reflection calls, and those calls are repeated on every invocation. For example, there is a method which accepts any object, searches for a certain field marked by an annotation, and does some business with it; so if we call this method 1000 times with the same object, it is going to reflect 1000 times.
My question is: is this efficient? I mean, doesn't it eat up the CPU?! The only solution I can think of is to create and compile a class for each object (maybe a wrapper), so the method just looks up the wrapper class. But I know this may make the system complex and hard to debug.
The current solution works, but doing the same work 1000 times seems illogical to me, even if it is simple and easy.
Thanks in advance.
The use of reflection to dynamically load classes at runtime is not a bad choice per se. Based on your description, you should provide an extensible framework that allows your clients to make an implementation and run their business logic through it, instead of relying on implicit runtime annotation magic.
A good real-world example of this, off the top of my head, is the Servlet API.
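For illustration, a minimal sketch of such an explicit extension point (the interface and method names are hypothetical): clients implement it, the server discovers the implementation from the archive descriptor, and the server calls it directly, with no per-call field reflection.

    // Hypothetical extension point: the contract clients implement and deploy.
    public interface BusinessModule {
        // Called by the server for each unit of client business logic.
        void execute(Object payload);
    }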
for example there is a method which accepts any object, then searches for a certain field which has been marked by an annotation and does some business with it, so if we call this method 1000 times with one same object, this is going to reflect 1000 times.
In this case I suggest using caching. Once reflection has run for a class, you know the class and the annotated field. Store them in a HashMap with the Class as the key and the reflected Field as the value.
The next time you invoke "the method", check the cache first.
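A minimal sketch of such a cache, assuming a hypothetical @Important marker annotation standing in for whatever your framework actually scans for:

    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.reflect.Field;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical marker annotation; substitute your framework's own.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Important {}

    public final class AnnotatedFieldCache {
        private static final Map<Class<?>, Field> CACHE = new ConcurrentHashMap<>();

        // Reflects over a class only on the first call; later calls hit the map.
        public static Field annotatedField(Class<?> type) {
            return CACHE.computeIfAbsent(type, t -> {
                for (Field f : t.getDeclaredFields()) {
                    if (f.isAnnotationPresent(Important.class)) {
                        f.setAccessible(true);
                        return f;
                    }
                }
                throw new IllegalArgumentException("No annotated field on " + t);
            });
        }
    }

With this, calling the method 1000 times with the same object costs one reflective scan plus 999 map lookups.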
I am working with several external APIs in business code that will be used by several developers who do not have the deep knowledge required to build meaningful queries against those APIs.
Those APIs retrieve data. For example, you can retrieve entities either by their Key (direct access) or by their Group (which lists the available entities). But if you choose to query by Key you have to provide an id, and if you query by Group you have to provide a groupName.
The APIs are bigger and more complex than that, with many possible use cases. The main constraints are that:
Some parameters require the presence of other parameters
Some parameters combined with other parameters produce no data at best, wrong data at worst.
I would love to fix the underlying APIs, but they are outside our scope.
I think it might be good to wrap those APIs a bit and produce an APIService that can handle request(APIQuery query).
The basic thing I could do is put checks in the code so that no developer instantiates an APIQuery with missing/incoherent parameters; however, that would only surface as a runtime error. I would love for developers to know, while building their request, what they can and cannot do.
My two questions are:
Is there an extensible, builder-like way to defer the responsibility of building to the object itself? Having one constructor per valid query is not a good solution, as there are many variables and "unspoken rules" here.
Is this even a good idea? Am I trying to over-engineer?
I'll answer your second question first:
Is this even a good idea? Am I trying to over-engineer?
The answer is an uncomfortable "it depends". It depends how bad the pain is, it depends how crucial it is to get this right. It depends on so many factors that we can't really tell.
And to your first question: is this possible?
Yes, a builder pattern can be extended to return specific builders when certain methods are called, but this can become complicated, and misuse is possible.
For your specific example I'd make the QueryBuilder simply have two methods:
a byGroup method that takes a group value to filter on and returns a GroupQueryBuilder
a byKey method that takes a key value to filter on and returns a KeyQueryBuilder.
Those two classes can then have methods that are distinct to their respective queries, and possibly extend a shared base class that provides common properties.
And their respective build methods could either return an APIQuery or distinct APIQueryByGroup/APIQueryByKey classes, whichever is more useful for you.
This can become far more complicated if you have multiple axes along which queries can differ, and at a certain point it becomes very hard to map that onto types.
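A minimal sketch of the two-branch builder described above, with hypothetical class names (the real builders would carry the service-specific options):

    public final class QueryBuilder {

        public static GroupQueryBuilder byGroup(String groupName) {
            return new GroupQueryBuilder(groupName);
        }

        public static KeyQueryBuilder byKey(String id) {
            return new KeyQueryBuilder(id);
        }

        public static final class GroupQueryBuilder {
            private final String groupName;
            private GroupQueryBuilder(String groupName) { this.groupName = groupName; }
            // Only options valid for group queries appear here.
            public APIQuery build() { return new APIQuery("byGroup", groupName); }
        }

        public static final class KeyQueryBuilder {
            private final String id;
            private KeyQueryBuilder(String id) { this.id = id; }
            // Only options valid for key queries appear here.
            public APIQuery build() { return new APIQuery("byKey", id); }
        }

        public static final class APIQuery {
            final String kind;
            final String value;
            APIQuery(String kind, String value) { this.kind = kind; this.value = value; }
        }
    }

Usage is then QueryBuilder.byGroup("someGroup").build(); an incoherent combination such as a group query with an id simply does not compile.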
I am trying to figure out the best solution for a use case I'm working on, and I'd appreciate getting some architectural advice from you guys.
I have a use case where the frontend should display a list of users assigned to a task and a list of users who are not assigned but able to be assigned to the same task.
I don't know which is the better solution:
have one backend call which collects both lists of users and sends them back to the frontend within a new data class containing both lists.
have two backend calls, each of which collects one of the two lists and sends it back separately.
The first solution's pro is the single backend call whereas the second solution's pro is the reusability of the separate methods in the backend.
Any advice on which solution to prefer and why?
Is there any pattern or standard I should get familiar with?
When I stumble across the requirement to get data from a server, I start with a single call for, more or less (depending on the problem domain), a single feature (which here would be your task-user list).
This approach saves implementation complexity on the client side and saves protocol overhead per transaction (TCP headers, etc.).
If performance analysis shows that the call is too slow because it requests too much data (and user experience suffers), then I would go with your second solution.
Summed up: I would start with the first approach and optimize (move to the more complex solution) only when necessary.
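For the first option, the "new data class containing both lists" can be as small as this sketch (hypothetical names, Java 16+ records):

    import java.util.List;

    // Hypothetical minimal user type.
    record User(long id, String name) {}

    // One payload carrying both lists, so the frontend needs a single call.
    record TaskUserLists(List<User> assigned, List<User> assignable) {}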
I'd prefer the two calls because of the reusability. Maybe one day you will need to add a third list of users for some case; with a single method you would have to change that method, while other use cases that only require the two lists would be affected as well, so you would need to change code there too, along with all your tests. As your project grows, this makes it hard to update or fix, and every modification increases the chance of introducing new bugs.
It helps to see the methods the backend exposes to the frontend as an interface.
In general, an interface should be open for extension but closed regarding what its methods require and return, as otherwise a slight modification leads to many more modifications.
I am building a service that depends on another service, a typical service-oriented architecture. The service I depend on exposes an API and data types. I am unsure whether I should convert the object types exposed by that service into types that only my service understands. I do expect their service to change over time, as these are two different services. I have two options:
Directly use those data types in my service and pass them to my methods.
Transform them into data types which only my service understands (the objects will look exactly the same if I do this, with zero changes).
I have tried to answer these questions but still could not make the final call. I need help making this decision.
Why should I have encapsulated/transformed types?
To avoid rebuilding my service every time they make changes to theirs.
To prevent widespread changes (adapter pattern): changes to the wire format would require changes only to the encapsulating classes.
Why should I not have the types encapsulated?
The classes will look exactly the same as the wire-format classes (needless effort to maintain extra classes).
As I understand it, the impact will be the same with either approach. Help?
I am no architect or SOA specialist, so excuse me if I say anything stupid :-)
But I really think the way to go here is to keep your services simple.
In your shoes, I'd just use the existing API directly. I would not spend any time wrapping or adapting the methods into another API; your second service's business logic should take care of this conversion, IMO, unless you're being forced to do something really expensive with the existing API.
Remember that services are mutable. They're software: they have bugs, business logic changes as time goes on, and you'll have to change the API while sometimes keeping older methods compatible for other service consumers. You probably don't want to maintain two APIs that provide the same information without a good practical reason; not for twice the maintenance work.
Creating another API just to adapt the data format sounds to me a little like that old "DTOs are evil" flame war. And I think very few people write about the advantages of using DTOs nowadays :-)
This is a somewhat opinion-based question, so my opinion is that you should make your own data types, so that your code knows what should be contained in which variable.
I think of a service as a data provider: it accepts certain requests, fulfills our needs, and in return may give us some data. The role of a service is just to provide services to clients.
It should be the responsibility of the client to accept the data returned by the service and store it in an appropriate data structure; there can be n different clients for a single service, with n different requirements, which may lead them to design client-specific data structures to hold the data.
Also, as you said, the service is subject to change over time, so if you make your own data structures, you will only need to make changes in one place, and the rest of your code stays safe.
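A minimal sketch of the adapter idea under discussion, with hypothetical WireUser (their type) and User (your type): if the wire format changes, only fromWire() has to change.

    public final class UserAdapter {

        // The other service's type, as exposed by their API (assumption).
        public record WireUser(String id, String displayName) {}

        // Your service's own type; identical today, free to diverge tomorrow.
        public record User(String id, String name) {}

        public static User fromWire(WireUser w) {
            return new User(w.id(), w.displayName());
        }
    }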
Here's the scenario. As a creator of publicly licensed, open source APIs, my group has created a Java-based web user interface framework (so what else is new?). To keep things nice and organized, as one should in Java, we have used packages with the naming convention org.mygroup.myframework.x, with the x being things like components, validators, converters, utilities, and so on (again, what else is new?).
Now, somewhere in class org.mygroup.myframework.foo.Bar there is a method void doStuff() that performs logic specific to my framework, and I need to be able to call it from a few other places in my framework, for example from org.mygroup.myframework.far.Boo. Given that Boo is neither a subclass of Bar nor in the same package, the method doStuff() must be declared public to be callable by Boo.
However, my framework exists as a tool to allow other developers to create simpler, more elegant RIAs for their clients. But if com.yourcompany.yourapplication.YourComponent calls doStuff(), it could have unexpected and undesirable consequences. I would prefer that this never be allowed to happen. Note that Bar contains other methods that are genuinely public.
In an ivory-tower world, we would rewrite the Java language and insert a tokenized analogue of default access that would allow any class in a package structure of our choice to access my method, perhaps looking similar to:
[org.mygroup.myframework.*] void doStuff() { .... }
where the wildcard would mean any class whose package begins with org.mygroup.myframework can call, but no one else.
Given that this world does not exist, what other good options might we have?
Note that this is motivated by a real-life scenario; names have been changed to protect the guilty. There exists a real framework where, peppered throughout its Javadoc, one will find public methods commented as "THIS METHOD IS INTERNAL TO MYFRAMEWORK AND NOT PART OF ITS PUBLIC API. DO NOT CALL!!!!!!" A little research shows these methods are called from elsewhere within the framework.
In truth, I am a developer using the framework in question. Although our application is deployed and is a success, my team experienced so many challenges that we want to convince our bosses to never use this framework again. We want to do this in a well thought out presentation of the poor design decisions made by the framework's developers, and not just as a rant. This issue would be one (of several) of our points, but we just can't put a finger on how we might have done it differently. There has already been some lively discussion here at my workplace, so I wondered what the rest of the world would think.
Update: No offense to the two answerers so far, but I think you've missed the mark, or I didn't express it well. Either way, allow me to try to illuminate things. Put as simply as I can: how should the framework's developers have refactored the following? Note this is a really rough example.
    package org.mygroup.myframework.foo;

    public class Bar {

        /** Adds a Bar component to the application UI. */
        public boolean addComponentHTML() {
            // Code that adds the HTML for a Bar component to a UI screen.
            // I need users of my framework to be able to call this method,
            // so they can actually add a Bar component to their application's UI.
            return true; // returns true if successful
        }

        /** Not really public, do not call. */
        public void doStuff() {
            // Code that performs logic internal to my framework.
            // If other users call it, Really Bad Things could happen!
            // But it must be public so org.mygroup.myframework.far.Boo can call it.
        }
    }
Another update: I just learned that C# has the "internal" access modifier, so perhaps a better way to phrase this question would have been, "How do you simulate/emulate internal access in Java?" Nevertheless, I am not in search of new answers; our boss ultimately agreed with the concerns mentioned above.
You get closest to the answer when you mention the documentation problem. The real issue isn't that you can't "protect" your internal methods; rather, it is that the internal methods pollute your documentation and introduce the risk that a client module calls an internal method by mistake.
Of course, even if you did have fine-grained permissions, you still wouldn't be able to prevent a client module from calling internal methods; the JVM doesn't protect against reflection-based calls to private methods anyway.
The approach I use is to define an interface for each problematic class, and have the class implement it. The interface can be documented solely in terms of client modules, while the implementing class can carry whatever internal documentation you desire. You don't even have to include the implementation javadoc in your distribution bundle if you don't want to, but either way the boundary is clearly demarcated.
As long as you ensure that at runtime only one implementation is loaded per documentation interface, a modern JVM will guarantee that you suffer no performance penalty for using it; and you can load harness/stub versions during testing as an added bonus.
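A minimal sketch of this approach, reusing the question's hypothetical names: clients compile against Bar only, while the framework holds BarImpl references and may call doStuff() freely.

    // The documented, client-facing contract.
    public interface Bar {
        /** Adds a Bar component to the application UI. */
        boolean addComponentHTML();
    }

    // The framework-side implementation; its javadoc can stay internal.
    class BarImpl implements Bar {
        @Override
        public boolean addComponentHTML() {
            // Framework code that renders the component.
            return true;
        }

        /** Internal; absent from the Bar interface clients compile against. */
        public void doStuff() {
            // Internal framework logic.
        }
    }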
The only idea I can think of to supply this missing "framework-level access modifier" is CDI and a better design.
If you have to use a method from very different classes and packages in various (but few) situations, there will almost certainly be a way to redesign those classes so that those methods can be made "private" and inaccessible.
There is no support in the Java language for this kind of access level (you would want something like "internal", scoped to a namespace). You can only restrict access to package level (or use the well-known public/protected/private model).
From my experience, you can use the Eclipse convention: create a package called "internal"; the entire class hierarchy under it (including sub-packages) is considered non-API code and may change at any time with no guarantees for your users. For example, internal helpers would live in org.mygroup.myframework.internal or org.mygroup.myframework.foo.internal. In that non-API code, use public methods wherever you like. Since this is only a convention and is not enforced by the JVM or the Java compiler, you cannot prevent users from using the code, but at least you let them know that these classes were not meant to be used by third parties.
By the way, in the Eclipse platform source code there is a complex plugin model that keeps you from using other plugins' internal code, implemented with a custom class loader per plugin that refuses to load classes that should be "internal" to those plugins.
Interfaces and dynamic proxies are sometimes used to make sure you only expose the methods you want to expose.
However, that comes at a fairly hefty performance cost if your methods are called very often.
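A minimal sketch of the dynamic-proxy variant, assuming a Bar interface and BarImpl class like those sketched in the earlier answer: the object handed to clients can only ever expose the interface's methods.

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Proxy;

    public final class BarProxyFactory {
        public static Bar expose(BarImpl impl) {
            // Forward every interface call to the real implementation.
            InvocationHandler handler = (proxy, method, args) -> method.invoke(impl, args);
            return (Bar) Proxy.newProxyInstance(
                    Bar.class.getClassLoader(), new Class<?>[] { Bar.class }, handler);
        }
    }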
Using the @Deprecated annotation might also be an option; although it won't stop external users from invoking your "framework private" methods, they can't say they weren't warned.
In general, I don't think you should worry too much about your users deliberately shooting themselves in the foot, as long as you have made it clear to them what they shouldn't use.
I've always wanted to write a simple world in Java that I could leave running and then add new objects to at a later date (objects that didn't exist when the world started running), to simulate/observe the different behaviours of future objects.
The problem is that I don't ever want to stop or restart the world once it's started. I want it to run for a week without having to recompile it, while keeping the ability to drop in objects and redo/rewrite/delete/create/mutate them over time.
The world could be as simple as a 10 x 10 array of x/y 'locations' (think chessboard), but I guess it would need some kind of tick-timer process to monitor the objects and give each one (if any) a chance to 'act' (if it wants to).
Example: I code up World.java on Monday and leave it running. Then on Tuesday I write a new class called Rock.java (which doesn't move). I drop it (somehow) into the already running world, which places it somewhere random in the 10x10 array, where it never moves.
Then on Wednesday I create a new class called Cat.java and drop that into the world, again placed randomly, but this new object can move around the world (over some unit of time). On Thursday I write a class called Dog.java, which also moves around but can 'act' on another object in a neighbouring location, and vice versa.
Here's the thing: I don't know what kind of structure/design the actual world class would need in order to detect/load/track future objects.
So, any ideas on how you would do something like this?
I don't know if there is a pattern/strategy for a problem like this, but this is how I would approach it:
I would have all of these different classes you are planning to make be objects of some common class (maybe a WorldObject class), and put their differentiating features in separate configuration files.
Creation
While your program is running, it would routinely check a configuration folder for new items. If it sees that a new config file exists (say Cat.config), it would create a new WorldObject, give it the features it reads from the Cat.config file, and drop that new object into the world.
Mutation
If your program detects that one of these items' configuration files has changed, it finds that object in the world, edits its features, and redisplays it.
Deletion
When the program looks in the folder and sees that a config file no longer exists, it deletes the corresponding object from the world and checks how that affects all the other objects.
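A minimal sketch of the watching loop using java.nio.file.WatchService, assuming a hypothetical "config" directory; the create/modify/delete branches would call into the world as described above.

    import java.nio.file.*;

    public class ConfigWatcher {
        public static void main(String[] args) throws Exception {
            Path dir = Paths.get("config"); // hypothetical config folder
            WatchService watcher = FileSystems.getDefault().newWatchService();
            dir.register(watcher,
                    StandardWatchEventKinds.ENTRY_CREATE,
                    StandardWatchEventKinds.ENTRY_MODIFY,
                    StandardWatchEventKinds.ENTRY_DELETE);
            while (true) {
                WatchKey key = watcher.take(); // blocks until something changes
                for (WatchEvent<?> event : key.pollEvents()) {
                    Path file = (Path) event.context();
                    // Create, mutate or delete the matching WorldObject here.
                    System.out.println(event.kind() + ": " + file);
                }
                key.reset();
            }
        }
    }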
I wouldn't bet too much on the JVM itself running forever. There are too many ways this could fail (computer trouble, unexpected out-of-memory conditions, permgen problems due to repeated class loading).
Instead, I'd design a system that can reliably persist the state of each object involved (simplest approach: make each object serializable, though that does not really solve versioning problems).
So as the first step, I'd simply implement some nice classloader magic to allow jars to be "dropped" into the world simulation and loaded dynamically. But once you reach a point where that no longer works (because you need to modify the World itself, or need to make incompatible changes to some object), you can persist the state, swap the libraries for new versions, and reload the state.
Being able to persist the state also allows you to easily produce test scenarios or replay scenarios with different parameters.
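A minimal sketch of that persistence step, assuming the whole world object graph is Serializable: save a snapshot before swapping libraries, load it afterwards.

    import java.io.*;
    import java.nio.file.*;

    public final class WorldSnapshot {
        public static void save(Serializable world, Path file) throws IOException {
            try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(file))) {
                out.writeObject(world);
            }
        }

        public static Object load(Path file) throws IOException, ClassNotFoundException {
            try (ObjectInputStream in = new ObjectInputStream(Files.newInputStream(file))) {
                return in.readObject();
            }
        }
    }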
Have a look at OSGi; this framework allows installing and removing packages at runtime.
The framework is a container for so-called bundles: Java libraries with some extra configuration data in the jar's manifest file.
You could install a "world" bundle and keep it running. Then, after a while, install a bundle that contributes rocks or sand to the world. If you don't like it anymore, disable it. If you need other rocks, install an updated version of the very same bundle and activate it.
And with OSGi, you can keep the world spinning and moving around the sun.
The reference implementation is Equinox.
BTW, regarding "I don't know what kind of structure/design": at the very least you need to define an interface for a "geolocatable object", otherwise you won't be able to place and display it. But for the "world" it may really be enough to know that "there is something at coordinates x/y/z", and for the world viewer, that this "something" has a method to display itself.
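For illustration, a minimal sketch of what a "rocks" bundle's activator might look like; the Rock class and the service type the world looks up are assumptions here.

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class RockActivator implements BundleActivator {
        @Override
        public void start(BundleContext context) throws Exception {
            // Register the rock so the running world bundle can discover it.
            context.registerService(Object.class, new Object() /* a Rock */, null);
        }

        @Override
        public void stop(BundleContext context) throws Exception {
            // Services registered by this bundle are unregistered automatically.
        }
    }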
If you only care about adding classes (and not modifying them), here is what I'd do (see the sketch after this list):
there is an interface Entity with all the business methods you need (insertIntoWorld(), isMovable(), getName(), getIcon(), etc.)
there is a specific package where entities reside
there is a scheduled job in your application which lists the class files of that package every 30 seconds
keep track of the classes, and for any new class, attempt to load it and cast it to Entity
for any newly loaded Entity, create a new instance and call its insertIntoWorld().
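A minimal sketch of the last three steps, assuming (for simplicity) that compiled, package-less entity classes are dropped into a directory on disk:

    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.*;
    import java.util.HashSet;
    import java.util.Set;

    public class EntityScanner {

        // Stand-in for the Entity interface described above.
        public interface Entity { void insertIntoWorld(); }

        private final Set<String> known = new HashSet<>();

        public void scan(Path classDir) throws Exception {
            // A fresh loader per scan keeps newly dropped classes visible.
            URLClassLoader loader = new URLClassLoader(new URL[] { classDir.toUri().toURL() });
            try (DirectoryStream<Path> files = Files.newDirectoryStream(classDir, "*.class")) {
                for (Path file : files) {
                    String name = file.getFileName().toString().replace(".class", "");
                    if (!known.add(name)) continue; // already loaded
                    Class<?> cls = loader.loadClass(name);
                    if (Entity.class.isAssignableFrom(cls)) {
                        Entity e = (Entity) cls.getDeclaredConstructor().newInstance();
                        e.insertIntoWorld();
                    }
                }
            }
        }
    }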
You could also skip the scheduler and automatic discovery and instead have a UI control in the World from which you could specify the class name to be loaded.
Some problems:
you cannot easily update an Entity; you'll most probably need to do some classloader magic
you cannot extend the Entity interface to add new business methods, so you are bound to the contract you started your application with
That is a long explanation for a simple problem.
In other words, you just want to perform dynamic class loading.
First, if you somehow know the class name, you can load the class using Class.forName(). That is the way to get the class itself. Then you can instantiate it using Class.newInstance(); if your class has a public default constructor, that is enough. For more details, read about the reflection API.
But how do you pass the name of the new class to a program that is already running?
I'd suggest two ways.
The program may poll a predefined file. When you wish to deploy a new class, you register it, i.e. write its name into this file. Additionally, the class has to be available on the classpath of your application.
The application may poll (for example) a special directory containing jar files. Once it detects a new jar file, it may read its content (see JarInputStream), define the new classes using ClassLoader.defineClass(), then call newInstance(), etc.
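A minimal sketch of the first option: read class names from a hypothetical registry file and instantiate whatever appears there (the classes themselves must already be on the application's classpath).

    import java.nio.file.*;

    public class ClassRegistryPoller {
        public static void main(String[] args) throws Exception {
            Path registry = Paths.get("new-classes.txt"); // hypothetical registry file
            for (String name : Files.readAllLines(registry)) {
                Class<?> cls = Class.forName(name.trim());
                Object obj = cls.getDeclaredConstructor().newInstance();
                System.out.println("Instantiated " + obj.getClass().getName());
            }
        }
    }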
What you're basically creating here is called an application container. Fortunately, there's no need to reinvent the wheel; there are already great pieces of software out there designed to stay running for long periods while executing code that changes over time. My advice would be to pick your IDE first, which will go some way toward deciding which app container you should use (some are better integrated than others).
You will need a persistence layer; the JVM is reliable, but eventually someone will trip over the power cord and wipe your world out. Again, with JPA et al. there's no need to reinvent the wheel here either. Hibernate is probably the 'standard', but with your requirements I'd try something a little fancier, like one of the graph-based NoSQL solutions.
What you probably want to have a look at is the "dynamic object model" pattern/approach. I implemented it some time ago. With it you can create/modify object types at runtime; the types serve as templates for objects. Here is a paper that describes the idea:
http://hillside.net/plop/plop2k/proceedings/Riehle/Riehle.pdf
There are more papers, but I was not able to post them because this is my first answer and I don't have enough reputation. But Google is your friend :-)