I'm working on a Java application that should allow users to optimize their daily schedule. For that, I need a framework that helps calculate optimal times for "tasks", taking into account:
Required resources and resource usage limits
Dependencies between tasks (can do with only F->S relations though)
Earliest and latest start-finish times, slack times
Baseline vs. actual times - allowing users to report actual start and finish times, with the rest of the tasks updated accordingly
Some clarifications: I am not looking for a framework to draw these Gantt charts, nor a framework that deals with one specific problem domain (such as classrooms), and definitely not a framework that deals with thread scheduling.
Thanks!
I don't think there is a framework that will suit your needs out of the box. I know you said you're not looking for a job/thread scheduler, but I think your best bet is probably to roll your own optimization/prioritization code around a "dumb" job/thread scheduling framework like Quartz (or whatever you have in place). If you go with Quartz, the API can probably provide you with some information useful for items 3 and 4 of your optimization criteria. Additionally, Quartz has a job "priority" concept, so once you've computed the optimized priority, it should make scheduling the execution easy.
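If it helps to see what the Quartz side of that would look like, here is a minimal sketch (the job class, names and the priority value are made up; the priority itself would come out of your own optimization code):

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class OptimizedScheduling {

    // hypothetical job that performs one of the user's tasks
    public static class MyTaskJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // do the actual work here
        }
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();

        JobDetail job = JobBuilder.newJob(MyTaskJob.class)
                .withIdentity("task-42", "daily-plan")
                .build();

        // priority computed by your own optimization code;
        // higher wins when several triggers fire at the same time
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("task-42-trigger", "daily-plan")
                .withPriority(7)
                .startAt(DateBuilder.futureDate(30, DateBuilder.IntervalUnit.MINUTE))
                .build();

        scheduler.scheduleJob(job, trigger);
    }
}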
If you do find a framework that does what you ask, please post back here -- I'm sure there are others who could use something similar.
You could look at project management software. It seems you need it written in Java with the ability to modify the code. That really narrows down the list, but I made a quick scan and I see at least two that could help (Endeavour and Project.net).
Perhaps what you need is something like an evolutionary/genetic algorithm to generate an optimized schedule?
If yes, you may have a look at this Watchmaker Framework:
http://watchmaker.uncommons.org/
With an evolutionary/genetic algorithm, it randomly generates a pool of schedules. Your main focus will be defining the scoring criteria used to evaluate each generated schedule. Then let the generated schedules evolve from generation to generation until one is optimal enough for you.
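To give a rough idea of the scoring part, here is a sketch against Watchmaker's FitnessEvaluator interface; the Schedule type and its penalty methods are hypothetical stand-ins for your own domain model:

import java.util.List;

import org.uncommons.watchmaker.framework.FitnessEvaluator;

// hypothetical domain type - replace with your own schedule representation
interface Schedule {
    double resourceOverusePenalty();      // resource limits violated
    double dependencyViolationPenalty();  // finish-to-start order broken
    double totalDurationPenalty();        // prefer tighter schedules
}

public class ScheduleEvaluator implements FitnessEvaluator<Schedule> {

    @Override
    public double getFitness(Schedule candidate, List<? extends Schedule> population) {
        // sum up penalties: the lower the score, the better the schedule
        return candidate.resourceOverusePenalty()
                + candidate.dependencyViolationPenalty()
                + candidate.totalDurationPenalty();
    }

    @Override
    public boolean isNatural() {
        // false means lower scores are better (we return a penalty, not a reward)
        return false;
    }
}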
As far as I know, the Stream API is intended to be applied to collections. But I like the idea of streams so much that I try to apply them both when I can and when I shouldn't.
Originally my app had two threads communicating through a BlockingQueue. The first would populate new elements; the second would transform them and save them to disk. That looked like a perfect stream opportunity to me at the time.
Code I ended up with:
Stream.generate().flatMap().filter().forEach()
I'd like to put a few maps in there, but it turns out I have to drag one additional field along until forEach. So I either have to create a meaningless class with two fields and an obscure name, or use AbstractMap.SimpleEntry to carry both fields through, neither of which looks like a great deal to me.
Anyway, I've rewritten my app and it even seems to work. However, there are some caveats. As I have an infinite stream, 'the thing' can't be stopped. For now I'm starting it on a daemon thread, but this is not a solution. Business logic (like handling connection loss/recovery, though this is probably not BL) looks alienated. Maybe I just need a proxy for this.
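For what it's worth, here is a sketch of how such a queue-backed pipeline can be given a stop condition with a sentinel ("poison pill") element; the names are made up, and takeWhile needs Java 9+, so on Java 8 you would need a different trick:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.stream.Stream;

public class QueuePipeline {

    private static final String POISON_PILL = "__STOP__";

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.put("one");
        queue.put("two");
        queue.put(POISON_PILL); // the producer signals shutdown

        Stream.generate(() -> {
                    try {
                        return queue.take();          // blocks until an element arrives
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return POISON_PILL;           // treat interruption as shutdown
                    }
                })
                .takeWhile(s -> !POISON_PILL.equals(s)) // ends the otherwise infinite stream
                .map(String::toUpperCase)               // transformation step
                .forEach(System.out::println);          // persistence would go here
    }
}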
On the other hand, there is free laziness with queue population, and one thread instead of two (not sure how good this is). Hopefully it's a familiar pattern for other developers.
So my question is: how viable is using the Stream API for organising application flow? Are there more underwater rocks? If it's not recommended, what are the alternatives?
I don't understand why one shouldn't use TypedActors in Akka. Using reflection (well... instanceof) to compensate for the lack of pattern matching in Java is quite ugly.
As far as I understand, TypedActors should be like a gate between the "Akka world" and the "non-Akka world" of your software. But why would we just throw away all OO principles and use reflection!
Why wouldn't you want to use an actor and know exactly what it should respond to? Or for Akka's sake of keeping the actor model, why not create a message hierarchy that uses double-dispatch in order to activate the right method in the actor (and I know you shouldn't pass Actors as parameters and use ActorRef instead).
DISCLAIMER: I'm new to Akka and this model, and I haven't written a single line of code using Akka, but just reading the documentation is giving me a headache.
Before we get started: the question is about the deprecated "typed actors" module, which will soon be replaced with akka-typed, a far superior take on the problem that avoids the shortcomings explained below - please do have a look at akka-typed if you're interested in typed actors!
I'll enumerate a number of downsides of using the typed actors implementation you refer to. Please do note, however, that we have just merged a new akka-typed module, which brings type safety back to the world of Akka actors. For the sake of this post, I will not go in depth into the reasons developing the typed version was such a tough challenge; for now, let's answer the question of "why not use the (old) typed actors".
Firstly, they were never designed to be the core of the toolkit. They are built on top of the messaging infrastructure Akka provides. Please note that thanks to that messaging infrastructure we're able to achieve location transparency, and Akka's well-known performance. They heavily use reflection and JDK proxies to translate between method calls and message sends. This is very expensive (time-wise), and degrades performance around 10-fold compared to plain Akka Actors; see below for a "ping pong" benchmark (implemented using both styles, sender tells to actor, actor replies - 100,000 times):
Unit = ops/ms
Benchmark Mode Samples Mean Mean error Units
TellPingPongBenchmark.tell_100000_msgs thrpt 20 119973619.810 79577253.299 ops/ms
JdkProxyTypedActorTellPingPongBenchmark.tell_100000_msgs thrpt 20 16697718.988 406179.847 ops/ms
Unit = us/op
Benchmark Mode Samples Mean Mean error Units
TellPingPongBenchmark.tell_100000_msgs sample 133647 1.223 0.916 us/op
JdkProxyTypedActorTellPingPongBenchmark.tell_100000_msgs sample 222869 12.416 0.045 us/op
(Benchmarks are kept in akka/akka-bench-jmh and run using the OpenJDK JMH tool, via the sbt-jmh plugin.)
Secondly, using methods to abstract over distributed systems is just not a good way of going about it (oh, how I remember RMI... let's not go there again). Using such "looks like a method" makes you stop thinking about message loss, reordering and all the things which can and do happen in distributed systems. It also encourages (makes it "too easy to do the wrong thing") using signatures like def getThing(id: Int): Thing - which would generate blocking code - which is horrible for performance! You really do want to stay asynchronous and responsive, which is why you'd end up with loads of futures when trying to work properly with these (proxy based) typed actors.
Lastly, you basically lose one of the main Actor capabilities. The 3 canonical operations an Actor can perform are 1) send messages, 2) start child actors, 3) change its own behaviour based on received messages (see Carl Hewitt's original paper on the Actor Model). The 3rd capability is used to beautifully model state machines. For example you can say (in plain Akka actors) become(active) and then become(allowOnlyPrivileged) to switch between receive implementations - making finite state machine implementations (we also have a DSL for FSMs) a joy to work with. You cannot express this nicely in JDK-proxied typed actors, because you cannot change the set of exposed methods. This is a major downside once you get into thinking and modeling using state machines.
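To make the become point concrete, here is a small sketch in plain (untyped) Akka actors using the Java API - the class and message names are invented for illustration:

import akka.actor.AbstractActor;

public class DoorActor extends AbstractActor {

    private Receive open() {
        return receiveBuilder()
                .matchEquals("close", msg -> getContext().become(closed()))
                .matchEquals("pass", msg -> getSender().tell("welcome", getSelf()))
                .build();
    }

    private Receive closed() {
        return receiveBuilder()
                .matchEquals("open", msg -> getContext().become(open()))
                .matchEquals("pass", msg -> getSender().tell("the door is closed", getSelf()))
                .build();
    }

    @Override
    public Receive createReceive() {
        return closed(); // start in the "closed" state
    }
}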
A New Hope (Episode 1): Please do have a look at the upcoming akka-typed module authored by Roland Kuhn (a preview is to be included in the 2.4 release soon); I'm pretty sure you'll like what you'll find there, type-safety-wise. And also, that implementation will eventually be even faster than the current untyped actors (omitting impl details here as the answer got pretty long already - short version: basically we'll remove a load of allocations thanks to the new implementation).
I hope you'll enjoy this thorough answer. Feel free to ask follow up questions in comments here or on akka-user - our official mailing list. Happy Hakking!
Typed Actors provide you with a static contract defined in the terms of your domain - you can name their messages (which will be delegated to an underlying implementation and executed asynchronously) after actions which make sense in your domain, avoiding the use of reflection on your part (TypedActors use JDK Proxies under the hood, so there is still reflection going on, you just don't have to worry about it), and you gain type-checking in terms of the arguments passed to the active object/typed actor and its return types. The documentation is pretty clear on this, but I know that for those new to actor-based concurrency, additional examples always help, so feel free to ask additional questions/comments if you are still having trouble grokking the difference.
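For illustration, the proxy-based API looked roughly like this - a hedged sketch from memory of the old documentation, so treat the exact names as approximate:

import akka.actor.ActorSystem;
import akka.actor.TypedActor;
import akka.actor.TypedProps;

public class TypedActorSketch {

    // the static contract: callers only ever see this interface
    public interface Squarer {
        int square(int value); // returns a plain value, so the call blocks
    }

    public static class SquarerImpl implements Squarer {
        @Override
        public int square(int value) {
            return value * value;
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");

        // the returned object is a JDK proxy; each call becomes a message send
        Squarer squarer = TypedActor.get(system).typedActorOf(
                new TypedProps<>(Squarer.class, SquarerImpl.class));

        System.out.println(squarer.square(3)); // prints 9

        system.terminate();
    }
}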
But do you realize that there are a huge number of companies that don't have expert developers, but do have a big infrastructure to scale horizontally as much as needed? Performance isn't always the deciding factor; being responsive, message-driven, elastic and resilient is, and thanks to typed actors we get that even from developers who don't know anything about Akka or Reactive Programming.
Don't get me wrong, I use pure Akka Typed in my day-to-day work, but for delivery teams we have a framework that uses typed actors, and our consumers use it as POJOs without knowing that they are coding in a reactive system. And that's an awesome feature.
First of all, I have a very superficial knowledge of SAP. According to my understanding, they provide a number of industry specific solutions. The concept seems very interesting and I work on something similar for banking industry. The biggest challenge we face is how to adapt our products for different clients. Many concepts are quite similar across enterprises, but there are always some client-specific requirements that have to be resolved through configuration and customization. Often this requires reimplementing and developing customer specific features.
I wonder how efficient in this sense SAP products are. How much effort has to be spent in order to adapt the product so it satisfies specific customer needs? What are the mechanisms used (configuration, programming etc)? How would this compare to developing custom solution from scratch? Are they capable of leveraging and promoting best practices?
Disclaimer: I'm talking about the ABAP-based part of SAP software only.
Disclaimer 2, ref. PATRY's response: HR is quite a bit different from the rest of the SAP/ABAP world. I do feel rather competent as a general-purpose ABAP developer, but HR programming is so far off my personal beacon that I've never even tried to understand what they're doing there. %-|
According to my understanding, they provide a number of industry specific solutions.
They do - but be careful when comparing your own programs to these solutions. For example, IS-H (SAP for Healthcare) started off as an extension of the SD (Sales & Distribution) system, but has become very much more since then. While you could technically use all of the techniques they use for their IS, you really should ask a competent technical consultant before you do - there are an awful lot of pits to avoid.
The concept seems very interesting and I work on something similar for banking industry.
Note that a SAP for Banking IS already exists. See here for the documentation.
The biggest challenge we face is how to adapt our products for different clients.
I'd rather rephrase this as "The biggest challenge is to know where the product is likely to be adapted and to structurally prepare the product for adaptation." The adaptation techniques are well researched and easily employed once you know where the customer is likely to deviate from your idea of the perfect solution.
How much effort has to be spent in order to adapt the product so it satisfies specific customer needs?
That obviously depends on the deviation of the customer's needs from the standard path - but that won't help you. With a SAP-based system, you always have three choices. You can try to customize the system within its limits. Customizing basically means tweaking settings (think configuration tables, tens of thousands of them) and adding stuff (program fragments, forms, ...) in places that are intended to do so. Technology - see below.
Sometimes customizing isn't enough - you can develop things additionally. A very frequent requirement is some additional reporting tool. With the SAP system, you get the entire development environment delivered - the very same tools that all the standard applications were written with. Your programs can peacefully coexist with the standard programs and even use common routines and data. Of course you can really screw things up, but show me a real programming environment where you can't.
The third option is to modify the standard implementations. Modifications are like a really sharp two-edged kitchen knife - you might be able to cook really cool things in half of the time required by others, but you might hurt yourself really badly if you don't know what you're doing. Even if you don't really intend to modify the standard programs, it's very comforting to know that you could and that you have full access to the coding.
(Note that this is about the application programs only - you have no chance whatsoever to tweak the kernel, but fortunately, that's rarely necessary.)
What are the mechanisms used (configuration, programming etc)?
Configuration is mostly about configuration tables with more or less sophisticated dialog applications. For the programming part of customizing, there's the extension framework - see http://help.sap.com/saphelp_nw70ehp1/helpdata/en/35/f9934257a5c86ae10000000a155106/frameset.htm for details. It's basically a controlled version of dependency injection. As a solution developer, you have to anticipate the extension points, define the interface that has to be implemented by the customer code and then embed the call in your code. As a project developer, you have to create an implementation that adheres to the interface and activate it. The basic runtime system takes care of gluing the two programs together; you don't have to worry about that.
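If it helps to picture that mechanism in Java terms, here is a loose analogy (the real thing is ABAP; all names below are invented, and ServiceLoader merely stands in for the activation step):

import java.util.ServiceLoader;

// defined by the solution developer: the anticipated extension point
interface InvoicePostingExtension {
    void beforePosting(String invoiceId);
}

// the call site embedded in the standard code
class InvoicePoster {
    void post(String invoiceId) {
        // invoke every activated customer implementation before the standard work
        for (InvoicePostingExtension ext : ServiceLoader.load(InvoicePostingExtension.class)) {
            ext.beforePosting(invoiceId);
        }
        // ... standard posting logic ...
    }
}

// written by the project developer and "activated" via META-INF/services
class ClientSpecificCheck implements InvoicePostingExtension {
    @Override
    public void beforePosting(String invoiceId) {
        // client-specific validation goes here
    }
}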
How would this compare to developing custom solution from scratch?
IMHO this depends on how much of the solution is the same for all customers and how much of it has to be adapted. It's really hard to be more specific without knowing more about what you want to do.
I can only speak for the Human Resource component, but this is a component where there is a lot of difference between customers, based on a common need.
First, most of the time you set the value for a group, and then associate the object (person, location...) with a group depending on one or two values. This is akin to an indirection, and allows for great flexibility, as you can change the association for a given location without changing the others. In a few cases, there is a three-level indirection...
Second, there is a lot of customization that is nearly programming. Payroll or administrative operations are first-class examples of this. In the latter case, you get a table with the operation (hiring, for example), the event (creation, modification...), a code for the action (I for a test, F to call a function, O for a standard operation) and a text field describing the parameters of a function ("C P0001, begda, endda" to create a structure P0001 with default values).
Third, you can also use such a table to indicate a function or class (ABAP-OO) that will be dynamically called. You get a developer to create this function or class, and then indicate it in the table. This is a way to replace one piece of functionality with another, or to extend it. This is used extensively in the ESS/MSS (a rough Java analogy of this point is sketched below, after the last item).
Last, there are also extension points or files that you can modify. This is nearly the same as the previous one, except that you don't need to register the change: the file is always used (ZXPADU01/02 for HR modification of infotypes).
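As promised above, a rough Java analogy of the table-driven dynamic call from the third point (the real mechanism is ABAP's dynamic CALL METHOD; all names here are purely illustrative):

// the common contract every pluggable operation implements
interface PayrollOperation {
    void run(String personnelNumber);
}

class OperationDispatcher {
    // classNameFromTable would be read from the customizing table at runtime
    void dispatch(String classNameFromTable, String personnelNumber) throws Exception {
        Class<?> clazz = Class.forName(classNameFromTable);
        PayrollOperation op = (PayrollOperation) clazz.getDeclaredConstructor().newInstance();
        op.run(personnelNumber);
    }
}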
Hope this helps,
Guillaume PATRY
I am building an application in Java (with a jQuery frontend) that needs to talk to a third-party application. It needs to update the interface every two seconds at the most.
Would it be a good idea to use Comet? If so, how does it fit into the picture?
What other means/technologies can I use to make the application better?
The application will poll stock prices from a third-party app, write them to a database and then push them to the front end every second. For the polling, I have a timer that runs every second to call the third-party app for data. I then have to display it on the front end using JSP or something.
Well, at this point I'm not sure if I should use a servlet to write this out to the front end. What would you recommend? How should I go about it?
Is there any new technology that I can use instead of servlets?
I am also using Berkeley DB to store the data. Do you think it's a good option? What would be the drawbacks, if any, of using Berkeley?
I'm absolutely clueless, so any advice will be much appreciated.
Thanks!
Edit: I am planning to do this so that a desktop app constantly polls from the third party and writes to the database, and a web app only reads and displays from the database. This will reduce the load on the web app, since all it has to do is read from the DB.
Take a look at using a web application framework instead of Servlets - unless it's a really basic project with one screen. There are lots of them in the Java world, unfortunately, and it can be a bit of a minefield. Maybe stick with SpringMVC or Struts 2; the worst part is setting these up, but take a look at a sample application plus a tutorial or two and work from there.
http://www.springsource.org/about
http://struts.apache.org/2.x/index.html
Another option to look at is using a template framework such as Appfuse to get yourself up and running without having to integrate a lot of the framework together, see:
http://appfuse.org/display/APF/AppFuse+QuickStart
It provides you with a template to set up SpringMVC with MySQL as a database plus Spring as a POJO framework. It may be a quick way to get started and begin building a prototype.
Judging by your latency requirement of 2 seconds it would be wise to look at some sort of AJAX framework - JQuery or Prototype/Scriptaculous are both good places to start.
http://jquery.com/
http://www.prototypejs.org/
In terms of other technologies to make things better, you will want to consider a build system. Ant and Maven are both fine, with Maven the slightly more complex of the two.
http://ant.apache.org/
http://maven.apache.org/download.html
Also, consider JUnit for testing the application. You might want to consider Selenium for functional testing of the front end.
http://www.junit.org
http://seleniumhq.org/
Is this really a stock trading application? Or just a stock price display application? I am asking because from your description it sounds like the latter.
How critical is it that data is polled every second? Specifically would it matter if some polls are a second or two late?
If you are building a stock trading application (where the timing is absolutely critical), or if you cannot afford to be delayed on your polling, I'd recommend you have a look at one of the Java Real Time solutions:
Sun Java Real-Time System (http://java.sun.com/javase/technologies/realtime/index.jsp)
WebSphere Real Time (http://www-01.ibm.com/software/webservers/realtime/)
Oracle JRockit Real Time (http://download.oracle.com/docs/cd/E13150_01/jrockit_jvm/jrockit/docs30/index.html)
Other than that, my only advice is that you stick to good OO design practices. For instance, use a DAO to write to your database, this way, if you find that Berkeley DB isn't quite for you, you can switch to a relational database system with relative ease. It also makes it easy for you to move on to some database partitioning solutions (e.g., Hibernate Shards) if you decide you need it.
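A minimal sketch of what that DAO boundary might look like (the type and method names are hypothetical):

import java.util.Collections;
import java.util.List;

// simple value object for a quote - purely illustrative
class StockQuote {
    final String symbol;
    final double price;
    final long timestampMillis;

    StockQuote(String symbol, double price, long timestampMillis) {
        this.symbol = symbol;
        this.price = price;
        this.timestampMillis = timestampMillis;
    }
}

// the rest of the application only ever talks to this interface
interface QuoteDao {
    void save(StockQuote quote);
    List<StockQuote> findLatest(String symbol, int limit);
}

// one implementation for Berkeley DB today; a JDBC/Hibernate one could replace it later
class BerkeleyDbQuoteDao implements QuoteDao {
    @Override
    public void save(StockQuote quote) {
        // write the quote to the Berkeley DB store
    }

    @Override
    public List<StockQuote> findLatest(String symbol, int limit) {
        // read the most recent quotes for the symbol from the store
        return Collections.emptyList();
    }
}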
While I may have my own technology preferences (for instance, I'd choose Spring MVC for the front end as others have mentioned, I'd try and use Hibernate for persistance), I really cannot claim that these would be better than other technologies out there. Go with something you are familiar with, if it fits the bill.
I think you should focus on your architectural design before picking technologies with a focus on scalability and extendability. Once an architectural design is in place you can look to see what's available and what you need to build, all of which should be pretty obvious.
While not directly comparable look at how Google, eBay and YouTube deal with the scalability problems they face. While a trading system won't have the issues these guys have with sheer numbers of users, you'll get similar problems with data volumes and being able to process price ticks in a timely fashion.
The LSE has getting on for 3000 names, multiply this by the 10 or so popular exchanges round the world and you've got a lot of data being updated continuously over the period each market is open. To give you an idea of what involved in capturing data from a single exchange take a look at http://kx.com/.
From a database perspective you're going to need something industrial-strength that allows clustering and has reliable replication - for me this means Oracle. You also want to look at a time-series database design, which in my experience is the best way to build this sort of system.
The same scaling and reliability requirements will apply to your app servers, with JBoss being the logical choice there, although I'd also consider the OSGi Spring Server (http://www.springsource.com/products/dmserver) as its lightweight nature could make it faster.
You'll also want Apache servers for load balancing and to serve static content - a quick Google will turn up stacks of information on that so I won't repeat it here.
Also forget polling, it doesn't scale. Look at using messaging and consumer processes for the cross-process communication, events and worker threads for the in-process communication. Both techniques achieve a natural load balancing effect that can be tuned by increasing the number of consumer processes or worker threads as need be.
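As a sketch of the in-process half of that - a queue feeding a tunable pool of worker threads (the tick handling is just a placeholder):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class TickWorkers {

    private static final int WORKER_COUNT = 4; // tune this to scale throughput

    public static void main(String[] args) {
        BlockingQueue<String> ticks = new LinkedBlockingQueue<>();
        ExecutorService workers = Executors.newFixedThreadPool(WORKER_COUNT);

        for (int i = 0; i < WORKER_COUNT; i++) {
            workers.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        String tick = ticks.take(); // blocks until a tick arrives
                        // process and persist the tick here
                        System.out.println("processed " + tick);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt(); // exit the loop
                    }
                }
            });
        }
    }
}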
Also a static front-end isn't going to cut the mustard, IMHO. Take a look at what's out in the market already - CNC Markets, IG Index, etc all have pretty impressive real-time trading apps.
As an aside, assuming this is a commercial project and not meaning to put a downer on the whole thing, companies like CNC Markets, IG Index, etc make their money from trading fees, the software being a means to an end, which you get access to for free simply by having an account. The other target for the trading software is commercial institutions such as the banks, investment managers, etc. I'd want a pretty watertight plan for how I was going to break into either of these markets before expending too much time and effort.
PostgreSQL is probably the right database; it's a little more enterprisey than MySQL. As for the front end, there's lots of stuff that can go "on top" of servlets: SpringMVC, Tapestry, and so on. The actual servlet implementation will be hidden from you.
Many will suggest using Spring to configure the application and do any dependency injection, and it's probably not a bad suggestion.
If you're looking for something a little more lightweight, you might consider grails. It's quick to develop with and becoming mature.
Really though, it's kind of hard to recommend things without knowing what kind of "production" environment this would be. Are we talking lots of transactions? (sure, it's a stock trading program, but is it a simulation with a small number of users etc...) It's fun to suggest things, but if you're serious, I'm not sure I would start a major project like this. There are lots of ways to do this, and lots of ways to do this wrong.
Your intention is to build a web UI which shows realtime data eg: time, market data etc...
One of the technologies I have personally used is Web Firm Framework, an open-source framework under Apache License 2.0. It is a Java server-side framework for building web UIs. For each and every tag & attribute there is a corresponding Java class. We are just building the UI with Java code instead of pure HTML and JavaScript. The advantage is that whatever changes we make in the server-side tag & attribute objects will be reflected in the browser page without any explicit trigger from the client. In your case we can simply use ScheduledExecutorService to make data changes in the UI.
Eg:
AtomicReference<BigDecimal> oneUSDToOneGBPRef = new AtomicReference<>(new BigDecimal("0.77"));
SharedTagContent<BigDecimal> amountInBaseCurrencyUSD = new SharedTagContent<>(BigDecimal.ZERO);

Div usdToGBPDataDiv = new Div(null).give(dv -> {
    // the second argument is the formatter
    new Span(dv).subscribeTo(amountInBaseCurrencyUSD, content -> {
        BigDecimal amountInUSD = content.getContent();
        if (amountInUSD != null) {
            return new SharedTagContent.Content<>(amountInUSD.toPlainString(), false);
        }
        return new SharedTagContent.Content<>("-", false);
    });
    new Span(dv).give(spn -> {
        new NoTag(spn, " USD to GBP: ");
    });
    new Span(dv).subscribeTo(amountInBaseCurrencyUSD, content -> {
        BigDecimal amountInUSD = content.getContent();
        if (amountInUSD != null) {
            BigDecimal oneUSDToOneGBP = oneUSDToOneGBPRef.get();
            BigDecimal usdToGBP = amountInUSD.multiply(oneUSDToOneGBP);
            return new SharedTagContent.Content<>(usdToGBP.toPlainString(), false);
        }
        return new SharedTagContent.Content<>("-", false);
    });
});

amountInBaseCurrencyUSD.setContent(BigDecimal.ONE);

// just to test
// will print <div><span>1</span><span> USD to GBP: </span><span>0.77</span></div>
System.out.println(usdToGBPDataDiv.toHtmlString());

ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(1);
Runnable task = () -> {
    // dynamically get the USD to GBP exchange value
    oneUSDToOneGBPRef.set(new BigDecimal("0.77"));
    // to push the latest converted value to the browser
    amountInBaseCurrencyUSD.setContent(amountInBaseCurrencyUSD.getContent());
};
// repeat every second to keep the UI current
ScheduledFuture<?> scheduledFuture =
        scheduledExecutorService.scheduleAtFixedRate(task, 0, 1, TimeUnit.SECONDS);
// to cancel the realtime update
// scheduledFuture.cancel(false);
For displaying time in real time you can use SharedTagContent<Date> and ContentFormatter<Date> to show the time in a specific timezone. You can watch this video for better understanding. You can also download sample projects from this github repository.
I have a long-running set of discrete tasks: parsing tens of thousands of lines from a text file, hydrating them into objects, manipulating them, and persisting them.
If I were implementing this in Java, I suppose I might add a new task to an Executor for each line in the file or task per X lines (i.e. chunks).
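For reference, the Java approach described above would look roughly like this (the file name, chunk size and the per-line work are placeholders):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ChunkedImport {

    private static final int CHUNK_SIZE = 500;

    public static void main(String[] args) throws IOException, InterruptedException {
        List<String> lines = Files.readAllLines(Paths.get("input.txt"));
        ExecutorService executor = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        for (int start = 0; start < lines.size(); start += CHUNK_SIZE) {
            List<String> chunk = lines.subList(start, Math.min(start + CHUNK_SIZE, lines.size()));
            executor.submit(() -> {
                for (String line : chunk) {
                    // parse the line, hydrate an object, manipulate, persist
                }
            });
        }

        executor.shutdown();                          // no new tasks will be accepted
        executor.awaitTermination(1, TimeUnit.HOURS); // wait for all chunks to finish
    }
}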
For .Net, which is what I am using, I'm not so sure. I have a suspicion maybe CCR might be appropriate here, but I'm not familiar enough with it, which is why I pose this question.
Can CCR function in an equivalent fashion to Java Executors, or is there something else available?
Thanks
You may want to look at the Task Parallel Library.
As of C# 5 this is built into the language using the async and await keywords.
If you're going to ask a bunch of .NET people what's closest to being equivalent to Java Executors, it might not hurt to describe the distinguishing features of Java Executors. The person who knows your answer may not be any more familiar with Java than you are with .NET.
That said, if the already-mentioned Task Parallel Library is overkill for your needs, or you don't want to wait for .NET 4.0, perhaps ThreadPool.QueueUserWorkItem() would be what you're looking for.
Maybe this is related: Design: Task Parallel Library explored.
See 10-4 Episode 6: Parallel Extensions as a quick intro.
For older thread-based approach, there's ThreadPool for pooling.
The BackgroundWorker class is probably what you're looking for. As the name implies, it allows you to run background tasks, with automatically managed pooling, and status update events.
For anyone looking for a more contemporary solution (as I was), check out the EventLoopScheduler class.