I am trying to design a simple FIX message encoder and decoder to encode (convert to FIX) and decode (convert from FIX) my business domain Order objects. I have designed something, but I am not able to achieve the clean design I want, so I wanted to see whether others who have experience building this kind of thing have better design ideas.
This is roughly what I have: a business object Order and a QuickFIX Message object.
I need to generate NewOrder/Cancel/Replace messages and the message could be different for different exchanges.
I can have ReplaceEncoder --> NewOrderEncoder --> AbstractEncoder, CancelEncoder --> AbstractEncoder.
But if I want to add another dimension, such as custom message generation for different exchanges, it results in too many combinations of hierarchies.
Is my only option to tediously write different code for each exchange? How do others achieve this? Thanks.
I think you will probably come across a problem similar to ours: every FIX implementation is different. Some use 4.2, others 4.4; some use certain tags, others ignore them; some define many of their own tags, others very few. What we have done is create general FIX sessions, with subclasses for FIX 4.2 and 4.4, and then subclasses for each specific session (i.e. individual brokers). That gives us a reasonable amount of code reuse for sending and receiving FIX messages, with only the specifics changed for things like handling account names, passwords, etc.
For message generation we have a factory method that returns an adapter. All the adapters have the same API, which converts our business Order object into a FIX Message object. Of course, each adapter is specific to the API of the broker. We could probably reuse some code between the adapters, but currently we don't.
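A minimal sketch of that factory-plus-adapter shape (the Order fields, broker IDs, and class names here are made up for illustration):

```java
import quickfix.Message;

// Hypothetical stand-in for the business domain object.
class Order {
    String clOrdId;
    String symbol;
    // ...quantity, side, etc.
}

// Every adapter exposes the same API: domain object in, FIX Message out.
interface BrokerAdapter {
    Message encodeNewOrder(Order order);
    Message encodeCancel(Order order);
    Message encodeReplace(Order order);
}

// One adapter per broker; broker-specific tags live only here.
class BrokerAAdapter implements BrokerAdapter {
    public Message encodeNewOrder(Order order) {
        Message msg = new Message();
        msg.setString(11, order.clOrdId); // ClOrdID
        msg.setString(55, order.symbol);  // Symbol
        // ...plus whatever broker A's spec demands...
        return msg;
    }
    public Message encodeCancel(Order order)  { return new Message(); }
    public Message encodeReplace(Order order) { return new Message(); }
}

// Factory method: callers never know which concrete adapter they get.
final class AdapterFactory {
    static BrokerAdapter forBroker(String brokerId) {
        if ("BROKER_A".equals(brokerId)) {
            return new BrokerAAdapter();
        }
        throw new IllegalArgumentException("Unknown broker: " + brokerId);
    }
}
```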
Is my only option to tediously write different code for each exchange?
Certainly not. In a FIX message there are compulsory and non-compulsory fields. You cannot negotiate on the required fields, because then you could not guarantee the authenticity and completeness of the messages. Now, I am not saying this is impossible: many counterparties have their own specific user-level agreements with exchanges for their own specific messages.
With QuickFIX, the XML data dictionary, against which the engine confirms the completeness of messages, is in your hands. Tweak it for your own requirements. You will certainly have multiple sessions. I am not sure if this is possible, as I haven't tried it myself, but do different sessions allow different data dictionaries? If yes, then use them for different counterparties. If that isn't possible, one approach that comes to mind is to add extra code for processing your specific fields, not the whole message, in messages expected from certain counterparties.
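For what it's worth, QuickFIX/J reads its settings per session, so per-counterparty dictionaries look plausible. Here is a sketch of what such a configuration might look like (the CompIDs and dictionary file names are invented, and the behavior is worth verifying against your engine version):

```java
import java.io.ByteArrayInputStream;
import quickfix.SessionSettings;

public class PerSessionDictionaries {
    public static void main(String[] args) throws Exception {
        // Each [SESSION] block carries its own settings, including
        // (in principle) its own DataDictionary file.
        String cfg =
            "[DEFAULT]\n" +
            "ConnectionType=initiator\n" +
            "StartTime=00:00:00\n" +
            "EndTime=00:00:00\n" +
            "HeartBtInt=30\n" +
            "\n" +
            "[SESSION]\n" +                          // counterparty A
            "BeginString=FIX.4.2\n" +
            "SenderCompID=ME\n" +
            "TargetCompID=BROKER_A\n" +
            "DataDictionary=FIX42-broker-a.xml\n" +  // tweaked copy of FIX42.xml
            "\n" +
            "[SESSION]\n" +                          // counterparty B
            "BeginString=FIX.4.4\n" +
            "SenderCompID=ME\n" +
            "TargetCompID=BROKER_B\n" +
            "DataDictionary=FIX44-broker-b.xml\n";   // tweaked copy of FIX44.xml

        SessionSettings settings =
            new SessionSettings(new ByteArrayInputStream(cfg.getBytes()));
        System.out.println(settings);
    }
}
```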
One place where I worked, we used something along these lines: receive whatever version you may, but once the message is received, convert it into one specific version of FIX message that exists only inside your system. Your engine then basically reads only one FIX version of messages. The added complexity is that you have to code a converter; I am not sure how feasible that is for you.
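A sketch of that normalization idea, assuming quickfix.Message as the wire type and a made-up InternalOrderMessage as the single in-house representation:

```java
import quickfix.FieldNotFound;
import quickfix.Message;

// Hypothetical in-house representation; the engine only ever sees this.
class InternalOrderMessage {
    String clOrdId;
    String symbol;
    // ...whatever the engine needs, independent of FIX version...
}

// One converter per inbound FIX version/dialect; all produce the same type.
interface InboundConverter {
    InternalOrderMessage normalize(Message wireMessage) throws FieldNotFound;
}

class Fix42Converter implements InboundConverter {
    public InternalOrderMessage normalize(Message m) throws FieldNotFound {
        InternalOrderMessage out = new InternalOrderMessage();
        out.clOrdId = m.getString(11); // ClOrdID
        out.symbol  = m.getString(55); // Symbol
        return out;
    }
}
```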
FIX is an extraordinarily slippery protocol when it comes to message definitions.
In practice, every institution that offers a FIX interface has made modifications to the default message set. That means, for instance, a FIX 4.4 NewOrderSingle message from counterparty A may have different fields than one from counterparty B.
In fact, counterparty A may have made up some fields whole-cloth and added them in. For any new counterparty, there's a chance you'll encounter fields that you've never seen before.
I've written a few adapters for a few different exchanges, and unfortunately, you're really forced to handle them individually. You may be able to capitalize on some commonalities, but you can't make any assumptions on that until you've reviewed their FIX interface's specs.
So, short answer to your question:
Is my only option to tediously write different code for each exchange?
Yep, pretty much.
What we ended up doing was writing a base FIX layer that applies only the required FIX tags. In the FIX spec, certain tags are flagged as required for each message type.
Once this message has been created, we apply a filter to it that is specific to a broker and instrument type.
I.e., if you trade options and equities with Goldman and JPMorgan, you'd write the following filters:
Goldman-Equity
Goldman-Option
JPMorgan-Equity
JPMorgan-Option
Each would apply vendor- and instrument-specific fields to the base message.
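A sketch of that layering, assuming quickfix.Message and a couple of hypothetical filter classes keyed by broker and instrument type:

```java
import quickfix.Message;

// A filter decorates the base message (which carries only required tags)
// with vendor- and instrument-specific fields.
interface MessageFilter {
    void apply(Message baseMessage);
}

class GoldmanEquityFilter implements MessageFilter {
    public void apply(Message m) {
        // Example only: tag 1 is Account; real tags come from the broker spec.
        m.setString(1, "GS-EQ-ACCOUNT");
    }
}

class JpmOptionFilter implements MessageFilter {
    public void apply(Message m) {
        m.setString(1, "JPM-OPT-ACCOUNT");
    }
}

class BaseEncoder {
    Message encode(MessageFilter filter) {
        Message base = new Message();
        // ...populate only the tags the FIX spec flags as required...
        filter.apply(base); // then layer on the broker/instrument specifics
        return base;
    }
}
```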
I read the "The Dataflow Model: A Practical Approach to Balancing Correctness, Latency, and Cost in MassiveScale, Unbounded, Out of Order Data Processing" paper. Alas, the SDK does not yet expose the accumulating & retracting triggering mode (section 2.3).
I was wondering if there was a workaround for getting similar semantics?
I have been reading the source and have figured out that StateTag or StateNamespace may be how I can store the "last emitted value of the window", which can then be used to compute the retraction message further down the pipeline. Is this the correct path, or are there other classes/approaches I should look at?
The upcoming state API is indeed your best bet for emulating retractions. Those classes you mentioned are part of the state API, but everything in the com.google.cloud.dataflow.sdk.util package is for internal use only; we technically make no guarantees that the APIs won't change drastically, or even that they will be released at all. That said, releasing that API is on our roadmap, and I'm hopeful we'll get it released relatively soon.
One thing to keep in mind: all the code downstream of your custom retractions will need to be able to differentiate them from normal records. This is something we'll do automatically for you once bona fide retraction support is ready, but in the meantime, you'll just need to make sure all the code you write that might receive a retraction knows how to recognize and handle it as such.
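Until then, one lightweight way to make retractions self-describing is to wrap your element type, e.g. (a generic sketch, not a Dataflow API):

```java
// Every downstream transform receives Tagged<T> and can branch on
// isRetraction instead of guessing from the payload itself.
public class Tagged<T> implements java.io.Serializable {
    public final T value;
    public final boolean isRetraction;

    private Tagged(T value, boolean isRetraction) {
        this.value = value;
        this.isRetraction = isRetraction;
    }

    public static <T> Tagged<T> record(T value)     { return new Tagged<>(value, false); }
    public static <T> Tagged<T> retraction(T value) { return new Tagged<>(value, true); }
}
```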
I'm currently in the process of building a pretty large Akka-based Java application, and I'm running into a couple of issues that bug me to no end.
My current package layout looks kinda like this:
My Mobile class serves as the supervisor of the actors inside the actors package.
Since I don't want to create a new set of actors for every HttpClient and Account, I pass those around in message objects, which are stored in the messages package together with the endpoint ActorRef that receives the final result. This does, however, create a very cluttered messages package, with a different message for each actor, e.g. MobileForActor1, Actor1ForMobile, MobileForActor2, etc. Now my question is: is there a convention for this sort of thing that deals with this problem, and is my structure (Mobile->Actor1->Mobile->Actor2->etc.) the way Akka wants it to be, or do I just have to sort of waterfall the messages (Mobile->Actor1->Actor2->etc.)?
Right now I'm sending a ConnectMessage to my Mobile actor, which then sends it to Actor1; Actor1 processes it and sends a new message back to Mobile; Mobile then sends that response to Actor2, and the cycle continues, with a new message created from the old one. E.g. new Message2(message1.foo, message1.bar, message1.baz, newComputedResult, newComputedResult2, etc);
Is this good practice, or should I pass along the old instance (which may contain info that isn't useful anymore) together with the new stuff? E.g. new Message2(message1, newComputedResult, newComputedResult2, etc);
Or should I do something completely different?
I thought about using TypedActors but those require the use of a waterfall pattern and I don't know how I would pass on the ActorRef of the listener that wants to receive the final result.
I hope I made myself understandable enough, since English is not my native language, and that the question is clear to everyone.
I'm a beginning Akka developer and love the idea but since the documentation doesn't cover this very well, I figured this would be the best place to ask. Thanks for reading!
I will venture a few comments in response to this because I've dealt with the same issues in my learning curve of Akka. I think you're asking for some rules of thumb so mine are contained herein.
First, creating actors is incredibly cheap; they are very lightweight. So why not create one for each HttpClient and Account and give them suitable names derived from their identity? This also avoids having to pass them around as much, which will probably declutter your code.
Second, keep your message names short, focused and starting with a verb. Each message should tell the actor to do something so you want the name to reflect that by using a verb.
Third, sets of messages go with the actor. I usually declare them in the actor class's companion object, so that using them looks like ActorClass.MessageName, unless you're within ActorClass, in which case it is just MessageName.
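Since your question is about Java, where there is no companion object, the closest equivalent is a set of static nested message classes inside the actor. A sketch with illustrative names, using the classic UntypedActor API:

```java
import akka.actor.UntypedActor;

public class Mobile extends UntypedActor {

    // Messages live with the actor that handles them: Mobile.Connect
    // from outside, plain Connect from within this class.
    public static final class Connect {
        public final String accountId;
        public Connect(String accountId) { this.accountId = accountId; }
    }

    @Override
    public void onReceive(Object message) {
        if (message instanceof Connect) {
            Connect connect = (Connect) message;
            // ...start the connection work for connect.accountId...
        } else {
            unhandled(message);
        }
    }
}
```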
Fourth, append a counter to the name of an actor. I often just combine a counter (use AtomicInteger) with the name of the type (Car-1, Car-2, etc.).
If the hierarchy is important to you, I would recommend only appending the parent actor to the name. Something like Phone-1-in-Car-7 meaning Phone-1 is contained within Car-7. You can then assemble the hierarchy both programmatically and manually by following the parent links.
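In Java that might look like the following (Car is a placeholder actor type):

```java
import java.util.concurrent.atomic.AtomicInteger;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;

public class Naming {
    static class Car extends UntypedActor {
        @Override public void onReceive(Object message) { unhandled(message); }
    }

    private static final AtomicInteger carCounter = new AtomicInteger();

    public static ActorRef spawnCar(ActorSystem system) {
        // Car-1, Car-2, ... -- unique, readable names in actor paths and logs.
        return system.actorOf(Props.create(Car.class),
                              "Car-" + carCounter.incrementAndGet());
    }
}
```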
I think "Message" in ConnectMessage is redundant. Just make the message name be "Connect" or even better "ConnectToThing" (whatever Thing is, if that's relevant).
I wouldn't compound your message names too much like you're suggesting with Message2. Use the minimal amount of information that will be useful to whoever is going to read those names. I think the lack of response to this may have resulted from this part of your question; I found it confusing, as a lot of detail is missing.
Hope this helps.
The HATEOAS principle: "Clients make state transitions only through actions that are dynamically identified within hypermedia by the server."
Now I have a problem with the word dynamically, though I guess it's the single most important word there.
If I change one of my parameters from, say, optional to mandatory in the API, I HAVE to fix my client, or else the request will fail.
In short, all HATEOAS does is give the server-side developer extreme liberty to change the API at will, at the cost of all clients using his/her API.
Am I right in saying this, or am I missing something like versioning or maybe some other media-type than JSON which the server has to adopt?
Any time you change a parameter from optional to mandatory in an API, you will break consumers of that API. That it is a REST API that follows HATEOAS principles does not change this in any way. Instead, if you wish to maintain compatibility you should avoid making such changes; ensure that any call made or message sent by a client written against the old API will continue to function as expected.
On the other hand, it is also a good idea to not write clients to expect the set of returned elements to always be identical. They should be able to ignore additional information given by a server if the server chooses to provide it. Again, this is just good API design.
HATEOAS isn't the problem. Excessively rigid API expectations are the problem. HATEOAS is just part of the solution to the problem (as it potentially relieves clients from having to know vast amounts about the state model of the service, even if it doesn't necessarily make it straightforward).
Donal Fellows has a good answer, but there's another side to the same coin. The HATEOAS principle doesn't have anything to say itself about the format of your messages (other parts of REST do); instead, it means essentially that the client should not try to know, out of band, which URIs to act upon. Instead, the server should tell the client which URIs are of interest via hyperlinks (or forms/templates that construct hyperlinks). How it works:
1. The client starts at state 0.
2. The client requests a well-known resource.
3. The server's response moves the client to a new state N. There may be multiple states achievable at this point, depending on the response code and payload.
4. The response includes links (or forms/templates) which tell the client, in band, the set of potential next states.
5. The client selects one of the potential next states by issuing a method on a URI.
6. Repeat steps 3 through 5 for states N+1 and beyond, until the client's application needs are met.
In this fashion, the server is free to change the URI that moves the client from state N to state N+1 without breaking the client.
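A sketch of steps 2 through 5 in Java, where extractLink is a hypothetical helper that pulls the link with a given rel out of the response body (the entry URI, link rels, and media type are whatever your server serves):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HypermediaClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Step 2: the only URI the client knows out of band.
        HttpResponse<String> entry = client.send(
                HttpRequest.newBuilder(URI.create("https://api.example.com/")).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // Steps 4-5: discover the next URI in band and follow it.
        // The server can move this URI at will without breaking us.
        URI ordersUri = extractLink(entry.body(), "orders");
        HttpResponse<String> orders = client.send(
                HttpRequest.newBuilder(ordersUri).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(orders.body());
    }

    // Hypothetical: parse the hypermedia body (JSON, HAL, XML, ...) and
    // return the href of the link whose rel matches.
    static URI extractLink(String body, String rel) {
        throw new UnsupportedOperationException("depends on your media type");
    }
}
```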
It seems to me that you misunderstood the quoted principle. Your question suggests that you are thinking about the resources and that they could be "dynamically" defined, like a mandatory property added to a certain resource type at application runtime. This is not what the principle says, and this was correctly pointed out in other answers. The quoted principle says that the actions within the hypermedia should be dynamically identified.
The actions available for a given resource may change over time (e.g. because someone added or removed a relationship in the meantime), and there may be different actions available for the same resource for different users (e.g. because users have different authorization levels). The idea of HATEOAS is that clients should not make any assumptions about the actions available for a certain resource at any given time. The client should identify the available actions each time it reads that resource.
Edit: The below paragraph may be wrong. See the comments for discussion about it.
On the other hand, clients may have expectations about the data available in the resource: for example, that a book resource must have a title, and that there may be links to the book's author or authors. There is no way of avoiding the coupling introduced by these assumptions, but both service providers and clients should use backward-compatibility and versioning techniques to deal with it.
I'm trying to think about the best way to handle communication for the game I'm writing. The scenario is simple: TCP sockets and requests for authentication, map updates, chat updates, etc. What I was thinking of using was a set of classes, like User, Map, Creature, etc., and a Message class, which would have an enum with message types and an Object to store the previously mentioned classes. I would then convert this to JSON with GSON and, on the other side, decode it according to the message type indicated by the enum element. The problem is that I would sometimes pass too much unnecessary data, which doesn't sit well with me, plus integrating new types of messages would not be very easy, either for me or for someone else who might use it. In a previous version I used my own XML protocol, which also didn't make me very happy.
So what I'm asking for is advice on a better way to handle communication, or maybe an improvement of my idea.
Thanks in advance,
Serhiy.
XML and JSON are intended to make application integration simple while still being human-readable.
If you want a protocol tuned to your needs, I suggest you start by determining what information you want to send and what it should look like. Document this before you even start implementing it; that way the data sent will suit your needs. (This is more work, BTW, which is why it is not done more often.)
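For instance, one way to keep the payloads minimal is to document each message as its own small class carrying exactly the fields it needs, plus a type tag for dispatch. A GSON-based sketch (all names are illustrative):

```java
import com.google.gson.Gson;

public class Protocol {
    enum MessageType { AUTH_REQUEST, CHAT_UPDATE, MAP_UPDATE }

    // The envelope carries only the type tag and the specific payload.
    static class Envelope {
        MessageType type;
        String payload; // the JSON of the specific message below
    }

    // One small class per message, with only the fields that message needs.
    static class ChatUpdate {
        String from;
        String text;
    }

    public static void main(String[] args) {
        Gson gson = new Gson();

        ChatUpdate chat = new ChatUpdate();
        chat.from = "Serhiy";
        chat.text = "hello";

        Envelope env = new Envelope();
        env.type = MessageType.CHAT_UPDATE;
        env.payload = gson.toJson(chat);

        String wire = gson.toJson(env); // what actually goes over the socket

        // Receiving side: read the type first, then decode only that payload.
        Envelope in = gson.fromJson(wire, Envelope.class);
        if (in.type == MessageType.CHAT_UPDATE) {
            ChatUpdate decoded = gson.fromJson(in.payload, ChatUpdate.class);
            System.out.println(decoded.from + ": " + decoded.text);
        }
    }
}
```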
I have an existing protocol I'd like to write a Java client for. The protocol consists of messages that have a header containing the message type and message length, followed by the announced number of bytes, which is the payload.
I'm having some trouble modeling it. Since creating a class for each message type seems a bit excessive to me (that would turn out to be 20+ classes just to represent the messages that go over the wire), I was thinking about alternative models, but I can't come up with one that works.
I don't need anything fancy for working with the messages, aside from notifying via publish/subscribe when a message comes in and, in some instances, replying back.
Any pointers as to where to look?
A class for each message type is the natural OO way to model this. The fact that there are 20 classes should not put you off. (Depending on the relationships between the messages, you can probably implement common features in superclasses.)
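A sketch of that superclass idea, with the shared header fields handled once in the base class and hypothetical subclasses adding only their own payload (names are illustrative):

```java
// Common wire concerns handled once; the 20+ subclasses then stay tiny.
abstract class WireMessage {
    final int type;

    protected WireMessage(int type) { this.type = type; }

    // Subclasses only encode/decode their own payload bytes; the header
    // (type + length) can be written generically from type and payload size.
    abstract byte[] encodePayload();
}

class LoginMessage extends WireMessage {
    static final int TYPE = 1;
    final String user;

    LoginMessage(String user) {
        super(TYPE);
        this.user = user;
    }

    @Override
    byte[] encodePayload() {
        return user.getBytes(java.nio.charset.StandardCharsets.UTF_8);
    }
}
```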
My advice is to not worry too much about efficiency to start with. Just focus on getting clean APIs that provide the required functionality. Once you've got things working, profile the code and see if the protocol classes are a significant bottleneck. If they are ... then think about how to make the code more efficient.