How to access the Impl classes of Twitter4J?

Using MongoDB, I need to persist objects from Twitter4J. Twitter4J uses interfaces, which are implemented by JSON-backed classes. Example:
The API returns Status (an interface), and Status is implemented by StatusJSONImpl.
I can't save Status to MongoDB; I need the concrete class StatusJSONImpl.
My issue is that this class StatusJSONImpl is not public (see here), so I can't use it in my code. I tried downloading the source of Twitter4J to manually add "public" to StatusJSONImpl. I can do:
Status status = twitter.updateStatus(latestStatus);
String statusStringified = TwitterObjectFactory.getRawJSON(status);
StatusJSONImpl statusImplemented = (StatusJSONImpl) TwitterObjectFactory.createUserList(statusStringified);
SingletonLaunchDB.getMongo().save(statusImplemented);
But I still get a java.lang.IllegalAccessError on the class StatusJSONImpl at runtime.
I see from other SO answers that users routinely point other users to these Impl classes... how do they manage to use them in their code?
Your help is much appreciated.

Status is serializable. To recover a StatusJSONImpl from statusStringified you can write:
JSONObject json = new JSONObject(statusStringified);
Status status = new StatusJSONImpl(json);
The code sample is from StatusSerializationTest.java
I hope this helps.

Use the static factory method on TwitterObjectFactory:
Status status = TwitterObjectFactory.createStatus(statusAsString);
StatusJSONImpl is an implementation detail which library users are not meant to deal with. The only thing a user of the library should care about is the contract (the Status interface in this case), which is necessarily public and which the library authors promise to fulfill. The concrete classes like StatusJSONImpl, on the other hand, are deliberately non-public in order to prevent consumers from using them and getting tightly coupled to a specific implementation which may change over time. And from the authors' point of view, by coding to an interface they are free to return any concrete type they wish as long as it fulfills the contract.
If you check the class that is returned from the factory method, it is StatusJSONImpl. But to reiterate, as a user of the library you shouldn't need to know or care about that.
Status status = TwitterObjectFactory.createStatus(statusAsString);
status.getClass(); // class twitter4j.StatusJSONImpl
To understand more about why this is done, you can read about static factory methods.
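For completeness, here is a hedged sketch of the full round trip. Note that getRawJSON returns null unless the JSON store is enabled in the configuration, and the persistence call at the end is the hypothetical one from the question:
import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterFactory;
import twitter4j.TwitterObjectFactory;
import twitter4j.conf.ConfigurationBuilder;

public class StatusRoundTrip {
    public static void main(String[] args) throws Exception {
        // getRawJSON only works when the JSON store is enabled
        ConfigurationBuilder cb = new ConfigurationBuilder().setJSONStoreEnabled(true);
        Twitter twitter = new TwitterFactory(cb.build()).getInstance();

        Status status = twitter.updateStatus("hello world");
        String raw = TwitterObjectFactory.getRawJSON(status);

        // Recreate the Status from its raw JSON; the concrete type is
        // StatusJSONImpl, but you only ever program to the interface
        Status restored = TwitterObjectFactory.createStatus(raw);

        // Persist the raw JSON string instead of the package-private impl;
        // hypothetical persistence call from the question:
        // SingletonLaunchDB.getMongo().save(raw);
    }
}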


Creating a JsonLd + Hydra based generic client API in Java. Are there any existing projects for reference?

I am creating a client API in Java using: the Apache Jena framework + Hydra (for the hypermedia-driven part) + my private vocab, similar to Markus Lanthaler's Event API vocab, instead of schema.org (for the ontology/vocabulary part).
Section 1:
After looking at the Markus Lanthaler EventDemo repo and hydra-java, I found that they create classes for each hydra:Class, which can break the client in the future. For example:
A Person class (Person.java):
public class Person {
    String name;
}
But a future requirement may turn name into a class of its own, e.g.:
public class Name {
    String firstName;
    String lastName;
}
So to fulfill this requirement I have to update the Person class like this:
public class Person {
    Name name;
}
Question 1:
Is my understanding of this section correct? If yes, what is the way to deal with this part?
Section 2:
To avoid the above problem I created a GenericResource class (GenericResource.java):
public class GenericResource {
    private Model model;

    public void addProperty(String propertyName, Object propertyValue) {
        // The caller passes only the property name, e.g. "name",
        // and it is mapped to "myvocab:name"
        propertyName = "myvocab:" + propertyName;
        // Some logic to add propertyName and propertyValue to the model
    }

    public GenericResource retrieveProperty(String propertyName) {
        propertyName = "myvocab:" + propertyName;
        // Some logic to query propertyName from this object's model,
        // add the result to a new GenericResource object and return it
        return null; // placeholder
    }

    public GenericResource performAction(String actionName, String postData) {
        // Some logic to make an HTTP call and return the response
        return null; // placeholder
    }
}
But again I stuck in lots of problem :
Problem 1: Not every propertyName is necessarily mapped to myvocab:propertyName. Some may be mapped to other vocabs, e.g. hydra:propertyName, schema:propertyName, rdfs:propertyName, newVocab:propertyName, etc.
Problem 2: How do I validate whether a given propertyName belongs to this class?
Suggestion: Put a type field/variable in the GenericResource class, and then check supportedProperty in the vocab corresponding to that class. For clarity, assume the Person class above is also defined in the vocab with supportedProperty: [name, age, etc.]. My GenericResource would then have type "Person", and at the time of addProperty (or some other operation) I would query the vocab to check whether that property is in the supportedProperty list, or in the supportedOperation list in the case of performAction().
Is this the correct way? Any other suggestion is most welcome.
Question 1: Is my understanding correct or not of this Section? If yes
then what is the way to deal with this part ?
Yes, that seems to be correct. Just because hydra-java decided to create classes doesn't mean you have to do the same in your implementation, though. I would rather write a mapper and annotate an internal class that can then stay stable (you update the mapping instead). Your GenericResource approach also looks good, by the way.
Problem 1: It is not necessary that every propertyName is mapped to
myvocab:propertyName. Some may be mapped to some other vocab eg:
hydra:propertyName, schema:propertyName, rdfs:propertyName,
newVocab:propertyName, etc.
Why don't you store and access the properties with full URLs, i.e., including the vocab? You can of course implement some convenience methods to simplify the work with your vocab.
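As a hedged sketch of that suggestion (Apache Jena API; the class shape is illustrative, and the import package may differ on older Jena releases), properties can be keyed by their full URI rather than by a hard-coded prefix:
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.rdf.model.Statement;

public class GenericResource {
    private final Model model = ModelFactory.createDefaultModel();
    private final Resource self = model.createResource();

    // Store the value under the property's full URI,
    // e.g. "http://schema.org/name" or "http://myvocab.example/name"
    public void addProperty(String propertyUri, String value) {
        self.addProperty(model.createProperty(propertyUri), value);
    }

    public String retrieveProperty(String propertyUri) {
        Statement stmt = self.getProperty(model.createProperty(propertyUri));
        return stmt == null ? null : stmt.getString();
    }
}
Convenience methods for your own vocab can then simply prepend your namespace before delegating to these two.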
Problem 2: How to validate whether this propertyName belongs to this
class
Suggestion: Put type field/variable in GenericResource class
JSON-LD's @type in node objects (not in @value objects) corresponds to rdf:type. So simply add it like every other property.
And then check supportedProperty in vocab corresponding to that class.
Please keep in mind that supportedProperty only tells you which properties are known to be supported. It doesn't tell you which aren't. In other words, it is valid to have properties other than the ones listed as supportedProperty on an object/resource.
Ad Q1:
For the flexibility you want, the client has to be prepared for semantic and structural changes.
In HTML that is possible. The server can change the structure of an HTML form in the way you outlined, by having a firstName and a lastName field rather than just a name field. The client does not break; rather, it adjusts its UI, following the new semantics. The trick is that the UI is generated, not fixed.
A client which tries to unmarshal the incoming message into a fixed representation, such as a Java bean, is out of luck, and I do not think there is any way you could deserialize into a Java bean and survive a change like yours.
If you do not try to deserialize, but stick to reading and processing the incoming message into a more flexible representation, then you can achieve the kind of evolvability you're after. The client must be able to handle the flexible representation accordingly. It could generate UIs rather than binding data to fixed markup, which means, it makes no assumptions about the semantics and structure of the data. If the client absolutely has to know what a data element means, then the server cannot change the related semantics, it can only add new items with the new semantics while keeping the old ones around.
If there were a way how a server could hand out a new structure with a code-on-demand adapter for existing clients, then the server would gain a lot of evolvability. But I am not aware of any such solutions yet.
Ad Q2:
If your goal is to read an incoming json-ld response into a Jena Model on the client side, please see https://jena.apache.org/documentation/io/rdf-input.html
Model model = ModelFactory.createDefaultModel();
String base = null;
model.read(inputStream, base, "JSON-LD");
Thus your client will not break in the sense that it cannot read the incoming response. I think that is what your GenericResource achieves, too. But you could use Jena directly on the client side. Basically, you would avoid unmarshalling into a fixed type.
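As a small follow-up sketch (same Jena API as above): once the response is in a Model, the client can walk whatever statements actually arrived instead of binding them to fixed classes.
// List every (predicate, object) pair the server sent
StmtIterator it = model.listStatements();
while (it.hasNext()) {
    Statement stmt = it.nextStatement();
    System.out.println(stmt.getPredicate().getURI() + " : " + stmt.getObject());
}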

How to unit test a class which uses HttpURLConnection internally?

I'm looking for the best way to test a class which internally makes HTTP requests to a pre-defined URL. Generally, the class in question looks more or less like this:
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;
import static java.net.HttpURLConnection.HTTP_NOT_FOUND;

public class ServiceAccess {

    private static final String SERVICE_URL = "http://someservice.com/";

    public ServiceAccess(String username) throws IOException,
            UserNotFoundException, MalformedURLException {
        URL url = new URL(SERVICE_URL + username);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        if (conn.getResponseCode() == HTTP_NOT_FOUND) {
            throw new UserNotFoundException("user not found : " + username);
        }
        // and some more checks
    }
}
I would like to test that the class properly reacts to the HTTP server's responses, including response codes, header fields, and such. I found the mockwebserver library that looks just like something I need. However, in order to use it, I would need to somehow change the URL that the class connects to.
The only sensible option that I see is to pass the URL in the constructor; however, it seems to me that this does not play too well in terms of design, since requiring the client to pass a URL to such a class looks fishy. Furthermore, I have not seen any other web service access libraries (Twitter4J, RestFB) that require their clients to pass a URL in order to use them.
I'm not a Java whiz, but I'd like to get it as right as possible. All answers welcome.
What is fishy about passing the URL? Not sure I get that.
Generally for things like this, don't you want the URL to be a property? I would think that in the same way the database URL for your instance is constructed from properties, you would want to do the same here. In which case, in your test you just override the property/ies.
The other interesting thing about these kinds of tests is that I think it's a really good idea to have tests of the actual protocol (which is what you are doing with the mock) and also of the actual service, and then run the service tests on a schedule, as a way to make sure that the downstream services you are consuming are still there and honoring their end of the contract. I was reading the excellent Continuous Delivery book from Addison-Wesley and contemplating making this part of a pipeline today.
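A hedged sketch of that property-driven idea, assuming the base URL is made overridable (via a constructor argument here, for brevity) and using the okhttp3 flavor of the mockwebserver API; the test class and constructor signature are assumptions:
import okhttp3.mockwebserver.MockResponse;
import okhttp3.mockwebserver.MockWebServer;
import org.junit.Test;

public class ServiceAccessTest {

    @Test(expected = UserNotFoundException.class)
    public void throwsWhenUserNotFound() throws Exception {
        MockWebServer server = new MockWebServer();
        // Simulate the service answering 404 for an unknown user
        server.enqueue(new MockResponse().setResponseCode(404));
        server.start();
        try {
            // Assumes a test-friendly constructor taking the base URL
            new ServiceAccess(server.url("/").toString(), "unknownUser");
        } finally {
            server.shutdown();
        }
    }
}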
If you had written your tests first, you would never have written such code :)
Your class violates the single responsibility principle. Refactor it: extract the part responsible for networking (in your code, obtaining the connection) into its own class, and have ServiceAccess use that class. Then you can easily test ServiceAccess in unit tests. Unit testing the networking code itself is pointless; the folks at Oracle have already done that. All you can test is that you provided the correct parameters, and that's the role of integration tests.
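A minimal sketch of that extraction, with illustrative names; the interface lets a unit test substitute a stub returning a canned response code, so no real networking is involved:
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// The networking concern, extracted behind an interface
interface ConnectionFactory {
    HttpURLConnection open(String username) throws IOException;
}

class HttpConnectionFactory implements ConnectionFactory {
    private static final String SERVICE_URL = "http://someservice.com/";

    @Override
    public HttpURLConnection open(String username) throws IOException {
        return (HttpURLConnection) new URL(SERVICE_URL + username).openConnection();
    }
}

public class ServiceAccess {
    public ServiceAccess(ConnectionFactory factory, String username)
            throws IOException, UserNotFoundException {
        HttpURLConnection conn = factory.open(username);
        if (conn.getResponseCode() == HttpURLConnection.HTTP_NOT_FOUND) {
            throw new UserNotFoundException("user not found : " + username);
        }
        // and some more checks
    }
}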
Iff you can't change the code, you could use PowerMock to mock HttpURLConnection.

What does `Client.findById(id)` mean?

Looking through the Play documentation for Java I noticed the following block of code:
public static Result show(Long id) {
    Client client = Client.findById(id);
    return ok(views.html.Client.show(client));
}
Source: http://www.playframework.com/documentation/2.1.0/JavaRouting
I am having some trouble understanding the second line. My understanding of Java object creation is that a typical constructor call looks like the following:
Person john = new Person();
What is the second line doing, creating an object called client from a class called Client? Also, what is Client? It doesn't appear to be a part of the Play Framework; certainly I cannot find anything in the JavaDocs.
Thanks
Edit:
I found this to be a good point of reference for the answer (http://docs.oracle.com/javase/tutorial/java/javaOO/classvars.html)
Also, I think the class Client comes from the following documentation (http://www.playframework.com/documentation/1.1.1/controllers), with Client being just an example model class; the new documentation probably needs updating to clear up this confusion.
Pretty clearly, the class Client has a static function findById, which takes a Long and returns a Client. Static functions are defined without any access to object properties, and can therefore be accessed through the class name rather than through an object. Most likely, the class has a static property containing a collection of all clients in the system by index, and findById grabs an existing Client from that list.
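To make that concrete, here is a purely hypothetical sketch of what such a class could look like (the real Client in the docs is just an example model, so every name here is illustrative):
import java.util.HashMap;
import java.util.Map;

public class Client {

    // Hypothetical in-memory registry of all clients, keyed by id
    private static final Map<Long, Client> CLIENTS = new HashMap<>();

    private final Long id;

    public Client(Long id) {
        this.id = id;
        CLIENTS.put(id, this);
    }

    // Static method: invoked on the class itself, no instance required
    public static Client findById(Long id) {
        return CLIENTS.get(id);
    }
}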
I really have no idea where the class Client is defined, however. I've also had a quick look around for it, and couldn't find it in the obvious places.
There must be a static method called show(Client) on the views.html.Client class that returns some object. That object is passed into an ok(whatever) method, and that ok method returns a Result object.
You're missing some basic knowledge/experience. The sample you gave has nothing to do with routes, and in this snippet only the first line is important; the second is just some hypothetical usage. De facto it could be just...
public static Result show(Long id) {
    return ok("You want to display details of client with ID: " + id);
}
Although @BenBarden explained correctly what it means, this static method isn't declared anywhere; it's (again) a hypothetical usage of some ORM. For example, the real usage with Ebean's model would be:
Client client = Client.find.byId(id);
Of course you can also declare your own method in your Client model and name it the same as in the sample, however it will just be a wrapper:
public static Finder<Long, Client> find = new Finder<>(Long.class, Client.class);

public static Client findById(Long id) {
    return find.byId(id);
}
Conclusions
You need to examine some of the samples available with your Play sources to get familiar with the basic syntax; fortunately, you'll find that easy.
DO NOT MIX documentation from Play 1.x with Play 2.x, they are not compatible!

What's the "proper" and right way to keep Jersey Client API functions and REST (Jersey API) Server functions linked?

I was wondering how people with more experience and more complex projects get along with this "ugliness" in REST communication. Imagine the following problem:
We'll need a fair amount of functionality for one specific resource within our REST infrastructure; in my case that's about 50+ functions that result in different queries and different responses. I tried to think of a meaningful resource tree and assigned these to methods that will do "stuff". Afterwards, the server resource class looks like this:
@Path("/thisResource")
public class SomeResource {

    @GET // or @POST, @PUT, @DELETE
    @Path("meaningfulPath")
    public Response resourceFunction1( /* lots of params */ ) {
        // ... logic ...
    }

    //
    // lots of functions ...
    //

    @GET // or @POST, @PUT, @DELETE
    @Path("meaningfulPath")
    public Response resourceFunctionN( /* lots of params */ ) {
        // ... logic ...
    }
}
To construct the URLs my client will call, I made a little function to prevent typos and to make better use of constants, so my client looks like this:
public class Client {

    public ReturnType function1() {
        client.resource = ResourceClass.build(Constants.Resource, "meaningfulPath");
        // ...
        return response.getEntity(ReturnType.class);
    }
}
Now the question that bothers me is: how could I link the client functions and the server functions better?
The only connection between these two blocks of code is the URL that will be called by the client and mapped by the server, and if even this URL is generated somewhere else, this leads to a lot of confusion.
When one of my colleagues needs to get into this code, he has a hard time figuring out which of the 50+ client functions leads to which server function. It is also hard to determine whether there are obsolete functions in the code, etc. I guess most of you know the problems of unclean code better than I do.
How do you deal with this? How would you keep this code clean, maintainable and gorgeous?
Normally, this would be addressed by EJB or similar technologies.
Or at least by "real" web services, which would provide at least a WSDL and schemas (with a kind of mapping to Java interfaces, or "ports").
But REST communication is very loosely typed and loosely structured.
The only thing I can think of right now is to define a project (let's call it "Definitions") which would be referenced (hence known) by both client and server. In this project you could define a class with a lot of public static final Strings, such as:
public static final String SOME_METHOD_NAME = "/someMethodName";
public static final String SOME_OTHER_METHOD_NAME = "/someOtherMethodName";
Note: a static final String can very well be referenced by an annotation (in that case it is considered a compile-time constant). So use the "constants" to annotate your @Path, such as:
@Path(Definitions.SOME_METHOD_NAME)
Same for the client:
ResourceClass.build(Constants.Resource, Definitions.SOME_METHOD_NAME);
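Putting the pieces together, a hedged sketch of the shared-constants idea (JAX-RS annotations as in the question; all names are illustrative):
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

// Shared project referenced by both client and server
final class Definitions {
    public static final String THIS_RESOURCE = "/thisResource";
    public static final String SOME_METHOD_NAME = "/someMethodName";

    private Definitions() { }
}

// Server side: the compiler now catches any typo in the path
@Path(Definitions.THIS_RESOURCE)
class SomeResource {

    @GET
    @Path(Definitions.SOME_METHOD_NAME)
    public Response someMethod() {
        // ... logic ...
        return Response.ok().build();
    }
}
Renaming a path then becomes a single-constant change that the compiler propagates to both sides.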
You are missing the idea behind REST. What you are doing is not REST but RPC over HTTP. Generally you are not supposed to construct URLs using out-of-band knowledge. Instead, you should follow links received in the responses from the server. Read about HATEOAS:
http://en.wikipedia.org/wiki/HATEOAS

Adding fields to a WebService

I have a SOAP service that exposes a method
TradeDetail getTradeDetail()
TradeDetail stores 5 fields: transaction number, dates, etc.
I need to add a couple of fields to TradeDetail. I want to keep backward compatibility (for a while), and it looks as if my options are limited to creating a new class with the extra fields:
TradeDetail2 getTradeDetail2()
Now this will work - I've done it before. But are there any other solutions that people have used?
E.g.:
Fundamentally change TradeDetail2 to hold name-value pairs.
Inherit TradeDetail2 from TradeDetail; this would reduce code but increase coupling.
Return XML or JSON instead.
I will be able to retire the original interface pretty quickly, so the code will get cleaned up and the extra TradeDetail2 won't last forever!
thanks
I sympathise - some of my webservices are riddled with myMethod(), myMethod2(), myMethod3() etc. simply because I needed to add a few new fields.
Would it make sense for you to keep the method name and create a new endpoint for each version of your API instead? E.g.:
http://my.domain.com/servicename/v1
http://my.domain.com/servicename/v1.1
http://my.domain.com/servicename/v1.5
http://my.domain.com/servicename/v2
Then your method names stay sensible, regardless of how many future changes you need to make.
Any apps using your webservice would probably need to be rewritten and/or rebuilt against a new WSDL anyway in order to take advantage of the new fields, so why not just have them rewritten/rebuilt against the new v1.1 API?
I find that this also helps when communicating with the owners/developers of the apps using your service, e.g.: "Version [old] of our webservice API will no longer be supported after [date]; please ensure that you are using at least version [new]."
This is why I prefer to have complete control over the XML-to-object mapping, so that I can separate the model from the XML interface. In your case, I would simply add the new fields to TradeDetail and consider them "optional" for backwards compatibility. This is what the XML-to-object mapping for TradeDetail would look like in the framework my team uses, written for your interface:
// this would go into my client endpoint class
public TradeDetail getTradeDetail() {
    Element requestRoot = new Element("GetTradeDetail");
    Element responseRoot = invokeWebServiceAndReturnJdomElement(requestRoot);
    return mapTradeDetail(responseRoot);
}

// this would go into my client XO mapping class
public TradeDetail mapTradeDetail(Element root) {
    TradeDetail tradeDetail = new TradeDetail();
    tradeDetail.setField1(fetchString(root, "/GetTradeDetail/Field1"));
    tradeDetail.setField2(fetchInteger(root, "/GetTradeDetail/Field2"));
    tradeDetail.setField3(mapField3(root, "/GetTradeDetail/Field3"));
    tradeDetail.setField4(fetchString(root, "/GetTradeDetail/Field4"));
    return tradeDetail;
}
This kind of client ignores new fields, and thus stays compatible with the new version of the protocol, until I add something like this to the end of the same method in version 2:
if (fetchXPath(root, "/GetTradeDetail/Field5") != null) {
    // so we're talking to a server which speaks the new version of the protocol
    tradeDetail.setField5(fetchString(root, "/GetTradeDetail/Field5"));
}
The server would work with similar code, possibly checking the client version and mapping the extra fields only if the client supports the new version of the protocol.
In my view, a client should be written so that a few extra fields added to the protocol don't break it; I don't have the luxury of being down simply because an upstream provider added new functionality and didn't inform me about it. If the provider changes existing mandatory fields, of course, the client needs modification. This is why an upstream provider should version the protocol and support the old version for at least a couple of months.
