Test equality of an API which is present in different languages [closed] - java

How would you tackle the following problem?
I've got an API which validates tokens (which are just simple XML files). The API specifies a bunch of validation methods like validateTime(String tokenPath), validateFileHash(String tokenPath), or validateSomethingElse(String tokenPath).
The API is already implemented in two different languages, Java and C. My task is to make sure that both versions behave the same. So if Java throws a TokenExpiredException after invoking validateTime("expiredToken.xml"), C should return the corresponding error value (in this case a predefined -4 for TOKEN_EXPIRED).
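For reference, the Java side's shape might look roughly like this (a sketch based only on the names mentioned above; the return types and the exact exception hierarchy are assumptions):

class TokenExpiredException extends Exception { }

interface TokenApi {
    // The C implementation mirrors these methods but returns error codes
    // instead of throwing, e.g. a predefined -4 for TOKEN_EXPIRED.
    void validateTime(String tokenPath) throws TokenExpiredException;
    void validateFileHash(String tokenPath) throws Exception;
    void validateSomethingElse(String tokenPath) throws Exception;
}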
The good old approach would be to write unit/integration tests in both languages. However, this would require double the effort, as I would have to implement essentially the same tests in Java and in C.
My idea was to define an XML schema for test cases, which would look something like this:
<!-- TestCases.xml -->
<testcases>
  <testcase>
    <tokenpath>expiredToken.xml</tokenpath>
    <apiMethod>validateTime</apiMethod>
    <expectationJava>TokenExpiredException</expectationJava>
    <expectationC>-4</expectationC>
  </testcase>
  <testcase>
    ...
  </testcase>
</testcases>
Furthermore, I would build a small Java tool to parse TestCases.xml and directly invoke both API versions (using JNI for the C one), comparing the outcome to the preset expectations. A rough sketch of that tool follows below.
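Here is what such a driver could look like. TokenApi and TokenExpiredException are hypothetical stand-ins for the real Java API, and the JNI wrapper plus native library name are assumptions:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical stand-ins for the real Java API.
class TokenExpiredException extends Exception { }

class TokenApi {
    public void validateTime(String tokenPath) throws TokenExpiredException {
        throw new TokenExpiredException(); // the real implementation goes here
    }
}

public class CrossLanguageTestDriver {

    // Assumed JNI bridge to the C implementation.
    static native int validateNative(String apiMethod, String tokenPath);
    static { System.loadLibrary("tokenapi"); } // hypothetical native library name

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new File("TestCases.xml"));

        NodeList cases = doc.getElementsByTagName("testcase");
        for (int i = 0; i < cases.getLength(); i++) {
            Element tc = (Element) cases.item(i);
            String tokenPath    = text(tc, "tokenpath");
            String apiMethod    = text(tc, "apiMethod");
            String expectedJava = text(tc, "expectationJava");
            int    expectedC    = Integer.parseInt(text(tc, "expectationC"));

            // Java side: invoke the method reflectively and record which exception it throws.
            String actualJava = "none";
            try {
                TokenApi.class.getMethod(apiMethod, String.class)
                        .invoke(new TokenApi(), tokenPath);
            } catch (java.lang.reflect.InvocationTargetException e) {
                actualJava = e.getCause().getClass().getSimpleName();
            }

            // C side: call through the JNI wrapper and record the return code.
            int actualC = validateNative(apiMethod, tokenPath);

            boolean pass = actualJava.equals(expectedJava) && actualC == expectedC;
            System.out.printf("%s(%s): %s%n", apiMethod, tokenPath, pass ? "PASS" : "FAIL");
        }
    }

    private static String text(Element parent, String tag) {
        return parent.getElementsByTagName(tag).item(0).getTextContent();
    }
}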
Do you think this is a feasible plan, or is it better to stick with the old approach? Are there frameworks for this kind of task, or is it a smelly idea to begin with?

Your approach is feasible; what would be even better is if you can take advantage of an existing data-driven testing framework. That way you don't need to do the legwork of parsing inputs, running test cases, and asserting outputs yourself.
Here's an example of how to drive Java tests through JUnit plus an Excel spreadsheet containing the data: http://www.wakaleo.com/component/content/article/241
I didn't see one immediately, but hopefully you can find something similar for C.
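If you go the framework route on the Java side, a data-driven test in JUnit 4 might look like this minimal sketch. The rows are inlined here (a framework like the one in the article would load them from a spreadsheet or from your TestCases.xml), and runJavaApi/runCApiViaJni are hypothetical helpers you would wire to the two implementations:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class TokenValidationAgreementTest {

    @Parameters(name = "{1}({0})")
    public static Collection<Object[]> testCases() {
        return Arrays.asList(new Object[][] {
            // tokenPath, apiMethod, expected outcome (a common name for both languages)
            { "expiredToken.xml", "validateTime", "TOKEN_EXPIRED" },
        });
    }

    private final String tokenPath;
    private final String apiMethod;
    private final String expected;

    public TokenValidationAgreementTest(String tokenPath, String apiMethod, String expected) {
        this.tokenPath = tokenPath;
        this.apiMethod = apiMethod;
        this.expected = expected;
    }

    @Test
    public void bothImplementationsAgree() {
        // Map each implementation's result (exception vs. error code) to the shared
        // outcome name before comparing.
        assertEquals(expected, runJavaApi(apiMethod, tokenPath));
        assertEquals(expected, runCApiViaJni(apiMethod, tokenPath));
    }

    // Hypothetical helpers: wire these to the Java API and the JNI bridge.
    private static String runJavaApi(String method, String path) {
        throw new UnsupportedOperationException("not wired up in this sketch");
    }

    private static String runCApiViaJni(String method, String path) {
        throw new UnsupportedOperationException("not wired up in this sketch");
    }
}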


How mocks are created [closed]

My question is how mock objects are created, not how to create a mock object using a library.
I have looked at the Mockito library source code, but I didn't understand how it's done. I have searched the Internet, but the articles only explain what mock objects are and how to create them using libraries.
For a dynamic programming language it's perhaps simple, since we can change methods and variables at runtime, but how is it done in a statically typed language (Java, for example)?
Let's begin with what a mock is: an object on which you can set expectations about which methods are expected to be called, with which parameters, and/or how many times.
Mocks are passed to the objects under test in order to mimic certain dependencies without having to use the real code (which in many cases is problematic or dangerous, e.g. when dealing with payment gateways).
Since mocks need to intercept calls to all (or some, in the case of partial mocks) methods, there are several ways they can be implemented, depending mainly on the features the language provides. In Java in particular this can be done via dynamic proxy classes: https://stackoverflow.com/a/1082869/1974224, an approach that somewhat forces you (but in a good way) to use interfaces in your code when relying on dependencies.
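To make that concrete, here is a minimal hand-rolled sketch using java.lang.reflect.Proxy. The PaymentGateway interface and the canned return values are made up for the example; libraries such as Mockito additionally use bytecode generation so they can mock classes, not just interfaces:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

// Hypothetical dependency of the object under test.
interface PaymentGateway {
    boolean charge(String account, int amountCents);
}

public class TinyMockDemo {
    public static void main(String[] args) {
        Map<String, Object> cannedResults = new HashMap<>();
        cannedResults.put("charge", true);            // stubbed return value
        Map<String, Integer> callCounts = new HashMap<>();

        InvocationHandler handler = (proxy, method, methodArgs) -> {
            callCounts.merge(method.getName(), 1, Integer::sum); // record the call
            return cannedResults.get(method.getName());          // return the canned value
        };

        PaymentGateway mock = (PaymentGateway) Proxy.newProxyInstance(
                PaymentGateway.class.getClassLoader(),
                new Class<?>[] { PaymentGateway.class },
                handler);

        // The object under test would receive 'mock' instead of the real gateway.
        mock.charge("acct-1", 500);

        System.out.println("charge() called " + callCounts.getOrDefault("charge", 0) + " time(s)");
    }
}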

Create too many classes or have some schema-less data structure (like a dictionary)? [closed]

I have to use 50 different custom datatypes (classes) which are defined in a document (XML/JSON). They have only fields, no methods, and possibly strong validations.
My question is: should I go ahead and create (or generate) 50 classes, or use some generic data structure (like HashMap<String, Object>)?
Update: My fear is that if I go with class generation, my codebase might grow a lot, and if I go the schema-less way, my data integrity might be compromised. Which one is the lesser evil?
Unless it is just ridiculous, more code is more forgivable, in general. There are a few different reasons:
If you give them base classes at the right points, you can have it both ways: your handling code can hold references to the base classes, which can provide anchor points for extracting, validating, or cleaning information stored in the different formats. Surely some of the processing can be shared.
If absolutely everything really falls to the base class, you can refactor the sub-classes out of existence without pain. On the other hand, if you start the amorphous way, gathering the special cases back into separate classes is more likely to go wrong.
Excessively large code is only bad if the extra volume does not clarify the logic for readers. I would have the classes, if they constitute units in which people think.
Also, actual functionality is more important than format or even readability. So if the risk is to data integrity vs code bloat, protect the content, not the form.
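As an illustration of the "base classes at the right points" idea, a minimal sketch (the Record/CustomerRecord names and fields are made up; each of the ~50 generated classes would look like the subclass):

abstract class Record {
    // Each generated class validates its own fields.
    abstract void validate();
}

// One of the ~50 generated classes: only fields plus its validation rules.
class CustomerRecord extends Record {
    String name;
    int age;

    @Override
    void validate() {
        if (name == null || name.isEmpty()) throw new IllegalStateException("name missing");
        if (age < 0) throw new IllegalStateException("age must be non-negative");
    }
}

// Shared handling code only needs to know about the base type.
class RecordProcessor {
    void process(Record record) {
        record.validate();
        // ... common extraction/cleaning logic ...
    }
}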

Selenium Webdriver (Java): What are the benefits (if any) of using an objectmap.properties file instead of Page Objects classes? [closed]

I'm implementing Selenium WebDriver 2 automated testing for our website, and I am unable to find a clear assessment of the benefits of using an objectmap.properties file to store all the element locators versus storing them in Page Object Java classes.
Also, it seems that using Java classes for Page Objects allows exposing and abstracting page operations in those classes too, whereas I'm not clear how this would be done with an objectmap.properties file instead.
Or have I missed the point and the two are best used in conjunction?
Thanks in advance!
This is purely subjective. Some people prefer the simplicity of my_object=something and then just fetching it using objectmap.get('my_object'), while others prefer using objects in Java, e.g. LoginPage.TXT_USERNAME.
Depending on your personal preference and philosophy, you should determine which way is easier for you.
Personally, I think Java page objects are much more efficient because of the auto-complete that Eclipse provides. I could do
LoginPage.TXT_USERNAME
LoginPage.TXT_PASSWORD
instead of having the possibility of misspelling your object if you use a properties file like this:
objectmap.getProperty('TXT_USRNAME') // oops! Forgot the E, and I wouldn't have known it until runtime.
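For comparison, a minimal sketch of the two styles side by side (the LoginPage class, the element IDs, and the objectmap.properties keys are all made up for the example):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Page Object style: locators are compile-time constants, and page operations live here too.
class LoginPage {
    static final By TXT_USERNAME = By.id("username");
    static final By TXT_PASSWORD = By.id("password");
    static final By BTN_LOGIN    = By.id("login");

    private final WebDriver driver;

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    void logIn(String user, String pass) {
        driver.findElement(TXT_USERNAME).sendKeys(user);
        driver.findElement(TXT_PASSWORD).sendKeys(pass);
        driver.findElement(BTN_LOGIN).click();
    }
}

// Property-file style: locators are looked up by string key at runtime,
// so a misspelled key only shows up when the test actually runs.
class ObjectMap {
    private final Properties map = new Properties();

    ObjectMap(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            map.load(in);
        }
    }

    By byId(String key) {
        return By.id(map.getProperty(key)); // e.g. TXT_USERNAME=username
    }
}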

Java versus XML&Lua for storing voxel/block types [closed]

So, I want it to be very easy to create all the entities of my game, and for other people to come in and do the same. I was thinking I could just let the users (or myself) create an XML sheet that stores all the properties of each block (like a Terraria or Minecraft voxel) and add Lua scripts, referenced from the XML, for additional functionality of any of the blocks.
I'm starting to think it would just be easier to let the user create a JAR file full of classes, one for each block; that JAR file could then easily be used to get all the blocks. It would just be interesting to reference all the blocks by a block ID without storing all the classes by ID, or I could give each class a static ID. But that's not important.
Okay, so my short question is: what are the pros and cons of storing all the different types of blocks as classes versus in an XML sheet with Lua for additional functionality?
UPDATE: It looks like I'll be using pure Lua! Looks like an interesting and effective way to do it!
A limitation of the JAR approach is that your data would need to be compiled before it got used. With XML/Lua the data gets read/interpreted at runtime.
A third option that you did not mention is using straight Lua tables instead of XML. This lets you load the data with a simple "require", "dofile" or similar, instead of needing to use an XML library as well.
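For comparison with the class-per-block option from the question, here is a minimal Java sketch of blocks as classes registered by ID (all names here are made up; mod JARs would simply register their own Block implementations):

import java.util.HashMap;
import java.util.Map;

// Each block type is a class; a registry maps numeric IDs to shared instances.
interface Block {
    int id();
    boolean isSolid();
}

class DirtBlock implements Block {
    public int id() { return 1; }
    public boolean isSolid() { return true; }
}

class WaterBlock implements Block {
    public int id() { return 2; }
    public boolean isSolid() { return false; }
}

class BlockRegistry {
    private final Map<Integer, Block> byId = new HashMap<>();

    void register(Block block) {
        byId.put(block.id(), block);
    }

    Block get(int id) {
        return byId.get(id);
    }
}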

java peephole optimization beginner compilers [closed]

As part of a group project I'm writing a compiler for a simplified language. As one of the optional features, I thought I'd add a peephole optimizer to go over the codegen's output Intel assembly code and optimize it.
Our compiler is written in Java, and it seems like it's going to be a lot of work to create this peephole optimizer with the Java I've learned so far. Is there some sort of tool I should be using to make this possible? Pattern-matching strings doesn't sound like a good approach in Java.
thanks
Peephole optimization ought to be done on a binary representation of the parse tree, not on text intended as input to the assembler.
Hard to say without looking at the design of your compiler, but typically you would have an intermediate step between generating code and emitting it. For example, you could think of having the output of the code generation phase be e.g. a linked list of instructions, where each instruction object stores the kind of instruction, any arguments, label/branch destinations, and so on. Then each pattern would inspect the current node and its immediate successors (e.g. if (curr.isMov() && curr.next.isPush() && ...) and modify the list accordingly. Then your peephole optimizer starts with the codegen output, runs each pattern on it, and does this over and over until the list stops changing. Then you have a separate phase which just takes this list of instructions and outputs the actual assembly.
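A minimal sketch of that instruction-list idea (the Instruction shape and the single push/pop pattern are made up for illustration):

import java.util.List;

// Simplified instruction object produced by the codegen phase.
class Instruction {
    final String opcode;   // e.g. "mov", "push", "pop"
    final String operand;  // simplified to a single operand

    Instruction(String opcode, String operand) {
        this.opcode = opcode;
        this.operand = operand;
    }
}

class PeepholeOptimizer {
    // Run the patterns over and over until the instruction list stops changing.
    static void optimize(List<Instruction> code) {
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int i = 0; i + 1 < code.size(); i++) {
                Instruction curr = code.get(i);
                Instruction next = code.get(i + 1);
                // Pattern: "push X" immediately followed by "pop X" is a redundant pair.
                if (curr.opcode.equals("push") && next.opcode.equals("pop")
                        && curr.operand.equals(next.operand)) {
                    code.remove(i + 1);
                    code.remove(i);
                    changed = true;
                    break; // rescan the modified list from the start
                }
            }
        }
    }
}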
I definitely wouldn't use strings for this. You might look at lex/yacc and their ilk (e.g. Jack is one for Java, although I haven't used it) to generate an AST of the assembly, then run optimizations on the AST and write out the assembly again … but you do realise this is a hard thing to do, right? :-)
