My Linux-based C++ server (a central document repository) uses git to manage the files it receives from clients. Git is used by executing standard git shell commands from the server application.
Now I am developing a Java client that is intended to be used on Windows machines.
The client's network only accepts email (it is heavily firewalled), so the server generates a diff patch file that must be applied on the client's side.
I'm not really a Windows user, so this whole Git Bash thing confuses me. How do I execute git commands from a Windows application? Something similar to system("git add ."), but for Java and with security and error checking.
I read that there are git libraries, but I failed to find out whether they support applying patch files.
I am not sure I understand you completely, but if you are looking for a way to drive git from Java, I would suggest having a look at JGit. It's the Java implementation of git that is used, for example, within the Eclipse IDE.
JGit also offers the ApplyCommand, which suits your need to apply patch files.
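For illustration, here is a minimal sketch of applying a patch with JGit's ApplyCommand; the repository and patch paths are placeholders you would replace with your own:

import org.eclipse.jgit.api.Git;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;

public class PatchApplier {
    public static void main(String[] args) throws Exception {
        // Open the existing repository (the directory containing .git)
        try (Git git = Git.open(new File("C:/repos/documents"))) {
            // Stream the patch file that arrived by email and apply it
            try (InputStream patch = new FileInputStream("C:/inbox/update.patch")) {
                git.apply().setPatch(patch).call();
            }
        }
    }
}

Because JGit runs in-process, you also get real exceptions for error handling instead of having to parse the exit code and output of a git.exe child process.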
There are several tutorials to be found across the web.
I am working on an enterprise product, and primarily there are three pieces to it: a Swing-based client, a DB, and a server (for now we can ignore the DB part). Being an enterprise product, the client and server come with their own installers (it is not like configuring Apache or JBoss and deploying WARs on it).
We have CI configured to generate nightly OS-specific builds of the client and server, which can then be installed.
So we have to test these builds regularly on specific OSes, which requires a lot of manual work: installing and creating a system with client version X on OS Y, or server version X on OS Y. This is becoming very tedious, since we are all on Windows and clicking Next -> Next really sucks. (I have created a script which installs our product via the shell, but there are still steps which I believe can be automated; I just don't know how.) We also need isolation.
Now I am thinking about how to automate the process of creating these test machines. I have just started exploring Vagrant and Docker to see if they can be helpful to me (I still don't understand Puppet/Chef, though), and I am confused about which strategy I should adopt:
1. Create a VM via Vagrant and run my installation script on that box (this will require one VM per client or per server).
2. Create a VM via Vagrant and run my client Docker containers on it (this, I guess, will require only one VM for multiple clients or servers, since they would be in containers).
Note: I have to create a VM either way, since we are on Windows (either via Vagrant or via boot2docker).
So my questions are:
1. If these two strategies are valid and not wrong, which of the two should I adopt?
2. Are there any other strategies that I am missing, or am I approaching this the right way?
3. If strategy #2 is to be adopted, how can I create container/docker images in which my client is installed?
how can I create container/docker images in which my client is installed
You must put into a Dockerfile everything you do in order to have your client installed, started, and configured.
To do so, you can either create a container, do all the work inside it, and then docker commit, or (the better way) put all the required commands in a Dockerfile, so that when you make a slight modification you can easily build a new version with a basic docker build -t myclient_version_n .
Check the docs
https://docs.docker.com/examples/mongodb/#creating-a-dockerfile-for-mongodb
and how to automate builds
http://docs.docker.com/docker-hub/builds/#automated-builds
how to create a Dockerfile
https://docs.docker.com/examples/nodejs_web_app/#creating-a-dockerfile
and have a look at the existing Dockerfiles of containerized applications on the Docker Hub
https://registry.hub.docker.com/
An alternative to Vagrant would be to use Docker Machine. You could also leverage the cloud providers, as @m1keil mentioned. Machine can provision Docker hosts on a number of providers, and they are ready to go.
Disclosure: I work at Docker and am the maintainer of Machine :)
Your strategies seem valid to me. Adding containers (Docker) to your process might help you speed up and parallelize the testing process (if the testing is fully automated), since a container's initialization time and general resource consumption are lower. However, nobody can give you a definitive answer without inspecting your testing process first, and since you haven't provided any details about it, it is hard to tell whether you should use the first or the second strategy.
You can take advantage of the cloud and use services such as AWS, Azure, GCE, etc. to initialize machines and run your tests. You can use Vagrant to do this, or skip Vagrant and create your own simple scripts using the appropriate APIs of your chosen cloud provider.
Also, you can take a look at services such as Travis CI, CircleCI, and others, which might help you create an automated testing pipeline without the need to spend too much time on the plumbing.
I really like Docker's ease of use via the Dockerfile. The Dockerfile lets you very easily update and control the software in the Docker image, and then you can provision it in your CI/testing environment. Docker now has native Windows support, so this shouldn't prevent you from using it: https://docs.docker.com/docker-for-windows/ Furthermore, I like that you can set up very lightweight, minimal machines, with only the build and runtime dependencies needed for your project, and store them for free on hub.docker.com. Depending on how long it takes to build and install certain dependencies, this can speed up your testing, because you can just download a Docker image with everything already installed and built, and then build and test only your actual project.
I use this for https://github.com/sourceryinstitute/opencoarrays, which is GCC's official implementation of Coarray Fortran. I have a little project, https://github.com/zbeekman/nightly-docker-rebuild, that lets you set up nightly Docker image builds on hub.docker.com in under two minutes. I use it to trigger builds of https://github.com/zbeekman/nightly-gcc-trunk-docker-image, because I can't rebuild GCC from source on Travis-CI.org without the build timing out. This way I delegate the nightly GCC build to hub.docker.com and then just docker pull zbeekman/nightly-gcc-trunk-docker-image into a Travis CI instance to test OpenCoarrays against the latest GCC trunk.
I have some code that uploads and downloads files using AWS S3 (via the Java AWS SDK). I want to be able to write some tests for it, and I was wondering if anyone has any good options. Ideally I would like a lightweight S3 server that runs locally, can be started quickly, and requires no system configuration (the tests need to be run by Jenkins).
Some options I have looked at so far:
FakeS3 - Almost exactly what I'm looking for. However, when using the Java AWS SDK, you must edit your /etc/hosts file and restart networking, which is not something I can do in Jenkins. Also, when trying it out, there seems to be a bug with the creation-date field being formatted incorrectly, which makes my client throw an exception; that doesn't inspire me with much confidence in the project.
Ceph - Implements the S3 API, but takes several minutes to install.
You can try localstack, which is an open-source local AWS cloud stack made for testing. It provides implementations of several AWS services, including S3.
It looks like a very popular open source project on GitHub.
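As a hedged sketch of how a test might talk to it with the Java AWS SDK (v1): the endpoint below assumes localstack's default edge port 4566, and the credentials are dummies, since localstack accepts anything:

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class LocalS3Smoke {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                // Point the client at the local server instead of AWS
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "http://localhost:4566", "us-east-1"))
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials("test", "test")))
                // Path-style requests avoid the DNS tricks (/etc/hosts edits)
                // that virtual-host-style bucket addressing would need
                .withPathStyleAccessEnabled(true)
                .build();

        s3.createBucket("test-bucket");
        s3.putObject("test-bucket", "hello.txt", "hello world");
        System.out.println(s3.getObjectAsString("test-bucket", "hello.txt"));
    }
}

Because only the endpoint configuration changes, the same production code paths are exercised in the tests.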
You can try installing the minio server on your laptop/system; it is open source and a single static binary, and the server is S3 compatible. You can then use the minio-java client library for all operations; the basic installation steps are below, followed by a basic operations example.
Installing minio server [GNU/Linux]
$ wget https://dl.minio.io/server/minio/release/linux-amd64/minio
$ chmod 755 minio
$ ./minio --help
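And a hedged sketch of the basic operations with the minio-java client library (7.x-style API); the endpoint, the minioadmin credentials, and the file path are local-default assumptions, so adjust them to whatever your server prints at startup:

import io.minio.MakeBucketArgs;
import io.minio.MinioClient;
import io.minio.UploadObjectArgs;

public class MinioSmoke {
    public static void main(String[] args) throws Exception {
        MinioClient client = MinioClient.builder()
                .endpoint("http://localhost:9000")       // local minio server
                .credentials("minioadmin", "minioadmin") // default dev credentials
                .build();

        // Create a bucket and upload a local file into it
        client.makeBucket(MakeBucketArgs.builder().bucket("test-bucket").build());
        client.uploadObject(UploadObjectArgs.builder()
                .bucket("test-bucket")
                .object("hello.txt")
                .filename("/tmp/hello.txt")
                .build());
    }
}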
Hope it helps
Disclaimer: I work for Minio
Late answer, useful mostly for Docker users. There's a great S3-compatible storage product called Riak CS, and there's a docker-riak-cs image that lets you launch the server quickly.
I've been using it for nearly two years for local development and integration testing, with great success. It has some limitations, but nothing major that gets in the way; see the API/compatibility documentation.
If you need a Docker-less solution, you can set it up locally for each build; all setup and configuration scripts are available in the docker-riak-cs repository.
Minio offers (in my opinion) the best set of features, flexibility and ease of use.
It is available as a Docker container or as a binary for the major OSes.
Getting started with minio is as easy as:
1. Download the binary.
2. Start it: minio server /data
3. Use it.
It works flawlessly with s3cmd, and it has nice documentation for popular programming languages.
I started an S3 server API project for Ladon; it contains a simple file-system repository. It's a Java project and contains a Spring Boot starter for simple testing. Not all S3 API features are supported yet, but I will add them on request. It's on GitHub: Ladon S3 Server
findify/s3mock - an in-process Java S3 server aimed at testing. I didn't test it; I just stumbled upon it. It needs no Docker, which might be an advantage. HTH! :)
I've tried both minio and localstack, and the problem with localstack is that the storage in the S3 bucket is not persistent; I think it supports persistence only in the pro version. minio was very easy to use, and it is persistent for free.
I created different buckets to use for the different use cases, for example my-dev-bucket and my-prod-bucket. I don't know if this meets your criteria, but you might want to consider it. A side benefit is that it makes your pre-production and production code follow exactly the same flows.
I am following this guide here:
http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/UsingTheCommandLineTools.html
I don't have a JRE, although I have Java 7 set up (I develop in Java). So I believe I am stuck on this step in the tutorial: export JAVA_HOME=/usr/local/jre (but my /usr/local does not contain a JRE).
See information here:
lucas#lucas-ThinkPad-W520:~$ ls /usr/local/
bin etc games include lib man sbin share src
lucas#lucas-ThinkPad-W520:~$ which java
/usr/bin/java
lucas#lucas-ThinkPad-W520:~$ which jre
lucas#lucas-ThinkPad-W520:~$
Should I install a JRE separately, or is there a way to configure my system to work with these auto-scaling tools?
You should not use as-cmd anymore. Please use the AWS CLI instead; here are the relevant AWS CLI autoscaling commands.
The reason is that as-cmd is no longer maintained by Amazon, and all the old CLI features have been ported over to the AWS CLI. The AWS CLI is a one-stop shop for all AWS services, unlike the older CLIs, where you had to install a separate CLI for each service.
as-cmd is Java based, hence your question. The AWS CLI, however, is Python based, and in my opinion (which is of course subjective) it is a bit faster than the older Java-based AWS CLIs.
The AWS CLI provides output in JSON format, which is much easier to parse.
Besides, you don't have to play with a CLI for autoscaling at all; you can now do the same job via the AWS Console.
Your best bet is to download the JRE.
http://www.oracle.com/technetwork/java/javase/downloads/index.html
It's at the bottom right of that page. It's a small download and should finish within seconds. Hope this helps.
I have to make a tool for automated distribution of Java code. Basically, I have a repository with compiled files and about 50 locations to distribute the same code to.
Does anyone know of an open-source tool which can help me with this process?
If you are talking about easy deployment of Java applications, use JNLP. The only thing the user has to do in this case is browse to a URL.
If you wish to do it without any user participation, I believe the solution depends on the target platform:
Use SSH for Unix platforms.
Use WMI or telnet for Windows platforms.
To make the solution more portable, you can run
wget THE-JNLP-URL
on the target machine, using SSH for Unix-like platforms.
I do not know of a built-in command like wget for Windows, but you can implement one in VBS or JS and then invoke the script using cscript over WMI or telnet.
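Alternatively, since a JVM is already present on your target machines, a tiny Java downloader can stand in for wget on Windows. This is just an illustrative sketch (the class name and argument layout are made up):

import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class Fetch {
    // Usage: java Fetch <url> <destination-file>
    public static void main(String[] args) throws Exception {
        URL source = new URL(args[0]);
        Path destination = Paths.get(args[1]);
        try (InputStream in = source.openStream()) {
            // Overwrite any previously distributed copy
            Files.copy(in, destination, StandardCopyOption.REPLACE_EXISTING);
        }
    }
}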
Good luck.
Either you can distribute it with rsync, or you can use Java Web Start to let the user's JVM download and invoke the software as needed. For Windows-based clients this is usually the easiest, especially when you want people to update to a newer version.
I am developing a web service in GWT which needs to be able to read and write files on the server.
Initially I was just going to dedicate a directory on the server which would be accessed via the GWT server side. However, as this is deployed to Tomcat, I am unsure of the problems that could arise, or whether it is even possible.
I would like the GWT application's server side to have access to a Subversion server: files generated on the fly on the GWT client side are sent to the server, where the file is created and committed to Subversion. Therefore, should someone want this file (which is a configuration file), they can get access to it again by checking it out, etc.
Is this possible? Subversion sounds like the ideal solution; however, I am unsure of the problems.
JavaHL is an official part of the Subversion project.
Here is a page describing the basic difference between JavaHL and SVNKit: http://help.collab.net/index.jsp?topic=/org.tigris.subclipse.doc/topics/faq_subclipse.html (click "What is an adapter? What is JavaHL?")
There are several Java libraries that provide an API to Subversion servers. Several years ago I used one, but I can't recall its name; SVNKit, however, is a popular one.
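For a flavor of what committing a generated file from your server side could look like with SVNKit, here is a hedged sketch (the repository URL, credentials, and paths are placeholders; verify the method signatures against the SVNKit version you use):

import java.io.File;
import org.tmatesoft.svn.core.SVNDepth;
import org.tmatesoft.svn.core.SVNURL;
import org.tmatesoft.svn.core.internal.io.dav.DAVRepositoryFactory;
import org.tmatesoft.svn.core.internal.wc.DefaultSVNOptions;
import org.tmatesoft.svn.core.wc.SVNClientManager;

public class ConfigCommitter {
    public static void main(String[] args) throws Exception {
        // Register the http:// protocol (other factories cover svn:// and file://)
        DAVRepositoryFactory.setup();

        SVNClientManager manager = SVNClientManager.newInstance(
                new DefaultSVNOptions(), "svnuser", "secret");

        // Import the freshly generated configuration file into the repository
        File generated = new File("/tmp/generated-config.xml");
        SVNURL target = SVNURL.parseURIEncoded(
                "http://svn.example.com/repos/configs/generated-config.xml");
        manager.getCommitClient().doImport(
                generated, target,
                "Commit generated configuration", // commit message
                null,                             // no extra revision properties
                true,                             // use global ignores
                false,                            // do not ignore unknown node types
                SVNDepth.INFINITY);
    }
}

Clients who want the file back can then simply check it out, which matches the workflow you describe.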