Java insert into MongoDB not working - java

I followed several different tutorials for this, but every time more or less nothing happens. Since I had problems with a "ClassNotFoundException", I used the MongoDB driver JAR file suggested in this question:
Stackoverflow Topic
I have a very simple Java project with a class Test whose main method connects to my database "local" and to the collection "Countries". According to several tutorials, the data should be inserted as defined in the code, but when I check the collection on the command line or in Studio 3T it is still empty. There are some unused imports left over from earlier tests.
import org.bson.Document;

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.Mongo;
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;

public class Test {

    public static void main(String[] args) {
        try {
            MongoClient connection = new MongoClient("localhost", 27017);
            DB db = connection.getDB("local");
            DBCollection coll = db.getCollection("Countries");
            BasicDBObject doc = new BasicDBObject("title", "MongoDB")
                    .append("name", "Germany")
                    .append("population", "82 000 000");
            coll.insert(doc);
            System.out.print("Test");
        } catch (Exception e) {
            System.out.print(e);
            System.out.print("Test");
        }
    }
}
The output is the following:
Usage : [--bucket bucketname] action
where action is one of:
list : lists all files in the store
put filename : puts the file filename into the store
get filename1 filename2 : gets filename1 from store and sends to filename2
md5 filename : does an md5 hash on a file in the db (for testing)
I don't get why the insert is not working and, in addition, why the output of the System.out.print calls never shows up. The getDB method is also struck through in Eclipse with the warning "The method getDB(String) from the type Mongo is deprecated", which I do not really understand. I hope someone can help me get the code working. Mongod.exe is running in the background.
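For reference, a minimal sketch of the same insert using the non-deprecated com.mongodb.client API that the warning points to (host, database, collection, and field values are taken from the question; this only illustrates the newer API and is not a diagnosis of why the original program prints that usage text):

import org.bson.Document;

import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;

public class TestNewApi {

    public static void main(String[] args) {
        MongoClient client = new MongoClient("localhost", 27017);
        try {
            MongoDatabase db = client.getDatabase("local");
            MongoCollection<Document> countries = db.getCollection("Countries");

            Document doc = new Document("title", "MongoDB")
                    .append("name", "Germany")
                    .append("population", "82 000 000");

            countries.insertOne(doc);
            System.out.println("Inserted: " + doc.toJson());
        } finally {
            // always release the client's connection pool
            client.close();
        }
    }
}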

Related

how to batch insert data into Google BigQuery from a Java service?

I've read through a few similar questions on SO and GCP docs - but did not get a definitive answer...
Is there a way to batch insert data from my Java service into BigQuery directly, without using intermediary files, PubSub, or other Google services?
The key here is the "batch" mode: I do not want to use the streaming API, as it costs a lot.
I know there are other ways to do batch inserts using Dataflow, Google Cloud Storage, etc. - I am not interested in those; I need to do batch inserts programmatically for my use case.
I was hoping to use the REST batch API but it looks like it is deprecated now: https://cloud.google.com/bigquery/batch
Alternatives that are pointed to by the docs are:
the https://cloud.google.com/bigquery/docs/reference/rest/v2/tabledata/insertAll REST request - but it looks like it works in streaming mode, inserting one row at a time (and costing a lot)
a Java client library: https://developers.google.com/api-client-library/java/google-api-java-client/dev-guide
After following the links and references, I ended up finding this specific API method, which looks promising: https://googleapis.dev/java/google-api-client/latest/index.html?com/google/api/client/googleapis/batch/BatchRequest.html
with the following usage pattern:
Create a BatchRequest object from this Google API client instance.
Sample usage:
client.batch(httpRequestInitializer)
.queue(...)
.queue(...)
.execute();
Is this API using batch mode rather than streaming, and is it the right way to go?
thank you!
The "batch" version of writing data is called a "load job" in the Java client library. The bigquery.writer method creates an object which can be used to write data bytes as a batch load job. Set the format options based on the type of file you'd like to serialize to.
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobId;
import com.google.cloud.bigquery.JobStatistics.LoadStatistics;
import com.google.cloud.bigquery.TableDataWriteChannel;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.WriteChannelConfiguration;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.channels.Channels;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

public class LoadLocalFile {

  public static void main(String[] args) throws IOException, InterruptedException {
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    Path csvPath = FileSystems.getDefault().getPath(".", "my-data.csv");
    loadLocalFile(datasetName, tableName, csvPath, FormatOptions.csv());
  }

  public static void loadLocalFile(
      String datasetName, String tableName, Path csvPath, FormatOptions formatOptions)
      throws IOException, InterruptedException {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      TableId tableId = TableId.of(datasetName, tableName);
      WriteChannelConfiguration writeChannelConfiguration =
          WriteChannelConfiguration.newBuilder(tableId).setFormatOptions(formatOptions).build();

      // The location and JobName must be specified; other fields can be auto-detected.
      String jobName = "jobId_" + UUID.randomUUID().toString();
      JobId jobId = JobId.newBuilder().setLocation("us").setJob(jobName).build();

      // Imports a local file into a table.
      try (TableDataWriteChannel writer = bigquery.writer(jobId, writeChannelConfiguration);
          OutputStream stream = Channels.newOutputStream(writer)) {
        // This example writes CSV data from a local file,
        // but bytes can also be written in batch from memory.
        // In addition to CSV, other formats such as
        // Newline-Delimited JSON (https://jsonlines.org/) are
        // supported.
        Files.copy(csvPath, stream);
      }

      // Get the Job created by the TableDataWriteChannel and wait for it to complete.
      Job job = bigquery.getJob(jobId);
      Job completedJob = job.waitFor();
      if (completedJob == null) {
        System.out.println("Job not executed since it no longer exists.");
        return;
      } else if (completedJob.getStatus().getError() != null) {
        System.out.println(
            "BigQuery was unable to load local file to the table due to an error: \n"
                + completedJob.getStatus().getError());
        return;
      }

      // Get output status from the completed job.
      LoadStatistics stats = completedJob.getStatistics();
      System.out.printf("Successfully loaded %d rows. \n", stats.getOutputRows());
    } catch (BigQueryException e) {
      System.out.println("Local file not loaded. \n" + e.toString());
    }
  }
}
Resources:
https://cloud.google.com/bigquery/docs/batch-loading-data#loading_data_from_local_files
https://cloud.google.com/bigquery/docs/samples/bigquery-load-from-file
system test which writes JSON from memory
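Building on the "bytes can also be written in batch from memory" comment in the sample above, here is a rough sketch of feeding newline-delimited JSON built in memory to the same TableDataWriteChannel; the dataset, table, and JSON payload are placeholders, not values from the question:

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FormatOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobId;
import com.google.cloud.bigquery.TableDataWriteChannel;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.WriteChannelConfiguration;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class LoadFromMemory {

  public static void main(String[] args) throws IOException, InterruptedException {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
    TableId tableId = TableId.of("MY_DATASET_NAME", "MY_TABLE_NAME");

    // Configure the load job for newline-delimited JSON instead of CSV.
    WriteChannelConfiguration config =
        WriteChannelConfiguration.newBuilder(tableId)
            .setFormatOptions(FormatOptions.json())
            .build();
    JobId jobId =
        JobId.newBuilder().setLocation("us").setJob("jobId_" + UUID.randomUUID()).build();

    // One JSON object per line, built in memory rather than read from a file.
    String ndjson =
        "{\"name\":\"Germany\",\"population\":82000000}\n"
            + "{\"name\":\"France\",\"population\":67000000}\n";

    // Closing the channel finalizes the upload and starts the batch load job.
    try (TableDataWriteChannel writer = bigquery.writer(jobId, config)) {
      writer.write(ByteBuffer.wrap(ndjson.getBytes(StandardCharsets.UTF_8)));
    }

    Job completedJob = bigquery.getJob(jobId).waitFor();
    if (completedJob != null && completedJob.getStatus().getError() == null) {
      System.out.println("Rows loaded from memory as a batch load job.");
    }
  }
}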

Avro Schema for GenericRecord: Be able to leave blank fields

I'm using Java to convert JSON to Avro and store these to GCS using Google DataFlow.
The Avro schema is created on runtime using SchemaBuilder.
One of the fields I define in the schema is an optional LONG field; it is defined like this:
SchemaBuilder.FieldAssembler<Schema> fields = SchemaBuilder.record(mainName).fields();
Schema concreteType = SchemaBuilder.nullable().longType();
fields.name("key1").type(concreteType).noDefault();
Now when I create a GenericRecord using the schema above and "key1" is not set, then when I put the resulting GenericRecord into the context of my DoFn (context.output(res);) I get the following error:
Exception in thread "main" org.apache.beam.sdk.Pipeline$PipelineExecutionException: org.apache.avro.UnresolvedUnionException: Not in union ["long","null"]: 256
I also tried doing the same thing with withDefault(0L) and got the same result.
What am I missing?
Thanks
It works fine for me when I define the field as below. You can print the schema to help compare it with yours, and you can also remove the nullable() on the long type and try this instead:
fields.name("key1").type().nullable().longType().longDefault(0);
Here is the complete code that I used to test:
import org.apache.avro.AvroRuntimeException;
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.SchemaBuilder.FieldAssembler;
import org.apache.avro.SchemaBuilder.RecordBuilder;
import org.apache.avro.file.DataFileReader;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData.Record;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.generic.GenericRecordBuilder;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.DatumWriter;

import java.io.File;
import java.io.IOException;

public class GenericRecordExample {

    public static void main(String[] args) {
        FieldAssembler<Schema> fields;
        RecordBuilder<Schema> record = SchemaBuilder.record("Customer");
        fields = record.namespace("com.example").fields();
        fields = fields.name("first_name").type().nullable().stringType().noDefault();
        fields = fields.name("last_name").type().nullable().stringType().noDefault();
        fields = fields.name("account_number").type().nullable().longType().longDefault(0);
        Schema schema = fields.endRecord();
        System.out.println(schema.toString());

        // we build our first customer
        GenericRecordBuilder customerBuilder = new GenericRecordBuilder(schema);
        customerBuilder.set("first_name", "John");
        customerBuilder.set("last_name", "Doe");
        customerBuilder.set("account_number", 999333444111L);
        Record myCustomer = customerBuilder.build();
        System.out.println(myCustomer);

        // writing to a file
        final DatumWriter<GenericRecord> datumWriter = new GenericDatumWriter<>(schema);
        try (DataFileWriter<GenericRecord> dataFileWriter = new DataFileWriter<>(datumWriter)) {
            dataFileWriter.create(myCustomer.getSchema(), new File("customer-generic.avro"));
            dataFileWriter.append(myCustomer);
            System.out.println("Written customer-generic.avro");
        } catch (IOException e) {
            System.out.println("Couldn't write file");
            e.printStackTrace();
        }

        // reading from a file
        final File file = new File("customer-generic.avro");
        final DatumReader<GenericRecord> datumReader = new GenericDatumReader<>();
        GenericRecord customerRead;
        try (DataFileReader<GenericRecord> dataFileReader = new DataFileReader<>(file, datumReader)) {
            customerRead = dataFileReader.next();
            System.out.println("Successfully read avro file");
            System.out.println(customerRead.toString());

            // get the data from the generic record
            System.out.println("First name: " + customerRead.get("first_name"));

            // read a non existent field
            System.out.println("Non existent field: " + customerRead.get("not_here"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
If I understand your question correctly, you're trying to accept JSON strings and save them in a Cloud Storage bucket, using Avro as your coder for the data as it moves through Dataflow. There's nothing immediately obvious from your code that looks wrong to me. I have done this, including saving the data to Cloud Storage and to BigQuery.
You might consider a simpler, and probably less error-prone, approach: define a Java class for your data and use Avro annotations on it so that the coder works properly. Here's an example:
import org.apache.avro.reflect.Nullable;
import org.apache.beam.sdk.coders.AvroCoder;
import org.apache.beam.sdk.coders.DefaultCoder;
@DefaultCoder(AvroCoder.class)
public class Data {
    public long nonNullableValue;
    @Nullable public Long nullableValue;
}
Then use this type in your DoFn implementations as you likely already do. Beam should be able to move the data between workers properly using Avro, even when the fields marked @Nullable are null.
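A minimal sketch of how such a class might flow through a pipeline, using the Data class above (the JSON parsing is stubbed out and all other names are made up for illustration; this is an assumption about your pipeline shape, not your actual code):

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;

public class DataPipelineSketch {

    // Hypothetical DoFn: turns a JSON string into the Avro-coded Data class.
    static class JsonToData extends DoFn<String, Data> {
        @ProcessElement
        public void processElement(ProcessContext c) {
            Data d = new Data();
            d.nonNullableValue = 42L;   // would be parsed from c.element() in a real pipeline
            d.nullableValue = null;     // optional field left unset
            c.output(d);                // AvroCoder handles the null via @Nullable
        }
    }

    public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());
        PCollection<Data> data =
                p.apply(Create.of("{\"nonNullableValue\":42}"))
                 .apply(ParDo.of(new JsonToData()));
        // ...write `data` to GCS/BigQuery as in your existing pipeline...
        p.run().waitUntilFinish();
    }
}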

Azure Functions using Java - How to create @BlobTrigger

I need to create an Azure Functions BlobTrigger using Java to monitor my storage container for new and updated blobs.
I tried the code below:
import java.util.*;
import com.microsoft.azure.serverless.functions.annotation.*;
import com.microsoft.azure.serverless.functions.*;
import java.nio.file.*;
import java.io.*;
import java.net.URL;
import java.net.URLConnection;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import com.microsoft.azure.storage.*;
import com.microsoft.azure.storage.blob.*;

@FunctionName("testblobtrigger")
public String testblobtrigger(@BlobTrigger(name = "test", path = "testcontainer/{name}") String content) {
    try {
        return String.format("Blob content : %s!", content);
    } catch (Exception e) {
        // Output the stack trace.
        e.printStackTrace();
        return "Access Error!";
    }
}
When executed, it shows the error:
Storage binding (blob/queue/table) must have non-empty connection. Invalid storage binding found on method:
It works when a connection string is added:
public String kafkablobtrigger(@BlobTrigger(name = "test", path = "testjavablobstorage/{name}", connection = storageConnectionString) String content) {
Why do I need to add a connection string when using BlobTrigger?
In C# it works without a connection string:
public static void ProcessBlobContainer1([BlobTrigger("container1/{blobName}")] CloudBlockBlob blob, string blobName)
{
ProcessBlob("container1", blobName, blob);
}
I didn't see any Java sample for Azure Functions with @BlobTrigger.
After all, a connection is necessary for the trigger to identify where the container is located.
After testing, I find @Mikhail is right.
For C#, the default value (in local.settings.json or in the portal's application settings) is used if connection is omitted. Unfortunately, there is no equivalent setting for Java.
You can add @StorageAccount("YourStorageConnection") below your @FunctionName, as it's another valid way to configure this. The value of YourStorageConnection in local.settings.json or in the portal's application settings is up to you; see the sketch below.
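A minimal sketch of that placement, reusing the question's function and imports ("YourStorageConnection" is whatever app setting name you define, and whether @StorageAccount is available depends on your functions library version):

@FunctionName("testblobtrigger")
@StorageAccount("YourStorageConnection")   // app setting that holds the storage connection string
public String testblobtrigger(
        @BlobTrigger(name = "test", path = "testcontainer/{name}") String content) {
    return String.format("Blob content : %s!", content);
}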
You can follow this tutorial and use mvn azure-functions:add to find the four (Http/Blob/Queue/Timer) trigger templates for Java.

Best way to connect java swing app to ms access db on a shareserver?

Currently I'm developing a Java Swing application that I'd like to serve as the GUI for CRUD operations on an MS Access database. Right now, everyone on the team that will be using this application updates a spreadsheet on a shareserver. They'd like to switch over to a UI that better suits their purposes and transition the spreadsheet to a database.
I'm planning on putting an executable JAR and the MS Access database file on the shareserver; this is where the JAR will be accessed.
I don't want users to have to mess with ODBC settings. Is there a library that can help with this?
UPDATE: Shailendrasingh Patil's suggestion below worked best for me. It took a little bit of research and the setup was a bit confusing, but I eventually got everything working the way I was hoping. I used Gradle to pull in the necessary dependencies for UCanAccess.
The following is a snippet from my DatabaseController class:
import javax.swing.*;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DatabaseController {

    public DatabaseController() {}

    public void addOperation(String date, String email, String subject, String body) {
        try {
            Connection con = DriverManager.getConnection(
                    "jdbc:ucanaccess://C:\\Users\\user\\Desktop\\TestDatabase.accdb;jackcessOpener=CryptCodecOpener",
                    "user", "password");
            String sql = "INSERT INTO Email (Date_Received, Email_Address, Subject, Message) Values " +
                    "('" + date + "'," +
                    "'" + email + "'," +
                    "'" + subject + "'," +
                    "'" + body + "')";
            Statement statement = con.createStatement();
            statement.execute(sql);
        } catch (Exception e) {
            JOptionPane.showMessageDialog(null, e.getMessage(), "Error",
                    JOptionPane.ERROR_MESSAGE);
            e.printStackTrace();
        }
    }
}
The following class is also required:
import java.io.File;
import java.io.IOException;

import com.healthmarketscience.jackcess.CryptCodecProvider;
import com.healthmarketscience.jackcess.Database;
import com.healthmarketscience.jackcess.DatabaseBuilder;

import net.ucanaccess.jdbc.JackcessOpenerInterface;

public class CryptCodecOpener implements JackcessOpenerInterface {

    public Database open(File fl, String pwd) throws IOException {
        DatabaseBuilder dbd = new DatabaseBuilder(fl);
        dbd.setAutoSync(false);
        dbd.setCodecProvider(new CryptCodecProvider(pwd));
        dbd.setReadOnly(false);
        return dbd.open();
    }
}
You should use the UCanAccess driver to connect to MS Access. It is pure JDBC based, so you don't need ODBC drivers.
Refer to the examples here.
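For illustration, a minimal sketch of a UCanAccess connection that uses a PreparedStatement instead of string concatenation; the JDBC URL, credentials, table, and column names are taken from the question's snippet and would need to match your own database:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class UcanaccessSketch {

    public static void addEmail(String date, String email, String subject, String body) {
        String url = "jdbc:ucanaccess://C:\\Users\\user\\Desktop\\TestDatabase.accdb;jackcessOpener=CryptCodecOpener";
        String sql = "INSERT INTO Email (Date_Received, Email_Address, Subject, Message) VALUES (?, ?, ?, ?)";

        // try-with-resources closes the connection and statement even on failure
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, date);
            ps.setString(2, email);
            ps.setString(3, subject);
            ps.setString(4, body);
            ps.executeUpdate();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}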

Duplicate Key Error Index : insertOne MongoDB

I'm getting the error:
duplicate key error index: my.own.$_id_ dup key: { : ObjectId('57d2c4857c137b20e40c633f')
This ObjectId is from the first insertOne(), but the second insertOne() command fails. Can anybody help me with this?
I'm just learning the MongoDB Java driver.
import com.mongodb.MongoClient;
import com.mongodb.MongoCredential;
import com.mongodb.ServerAddress;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import com.sun.org.apache.xml.internal.security.utils.HelperNodeList;
import org.bson.Document;

import java.util.Arrays;

import static com.mongodb.MongoCredential.*;

public class Main {

    public static void main(String[] args) {
        // Creating Credential Parameters
        // MongoCredential credential = createScramSha1Credential("root", "my", "root".toCharArray());

        // MongoClient to connect
        MongoClient mongo = new MongoClient();
        MongoDatabase database = mongo.getDatabase("my");
        MongoCollection<Document> collection = database.getCollection("own");
        Document document = new Document("x", 1).append("y", 3);
        collection.insertOne(document);
        collection.insertOne(document.append("z", 3));
    }
}
The first insertOne() inserts the document and, as a side effect, the driver adds the generated _id to that same Document object. The second insertOne() then tries to re-insert a document with the same _id, which is why you get the duplicate key error.
If you want to modify the document you already inserted, use updateOne() instead of a second insert, with a filter and an update (com.mongodb.client.model.Filters and Updates):
collection.updateOne(Filters.eq("_id", document.get("_id")), Updates.set("z", 3));
If you want to insert a second, separate document into the same collection, remove the _id (or build a new Document) before the second insert:
document.append("z", 3);
document.remove("_id");
collection.insertOne(document);
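A minimal self-contained sketch of both options, reusing the database, collection, and field values from the question:

import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import org.bson.Document;

public class InsertVsUpdate {

    public static void main(String[] args) {
        MongoClient mongo = new MongoClient();
        MongoCollection<Document> collection = mongo.getDatabase("my").getCollection("own");

        Document document = new Document("x", 1).append("y", 3);
        collection.insertOne(document); // driver adds a generated _id to "document"

        // Option 1: insert a second, independent document (new object, new _id)
        Document second = new Document("x", 1).append("y", 3).append("z", 3);
        collection.insertOne(second);

        // Option 2: update the document that was already inserted
        collection.updateOne(Filters.eq("_id", document.get("_id")), Updates.set("z", 3));

        mongo.close();
    }
}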
