I am trying to fetch data from a SQL Server database table and display it as the choices of a choice parameter in a Jenkins job's build parameters.
I am trying to figure out how to use the Extensible Choice plugin for this.
The choice provider I used is "System Groovy Choice Parameter":
import groovy.sql.Sql
import com.microsoft.sqlserver.jdbc.SQLServerDriver

def output = []
def configuration = [
    'dbInstance': 'servername',
    'dbPort'    : 0000,
    'dbName'    : 'dbName',
    'dbUser'    : 'dbUser',
    'dbPass'    : 'dbPass'
]

// Open a connection using the Microsoft JDBC driver
def sql = Sql.newInstance(
    "jdbc:sqlserver://${configuration.dbInstance}:${configuration.dbPort};"
        + "databaseName=" + configuration.dbName,
    configuration.dbUser, configuration.dbPass,
    'com.microsoft.sqlserver.jdbc.SQLServerDriver')

// Collect the first column of every row as a choice value
String sqlString = "SELECT * FROM dbTable"
sql.eachRow(sqlString) { row -> output.push(row[0]) }
sql.close()

return output.sort()
Below is the error I see, which I understand is because the JDBC driver is not present. I downloaded the driver from the link below:
https://www.microsoft.com/en-us/download/details.aspx?id=11774
I unzipped it to the location given in the instructions.
I saw that the CLASSPATH variable was missing, so I went ahead and created the environment variable with the path "C:\Program Files\sqljdbc_6.0\enu\sqljdbc.jar".
Error: unable to resolve class com.microsoft.sqlserver.jdbc.SQLServerDriver
How do I make sure that the script runs successfully and returns all the data to Extensible Choice? If there is any other way to do this, I am open to suggestions.
Thank you very much.
To resolve the issue I had to copy the "sqljdbc4.jar" file to "C:\Java\jdk1.8.0_92\jre\lib\ext", since that is a path where Java searches for external jars. Use the version of the file with "4" in the name, as above, as that is the version Jenkins supports.
I am running the following code, but I am getting an error about the name of the Oracle driver class. I have set the CLASSPATH environment variable to the Oracle jar file, but it doesn't work. Could somebody help me? I have no idea what else to do; I really appreciate any help.
This is the error: Caused by: java.sql.SQLException: Cannot load JDBC driver class 'oracle.jdbc.driver.OracleDriver'
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.io.jdbc import ReadFromJdbc
import os

os.environ["JAVA_HOME"] = "/home/jupyter/env/java/jre1.8.0_291"
# Append to PATH rather than replacing it, so other binaries stay reachable
os.environ["PATH"] += os.pathsep + "/home/jupyter/env/java/jre1.8.0_291/bin"
os.environ["ORACLE_HOME"] = "/home/jupyter/env/instantclient_21_1/"
os.environ["CLASSPATH"] = "/home/jupyter/env/ojdbc6-11.2.0.4.jar"

with beam.Pipeline(options=PipelineOptions()) as p:
    result = (p
              | 'Read from jdbc' >> ReadFromJdbc(
                  fetch_size=None,
                  table_name='log_table',
                  driver_class_name='oracle.jdbc.driver.OracleDriver',
                  jdbc_url='jdbc:oracle:thin://xx.x.xxx.xxx:xxxx/xxxxx',
                  username='xxxxxxxx',
                  password='xxxxxxxx',
              ))
Could you please try the following suggestions?
Double-check the driver path location.
Validate the file permission, maybe it is necessary to execute as administrator.
Check that the database is up and running.
Validate port number and credentials.
Check this post, it contains great insights.
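Before running the pipeline, it can help to sanity-check the environment the driver loading depends on. A minimal sketch, assuming the variable names from the question (the check is purely local and intentionally simple):

```python
import os

def check_jdbc_env(env):
    """Return a list of problems with the JDBC-related environment.

    `env` is a mapping like os.environ; only the variables set in the
    question's snippet are inspected.
    """
    problems = []
    classpath = env.get("CLASSPATH", "")
    if not classpath:
        problems.append("CLASSPATH is not set")
    elif not classpath.lower().endswith(".jar"):
        problems.append("CLASSPATH does not point at a .jar file")
    if "JAVA_HOME" not in env:
        problems.append("JAVA_HOME is not set")
    return problems

# Example: CLASSPATH set, but JAVA_HOME forgotten
print(check_jdbc_env({"CLASSPATH": "/home/jupyter/env/ojdbc6-11.2.0.4.jar"}))
```

Running it against `os.environ` in the notebook before building the pipeline makes missing settings obvious early, instead of surfacing as a driver-class load failure inside Beam.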
I used the following command to create a .pb file:
flow --model ../YOLOv2/alexeyAB_darknet/darknet-master/cfg/yolov2-dppedestrian.cfg --load ../YOLOv2/alexeyAB_darknet/darknet-master/backup/yolov2-dppedestrian_33900.weights --savepb
Although the model was created successfully, when I load it into my Java TensorFlow application, I get the following error:
Exception in thread "Thread-9" org.tensorflow.TensorFlowException: Could not find meta graph def matching supplied tags: { serve }. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: saved_model_cli
The problem is in the second line of code:
String model_path = "/home/adisys/Desktop/cloudiV2/models/yolo_pedestrian/saved_model";
SavedModelBundle model = SavedModelBundle.load(model_path, "serve");
I tried digging deep and found this link:
Can not load pb file in tensorflow serving
Following the link I ran the following command:
saved_model_cli show --dir saved_model/
The output is as follows:
/home/adisys/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
  from ._conv import register_converters as _register_converters
The given SavedModel contains the following tag-sets:
As can be seen, no tag-sets were displayed.
What could be the issue?
I just saw your post; I'm sure the problem has solved itself by now, but I'm leaving this comment for others working with darkflow. The --savepb flag needs to be passed as --savepb True.
I have an Angular 2/Node.js application. I am executing a jar file through the Node server.
The jar execution works fine, but it uses the logback.xml present inside the jar file.
Node.js code:
app.get('/report/:parameter1/:parameter2', function(req, res) {
    var fileName = path.join(__dirname, '..', 'javaFile', 'xyz.jar');
    spawn('/usr/bin/java', ['-jar', fileName, parameter1, parameter2, '&'], {
        stdio: ['ignore', out, err],
        detached: true
    }).unref();
    var data = '{response: Success}';
    res.status(200).json(data);
    res.end();
});
I want to refer to a different logback.xml file for the jar execution when running the jar from the UI. So, I tried the code below:
spawn('/usr/bin/java', ['-jar -Dlogback.configurationFile=./logback.xml', fileName, cacheName, cacheType, '&'], {
    stdio: ['ignore', out, err],
    detached: true
}).unref();
But it also didn't work, and threw the error below:
Unrecognized option: -jar -Dlogback.configurationFile=./logback.xml
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
I am new to Node.js. I searched the web but couldn't find an answer.
Is there any way to provide the logback.xml file dynamically in the Node.js code, something like we do in a shell script:
nohup java -jar -Dlogback.configurationFile=./logback.xml xyz.jar
Can anyone provide a solution for this?
The args argument is <string[]>, so you should split multiple arguments into multiple elements of the array, as you've done for the other arguments. You can check the signature of the method here.
Try,
spawn('/usr/bin/java', ['-jar', '-Dlogback.configurationFile=./logback.xml'], ....
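The rule applies to any spawn-style API: each element of the argument array is delivered to the program as exactly one argv entry, so a string containing a space arrives as a single (invalid) option. A small sketch of the distinction, written in Python only for illustration since the argv semantics are identical:

```python
import shlex

# The shell command line you would type in a terminal
command_line = "java -jar -Dlogback.configurationFile=./logback.xml xyz.jar"

# shlex.split breaks it into the argv entries the process actually receives;
# this is the form the spawn args array must take, one option per element.
argv = shlex.split(command_line)
print(argv)

# By contrast, passing "-jar -Dlogback.configurationFile=..." as ONE array
# element hands the JVM a single argv entry containing a space, which it
# rejects with "Unrecognized option".
```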
Environment:
Java API google-api-services-datastore-protobuf v1beta2-rev1-3.0.0.
OS: Windows 7.
Goal:
Start Local Datastore Server using the method:
public void start(String sdkPath, String dataset, String cmdLineOptions)
from com.google.api.services.datastore.client.LocalDevelopmentDatastore.java in order to use it in unit tests.
Steps:
I downloaded the gcd tool gcd-v1beta2-rev1-3.0.2.zip and put it in the C:\gcd folder
(the paths to gcd.cmd and gcd.sh are both C:\gcd).
Also, I set environment variables:
"DATASTORE_HOST"="http://localhost:8080" and
"DATASTORE_DATASET"="myapp".
Problem:
LocalDevelopmentDatastoreException occurs.
Caused by: java.io.IOException: Cannot run program "./gcd.sh" (in directory "C:\gcd"): CreateProcess error=2, The system cannot find the file specified.
Note that it tries to find ./gcd.sh but not gcd.cmd.
Java code:
String datasetName = "myapp";
String hostName = "http://localhost:8080";

DatastoreOptions options = new DatastoreOptions.Builder()
    .host(hostName)
    .dataset(datasetName)
    .build();

LocalDevelopmentDatastoreOptions localOptions = new LocalDevelopmentDatastoreOptions.Builder()
    .addEnvVar("DATASTORE_HOST", hostName)
    .addEnvVar("DATASTORE_DATASET", datasetName)
    .build();

LocalDevelopmentDatastore datastore = LocalDevelopmentDatastoreFactory.get().create(options, localOptions);
datastore.start("C:\\gcd", datasetName);
This code is based on the example from LocalDevelopmentDatastore.java documentation.
Please help.
It seems the method is only programmed to look for gcd.sh; there doesn't appear to be anything in your configuration that could have prevented this failure. I suggest you open a defect report in the Cloud Platform Public Issue Tracker.
Did you consider gcloud-java for using the Datastore?
It also has an option for programmatically starting the local datastore using LocalGcdHelper, which should work on Windows.
I am having trouble using the CREATE FUNCTION command for the Derby database.
To start with I tried
CREATE FUNCTION TO_DEGREES(RADIANS DOUBLE) RETURNS DOUBLE
PARAMETER STYLE JAVA NO SQL LANGUAGE JAVA
EXTERNAL NAME 'java.lang.Math.toDegrees'
and then
SELECT TO_DEGREES(3.142), BILLNO FROM SALEBILL
This works absolutely fine.
Now I tried making my own function like this:
package SQLUtils;

public final class TestClass
{
    public TestClass()
    {
    }

    public static int addNos(int val1, int val2)
    {
        return (val1 + val2);
    }
}
followed by
CREATE FUNCTION addno(no1 int, no2 int) RETURNS int
PARAMETER STYLE JAVA NO SQL LANGUAGE JAVA
EXTERNAL NAME 'SQLUtils.TestClass.addNos'
and then
SELECT addno(3,4), BILLNO FROM SALEBILL
This gives an exception:
Error code -1, SQL state 42X51: The class 'SQLUtils.TestClass' does not exist or is inaccessible. This can happen if the class is not public.
Error code 99999, SQL state XJ001: Java exception: 'SQLUtils.TestClass: java.lang.ClassNotFoundException'.
Line 6, column 1
I have made a jar file of the project containing the above class. I may be wrong, but the conclusion I draw from this is that the jar file needs to be on some classpath. But on which classpath, and how to add it, I am not able to work out.
I tried copying the jar file to the jdk\lib, jre\lib, and jdk\jre\lib folders, but to no avail.
Can someone please point me in the right direction?
I am using NetBeans IDE 7.1.2, JDK 1.7.0_09, and Derby version 10.8.1.2 in network mode. The applications and data are on a server; I access them from NetBeans installed on a client computer.
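One detail worth keeping in mind for network mode: the function executes inside the *server* JVM, so the jar must be visible to the server's classpath, not the client's. A minimal sketch of assembling the server launch command with the jar included — written in Python just to build the command line, and with all jar paths hypothetical:

```python
import os

# Hypothetical locations of the Derby jars and of the jar that
# contains SQLUtils.TestClass
derby_lib = r"C:\derby\lib"
udf_jar = r"C:\myapp\SQLUtils.jar"

# Join the entries with the platform's classpath separator
classpath = os.pathsep.join([
    os.path.join(derby_lib, "derby.jar"),
    os.path.join(derby_lib, "derbynet.jar"),
    udf_jar,  # makes SQLUtils.TestClass loadable by the server JVM
])

# Command to start the Derby Network Server with that classpath
cmd = ["java", "-cp", classpath,
       "org.apache.derby.drda.NetworkServerControl", "start"]
print(cmd)
```

Restarting the network server with a command built like this (rather than copying the jar into JDK folders) is what puts the class where CREATE FUNCTION can find it.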