I am using HBase client 2.1.7 to connect to my server (which runs the same version, 2.1.7).
<groupId>org.apache.hbase</groupId>
<artifactId>hbase-client</artifactId>
<version>2.1.7</version>
Now there is a user who has permission to read/write on the table on the server.
User = LTzm#yA$U
For this my code looks like this:
String hadoop_user_key = "HADOOP_USER_NAME";
String user = "LTzm#yA$U";
System.setProperty(hadoop_user_key, user);
Now when I try to read a key from the table, I get the following error:
error.log:! Causing:
org.apache.hadoop.hbase.security.AccessDeniedException:
org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient
permissions for user 'LTzm' (table=table_name, action=READ)
The weird part is that writes are working fine. To validate whether the right user is getting passed for the write, I removed the user, reran the code, and the write failed with the error:
error.log:! org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:
org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient
permissions (user=LTzm#yA$U,
scope=table_name, family=d:visitId,
params=[table=table_name,family=d:visitId],action=WRITE)
Again, the read was also failing with:
error.log:! org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:
org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient
permissions for user 'LTzm'
(table=table_name, action=READ)
Somehow LTzm is getting passed for the read call and LTzm#yA$U is getting passed for the write.
Can anyone help me figure out what the issue is here? Is # or some other special symbol not allowed in the user name for HBase (and if so, how is it working for the write calls)?
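(One way to double-check which user name the client side actually resolves is to log the current UGI user just before the read/write calls; this is only a minimal sketch, assuming Hadoop's UserGroupInformation API is on the classpath:)
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

// Print the user the Hadoop/HBase client layer believes it is running as.
static void logCurrentUser() throws IOException {
    System.out.println("Current UGI user: " + UserGroupInformation.getCurrentUser().getUserName());
}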
Edit 1:
Here is the function that creates the connection:
public static Connection createConnection() throws IOException {
String hadoop_user_key = "HADOOP_USER_NAME";
String user = "LTzm#yA$U";
Map<String, String> configMap = new HashMap<>();
configMap.put("hbase.rootdir", "hdfs://session/apps/hbase/data");
configMap.put("hbase.zookeeper.quorum", "ip1, ip2");
configMap.put("zookeeper.znode.parent", "/hbase");
configMap.put("hbase.rpc.timeout", "400");
configMap.put("hbase.rpc.shortoperation.timeout", "400");
configMap.put("hbase.client.meta.operation.timeout", "5000");
configMap.put("hbase.rpc.engine", "org.apache.hadoop.hbase.ipc.SecureRpcEngine");
configMap.put("hbase.client.retries.number", "3");
configMap.put("hbase.client.operation.timeout", "3000");
configMap.put(HConstants.HBASE_CLIENT_IPC_POOL_SIZE, "30");
configMap.put("hbase.client.pause", "50");
configMap.put("hbase.client.pause.cqtbe", "1000");
configMap.put("hbase.client.max.total.tasks", "500");
configMap.put("hbase.client.max.perserver.tasks", "50");
configMap.put("hbase.client.max.perregion.tasks", "10");
configMap.put("hbase.client.ipc.pool.type", "RoundRobinPool");
configMap.put("hbase.rpc.read.timeout", "200");
configMap.put("hbase.rpc.write.timeout", "200");
configMap.put("hbase.client.write.buffer", "20971520");
System.setProperty(hadoop_user_key, user);
Configuration hConfig = HBaseConfiguration.create();
for (String key : configMap.keySet())
hConfig.set(key, configMap.get(key));
UserGroupInformation.setConfiguration(hConfig);
Connection hbaseConnection = ConnectionFactory.createConnection(hConfig);
return hbaseConnection;
}
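A minimal sketch of an alternative way to build the same connection, passing the user explicitly instead of relying on the HADOOP_USER_NAME system property (assuming simple authentication; the names ugi and explicitUserConnection are just for illustration):
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.security.UserGroupInformation;

// Build the connection with an explicit User object instead of HADOOP_USER_NAME.
UserGroupInformation ugi = UserGroupInformation.createRemoteUser("LTzm#yA$U");
Connection explicitUserConnection = ConnectionFactory.createConnection(hConfig, User.create(ugi));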
Here are the read and write calls:
protected Result read(String tableName, String rowKey) throws IOException {
Get get = new Get(Bytes.toBytes(rowKey));
get.addFamily(COLUMN_FAMILY_BYTES);
Result res;
Table hTable = null;
try {
hTable = getHbaseTable(tableName);
res = hTable.get(get);
} finally {
if (hTable != null) {
releaseHbaseTable(hTable);
}
}
return res;
}
protected void writeRow(String tableName, String rowKey, Map<String, byte[]> columnData) throws IOException {
Put cellPut = new Put(Bytes.toBytes(rowKey));
for (String qualifier : columnData.keySet()) {
cellPut.addColumn(COLUMN_FAMILY_BYTES, Bytes.toBytes(qualifier), columnData.get(qualifier));
}
Table hTable = null;
try {
hTable = getHbaseTable(tableName);
if (hTable != null) {
hTable.put(cellPut);
}
} finally {
if (hTable != null) {
releaseHbaseTable(hTable);
}
}
}
private Table getTable(String tableName) {
try {
Table table = hbaseConnection.getTable(TableName.valueOf(tableName));
return table;
} catch (IOException e) {
LOGGER.error("Exception while adding table in factory.", e);
return null;
}
}
I added a step in my application to persist files via GridFS and added a metadata field called "processed" to work as a flag for a scheduled task that retrieves the new file and sends it on for processing. Since the Java driver for GridFS doesn't have a method allowing metadata to be updated, I used MongoCollection for the "fs.files" collection to update "metadata.processed" to true.
I use GridFSBucket.find(eq("metadata.processed", false)) to get the new files for processing and then update metadata.processed to true once processing is completed. This works if I add a new file while the application is running. However, if I have an existing file with "metadata.processed" set to false and start the application, the above find call returns no results. Similarly, if I have a file that was already processed and I set the "metadata.processed" field back to false, the above find call also stops working.
private static final String FILTER_STR = "'{'\"filename\" : \"{0}\"'}'";
private static final String UPDATE_STR =
"'{'\"$set\": '{'\"metadata.processed\": \"{0}\"'}}'";
@Autowired
private GridFSBucketFactory gridFSBucketFactory;
@Autowired
private MongoCollectionFactory mongoCollectionFactory;
public void storeFile(String filename, DateTime publishTime,
InputStream inputStream) {
if (fileExists(filename)) {
LOGGER.info("File named {} already exists.", filename);
} else {
uploadToGridFS(filename, publishTime, inputStream);
LOGGER.info("Stored file named {}.", filename);
}
}
public GridFSDownloadStream getFile(BsonValue id) {
return gridFSBucketFactory.getGridFSBucket().openDownloadStream(id);
}
public GridFSDownloadStream getFile(String filename) {
final GridFSFile file = getGridFSFile(filename);
return file == null ? null : getFile(file.getId());
}
public GridFSFindIterable getUnprocessedFiles() {
return gridFSBucketFactory.getGridFSBucket()
.find(eq("metadata.processed", false));
}
public void setProcessed(String filename, boolean isProcessed) {
final BasicDBObject filter =
BasicDBObject.parse(format(FILTER_STR, filename));
final BasicDBObject update =
BasicDBObject.parse(format(UPDATE_STR, isProcessed));
if (updateOne(filter, update)) {
LOGGER.info("Set metadata for {} to {}", filename, isProcessed);
}
}
private void uploadToGridFS(String filename, DateTime publishTime,
InputStream inputStream) {
gridFSBucketFactory.getGridFSBucket().uploadFromStream(filename,
inputStream, createMetadata(publishTime));
}
private GridFSUploadOptions createMetadata(DateTime publishTime) {
final Document metadata = new Document();
metadata.put("processed", false);
// metadata.put("publishTime", publishTime.toString());
return new GridFSUploadOptions().metadata(metadata);
}
private boolean fileExists(String filename) {
return getGridFSFile(filename) != null;
}
private GridFSFile getGridFSFile(String filename) {
return gridFSBucketFactory.getGridFSBucket()
.find(eq("filename", filename)).first();
}
private boolean updateOne(BasicDBObject filter, BasicDBObject update) {
try {
mongoCollectionFactory.getFsFilesCollection().updateOne(filter,
update, new UpdateOptions().upsert(true));
} catch (final MongoException e) {
LOGGER.error(
"The following failed to update, filter:{0} update:{1}",
filter, update, e);
return false;
}
return true;
}
Any idea what I can do to ensure:
GridFSBucket.find(eq("metadata.processed", false)
returns the proper results for existing files and/or files that have had the metadata changed?
The issue was due to setting the metadata.processed value as a String vs a boolean.
When initially creating the metadata I set its value with a boolean:
private GridFSUploadOptions createMetadata(DateTime publishTime) {
final Document metadata = new Document();
metadata.put("processed", false);
// metadata.put("publishTime", publishTime.toString());
return new GridFSUploadOptions().metadata(metadata);
}
And later I check for a boolean:
public GridFSFindIterable getUnprocessedFiles() {
return gridFSBucketFactory.getGridFSBucket()
.find(eq("metadata.processed", false));
}
But when updating the metadata using the "fs.files" MongoCollection I incorrectly added quotes around the boolean value here:
private static final String UPDATE_STR =
"'{'\"$set\": '{'\"metadata.processed\": \"{0}\"'}}'";
Which caused the metadata value to be saved as a String vs a boolean.
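One way the fix could look (a sketch, not necessarily the exact code used): either drop the quotes around the {0} placeholder in UPDATE_STR so that BasicDBObject.parse sees an unquoted true/false, or skip the string templates entirely and build the update with the driver's Updates helper, which keeps the value as a BSON boolean:
import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.set;

// Update the flag with a real boolean instead of the string "true"/"false".
mongoCollectionFactory.getFsFilesCollection()
        .updateOne(eq("filename", filename), set("metadata.processed", isProcessed));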
In my system (a Spring Boot project) I need to send a request for every 350 people that my query returns, so I need to paginate the data and send it batch by batch. I looked for a lot of ways to do this and found plenty of examples with JPA, but I'm using jOOQ, so I asked the tool's community for help and they guided me to use the limit and offset options.
This is the method where I do the search; I build my DTO and at the end return the list of people.
public static ArrayList getAllPeople(Connection connection) {
ArrayList<peopleDto> peopleList = new ArrayList<>();
DSLContext ctx = null;
peopleDto peopleDto;
try {
ctx = DSL.using(connection, SQLDialect.MYSQL);
Result<Record> result = ctx.select()
.from(people)
.orderBy(people.GNUM)
.offset(0)
.limit(350)
.fetch();
for (Record r : result) {
peopleDto = new peopleDto();
peopleDto.setpeopleID(r.getValue(people.GNUM));
peopleDto.setName(r.get(people.SNAME));
peopleDto.setRM(r.get(people.SRM));
peopleDto.setRG(r.get(people.SRG));
peopleDto.setCertidaoLivro(r.get(people.SCERT));
peopleDto.setCertidaoDistrito(r.get(people.SCERTD));
peopleList.add(peopleDto);
}
} catch (Exception e) {
log.error(e.toString());
} finally {
if (ctx != null) {
ctx.close();
}
}
return peopleList;
}
Without the limit, this search returns 1,400 people.
The question is: how do I process the first batch up to the limit, then come back to this method and continue where I left off, until I have gone through the total number of records?
Feed your method with a Pageable parameter and return a Page from your method. Something along the lines of ...
public static Page<peopleDto> getAllPeople(Connection connection, Pageable pageable) {
ArrayList<peopleDto> peopleList = new ArrayList<>();
DSLContext ctx = null;
peopleDto peopleDto;
try {
ctx = DSL.using(connection, SQLDialect.MYSQL);
Result<Record> result = ctx.select()
.from(people)
.orderBy(people.GNUM)
.offset(pageable.getOffset())
.limit(pageable.getPageSize())
.fetch();
for (Record r : result) {
peopleDto = new peopleDto();
peopleDto.setpeopleID(r.getValue(people.GNUM));
peopleDto.setName(r.get(people.SNAME));
peopleDto.setRM(r.get(people.SRM));
peopleDto.setRG(r.get(people.SRG));
peopleDto.setCertidaoLivro(r.get(people.SCERT));
peopleDto.setCertidaoDistrito(r.get(people.SCERTD));
peopleList.add(peopleDto);
}
} catch (Exception e) {
log.error(e.toString());
} finally {
if (ctx != null) {
ctx.close();
}
}
return new PageImpl<>(peopleList, pageable, hereyoushouldQueryTheTotalItemCount());
}
Now you can do something with those 350 Users. With the help of the page you can now iterate over the remaining people:
if(page.hasNext())
getAllPeople(connection, page.nextPageable());
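As a rough sketch of how a caller could drive this until all ~1,400 records have been processed (assuming Spring Data's PageRequest is available; handleBatch() is just a placeholder for whatever is done with each batch of 350):
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;

Pageable pageable = PageRequest.of(0, 350);
Page<peopleDto> page;
do {
    page = getAllPeople(connection, pageable);
    handleBatch(page.getContent()); // placeholder for the actual per-batch work
    pageable = page.nextPageable();
} while (page.hasNext());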
Inspired by this article Sorting and Pagination with Spring and Jooq
First, I want to say thanks to everyone that took their time to help me figure this out because I was searching for more than a week for a solution to my problem. Here it is:
My goal is to start a custom workflow in Alfresco Community 5.2 and to set some custom properties in the first task through a web script using only the public Java API. My class extends AbstractWebScript. Currently I have success with starting the workflow and setting properties like bpm:workflowDescription, but I'm not able to set my custom properties in the tasks.
Here is the code:
public class StartWorkflow extends AbstractWebScript {
/**
* The Alfresco Service Registry that gives access to all public content services in Alfresco.
*/
private ServiceRegistry serviceRegistry;
public void setServiceRegistry(ServiceRegistry serviceRegistry) {
this.serviceRegistry = serviceRegistry;
}
@Override
public void execute(WebScriptRequest req, WebScriptResponse res) throws IOException {
// Create JSON object for the response
JSONObject obj = new JSONObject();
try {
// Check if parameter defName is present in the request
String wfDefFromReq = req.getParameter("defName");
if (wfDefFromReq == null) {
obj.put("resultCode", "1 (Error)");
obj.put("errorMessage", "Parameter defName not found.");
return;
}
// Get the WFL Service
WorkflowService workflowService = serviceRegistry.getWorkflowService();
// Build WFL Definition name
String wfDefName = "activiti$" + wfDefFromReq;
// Get WorkflowDefinition object
WorkflowDefinition wfDef = workflowService.getDefinitionByName(wfDefName);
// Check if such WorkflowDefinition exists
if (wfDef == null) {
obj.put("resultCode", "1 (Error)");
obj.put("errorMessage", "No workflow definition found for defName = " + wfDefName);
return;
}
// Get parameters from the request
Content reqContent = req.getContent();
if (reqContent == null) {
throw new WebScriptException(Status.STATUS_BAD_REQUEST, "Missing request body.");
}
String content;
content = reqContent.getContent();
if (content.isEmpty()) {
throw new WebScriptException(Status.STATUS_BAD_REQUEST, "Content is empty");
}
JSONTokener jsonTokener = new JSONTokener(content);
JSONObject json = new JSONObject(jsonTokener);
// Set the workflow description
Map<QName, Serializable> params = new HashMap();
params.put(WorkflowModel.PROP_WORKFLOW_DESCRIPTION, "Workflow started from JAVA API");
// Start the workflow
WorkflowPath wfPath = workflowService.startWorkflow(wfDef.getId(), params);
// Get params from the POST request
Map<QName, Serializable> reqParams = new HashMap();
Iterator<String> i = json.keys();
while (i.hasNext()) {
String paramName = i.next();
QName qName = QName.createQName(paramName);
String value = json.getString(qName.getLocalName());
reqParams.put(qName, value);
}
// Try to update the task properties
// Get the next active task which contains the properties to update
WorkflowTask wfTask = workflowService.getTasksForWorkflowPath(wfPath.getId()).get(0);
// Update properties
WorkflowTask updatedTask = workflowService.updateTask(wfTask.getId(), reqParams, null, null);
obj.put("resultCode", "0 (Success)");
obj.put("workflowId", wfPath.getId());
} catch (JSONException e) {
throw new WebScriptException(Status.STATUS_BAD_REQUEST,
e.getLocalizedMessage());
} catch (IOException ioe) {
throw new WebScriptException(Status.STATUS_BAD_REQUEST,
"Error when parsing the request.",
ioe);
} finally {
// build a JSON string and send it back
String jsonString = obj.toString();
res.getWriter().write(jsonString);
}
}
}
Here is how I call the webscript:
curl -v -uadmin:admin -X POST -d @postParams.json localhost:8080/alfresco/s/workflow/startJava?defName=nameOfTheWFLDefinition -H "Content-Type:application/json"
In postParams.json file I have the required pairs for property/value which I need to update:
{
"cmprop:propOne" : "Value 1",
"cmprop:propTwo" : "Value 2",
"cmprop:propThree" : "Value 3"
}
The workflow is started and bpm:workflowDescription is set correctly, but the custom properties do not appear to be set on the task.
I made a JS script which I call when the workflow is started:
execution.setVariable('bpm_workflowDescription', 'Some String ' + execution.getVariable('cmprop:propOne'));
And actually the value for cmprop:propOne is used and the description is properly updated - which means that those properties are updated somewhere (on execution level maybe?) but I cannot figure out why they are not visible when I open the task.
I had success with starting the workflow and updating the properties using the JavaScript API with:
if (wfdef) {
// Get the params
wfparams = {};
if (jsonRequest) {
for ( var prop in jsonRequest) {
wfparams[prop] = jsonRequest[prop];
}
}
wfpackage = workflow.createPackage();
wfpath = wfdef.startWorkflow(wfpackage, wfparams);
}
The problem is that I only want to use the public Java API, please help.
Thanks!
Do you set your variables locally in your tasks? From what I see, it seems that you define your variables at the execution level, but not at the task level. If you take a look at the OOTB adhoc.bpmn20.xml file (https://github.com/Activiti/Activiti-Designer/blob/master/org.activiti.designer.eclipse/src/main/resources/templates/adhoc.bpmn20.xml), you can notice a task listener that sets the variables locally:
<extensionElements>
<activiti:taskListener event="create" class="org.alfresco.repo.workflow.activiti.tasklistener.ScriptTaskListener">
<activiti:field name="script">
<activiti:string>
if (typeof bpm_workflowDueDate != 'undefined') task.setVariableLocal('bpm_dueDate', bpm_workflowDueDate);
if (typeof bpm_workflowPriority != 'undefined') task.priority = bpm_workflowPriority;
</activiti:string>
</activiti:field>
</activiti:taskListener>
</extensionElements>
Usually, I just try to import into the task all the variables with my custom model prefix. So for you, it should look like this:
import java.util.Set;
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.DelegateTask;
import org.apache.log4j.Logger;
public class ImportVariables extends AbstractTaskListener {
private Logger logger = Logger.getLogger(ImportVariables.class);
@Override
public void notify(DelegateTask task) {
logger.debug("Inside ImportVariables.notify()");
logger.debug("Task ID:" + task.getId());
logger.debug("Task name:" + task.getName());
logger.debug("Task proc ID:" + task.getProcessInstanceId());
logger.debug("Task def key:" + task.getTaskDefinitionKey());
DelegateExecution execution = task.getExecution();
Set<String> executionVariables = execution.getVariableNamesLocal();
for (String variableName : executionVariables) {
// If the variable starts by "cmprop_"
if (variableName.startsWith("cmprop_")) {
// Publish it at the task level
task.setVariableLocal(variableName, execution.getVariableLocal(variableName));
}
}
}
}
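(The listener would then be attached to the user task in the process definition with an activiti:taskListener element for the create event, in the same way the script listener is registered in the adhoc definition shown above.)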
Is it possible to create a Group and a User in AEM 6.2 using the Jackrabbit UserManager API, with permissions?
I have followed the URLs below, but the code is throwing an exception:
https://helpx.adobe.com/experience-manager/using/jackrabbit-users.html
https://stackoverflow.com/questions/38259047/how-to-give-permission-all-in-aem-through-programatically
ResourceResolverFactory getServiceResourceResolver throws Exception in AEM 6.1
Since the getAdministrativeResourceResolver(Map) method is deprecated, how can we use the getServiceResourceResolver(Map) method instead?
Sharing my solution, which may be helpful for others.
The following code uses the getServiceResourceResolver(Map) method to create a Group first, then a User, and then to add the user to the group with ACL privileges and permissions:
public void createGroupUser(SlingHttpServletRequest request) {
String userName = request.getParameter("userName");
String password = request.getParameter("password");
String groupName = request.getParameter("groupName");
Session session = null;
ResourceResolver resourceResolver = null;
try {
Map<String, Object> param = new HashMap<String, Object>();
param.put(ResourceResolverFactory.SUBSERVICE, "datawrite");
resourceResolver = resourceResolverFactory.getServiceResourceResolver(param);
session = resourceResolver.adaptTo(Session.class);
// Create UserManager Object
final UserManager userManager = AccessControlUtil.getUserManager(session);
// Create a Group
Group group = null;
if (userManager.getAuthorizable(groupName) == null) {
group = userManager.createGroup(groupName);
ValueFactory valueFactory = session.getValueFactory();
Value groupNameValue = valueFactory.createValue(groupName, PropertyType.STRING);
group.setProperty("./profile/givenName", groupNameValue);
session.save();
log.info("---> {} Group successfully created.", group.getID());
} else {
log.info("---> Group already exist..");
}
// Create a User
User user = null;
if (userManager.getAuthorizable(userName) == null) {
user = userManager.createUser(userName, password);
ValueFactory valueFactory = session.getValueFactory();
Value firstNameValue = valueFactory.createValue("Arpit", PropertyType.STRING);
user.setProperty("./profile/givenName", firstNameValue);
Value lastNameValue = valueFactory.createValue("Bora", PropertyType.STRING);
user.setProperty("./profile/familyName", lastNameValue);
Value emailValue = valueFactory.createValue("arpit.p.bora#gmail.com", PropertyType.STRING);
user.setProperty("./profile/email", emailValue);
session.save();
// Add User to Group
Group addUserToGroup = (Group) (userManager.getAuthorizable(groupName));
addUserToGroup.addMember(userManager.getAuthorizable(userName));
session.save();
// set Resource-based ACLs
String nodePath = user.getPath();
setAclPrivileges(nodePath, session);
log.info("---> {} User successfully created and added into group.", user.getID());
} else {
log.info("---> User already exist..");
}
} catch (Exception e) {
log.info("---> Not able to perform User Management..");
log.info("---> Exception.." + e.getMessage());
} finally {
if (session != null && session.isLive()) {
session.logout();
}
if (resourceResolver != null)
resourceResolver.close();
}
}
public static void setAclPrivileges(String path, Session session) {
try {
AccessControlManager aMgr = session.getAccessControlManager();
// create a privilege set
Privilege[] privileges = new Privilege[] {
aMgr.privilegeFromName(Privilege.JCR_VERSION_MANAGEMENT),
aMgr.privilegeFromName(Privilege.JCR_MODIFY_PROPERTIES),
aMgr.privilegeFromName(Privilege.JCR_ADD_CHILD_NODES),
aMgr.privilegeFromName(Privilege.JCR_LOCK_MANAGEMENT),
aMgr.privilegeFromName(Privilege.JCR_NODE_TYPE_MANAGEMENT),
aMgr.privilegeFromName(Replicator.REPLICATE_PRIVILEGE) };
AccessControlList acl;
try {
// get first applicable policy (for nodes w/o a policy)
acl = (AccessControlList) aMgr.getApplicablePolicies(path).nextAccessControlPolicy();
} catch (NoSuchElementException e) {
// else node already has a policy, get that one
acl = (AccessControlList) aMgr.getPolicies(path)[0];
}
// remove all existing entries
for (AccessControlEntry e : acl.getAccessControlEntries()) {
acl.removeAccessControlEntry(e);
}
// add a new one for the special "everyone" principal
acl.addAccessControlEntry(EveryonePrincipal.getInstance(), privileges);
// the policy must be re-set
aMgr.setPolicy(path, acl);
// and the session must be saved for the changes to be applied
session.save();
} catch (Exception e) {
log.info("---> Not able to perform ACL Privileges..");
log.info("---> Exception.." + e.getMessage());
}
}
In code "datawrite" is a service mapping which is mapped with system user in "Apache Sling Service User Mapper Service" which is configurable in the OSGI configuration admin interface.
For more detail about system user check link - How to Create System User in AEM?
I am providing this code directly from a training by an official Adobe channel, and it is based on AEM 6.1, so I assume this might be best practice.
private void modifyPermissions() {
Session adminSession = null;
try{
adminSession = repository.loginService(null, repository.getDefaultWorkspace());
UserManager userMgr= ((org.apache.jackrabbit.api.JackrabbitSession)adminSession).getUserManager();
AccessControlManager accessControlManager = adminSession.getAccessControlManager();
Authorizable denyAccess = userMgr.getAuthorizable("deny-access");
AccessControlPolicyIterator policyIterator =
accessControlManager.getApplicablePolicies(CONTENT_GEOMETRIXX_FR);
AccessControlList acl;
try{
acl=(JackrabbitAccessControlList) policyIterator.nextAccessControlPolicy();
}catch(NoSuchElementException nse){
acl=(JackrabbitAccessControlList) accessControlManager.getPolicies(CONTENT_GEOMETRIXX_FR)[0];
}
Privilege[] privileges = {accessControlManager.privilegeFromName(Privilege.JCR_READ)};
acl.addAccessControlEntry(denyAccess.getPrincipal(), privileges);
accessControlManager.setPolicy(CONTENT_GEOMETRIXX_FR, acl);
adminSession.save();
}catch (RepositoryException e){
LOGGER.error("**************************Repo Exception", e);
}finally{
if (adminSession != null)
adminSession.logout();
}
}
I am having trouble getting data from a database I know exists and I know the format of.
In the code snippet below, the "if conn != null" block is just a test to verify that the database name, table name, etc. are all correct, and they DO verify.
The last line below is what generates the exception.
public static HashMap<Integer, String> getNetworkMapFromRemote(DSLContext dslRemote, Connection conn, Logger logger) {
HashMap<Integer,String> remoteMap = new HashMap<Integer, String>();
// conn is only used for test purposes
if (conn != null) {
// test to be sure database is ok
try
{
ResultSet rs = conn.createStatement().executeQuery("SELECT networkid, name FROM network");
while (rs.next()) {
System.out.println("TEST: nwid " + rs.getString(1) + " name " + rs.getString(2));
}
rs.close();
}
catch ( SQLException se )
{
logger.trace("getNetworksForDevices SqlException: " + se.toString());
}
}
// ----------- JOOQ problem section ------------------------
Network nR = Network.NETWORK.as("network");
// THE FOLLOWING LINE GENERATES THE UNKNOWN TABLE
Result<Record2<Integer, String>> result = dslRemote.select( nR.NETWORKID, nR.NAME ).fetch();
This is the output
TEST: nwid 1 name Network 1
org.jooq.exception.DataAccessException: SQL [select `network`.`NetworkId`, `network`.`Name` from dual]; Unknown table 'network' in field list
at org.jooq.impl.Utils.translate(Utils.java:1288)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:495)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:327)
at org.jooq.impl.AbstractResultQuery.fetch(AbstractResultQuery.java:330)
at org.jooq.impl.SelectImpl.fetch(SelectImpl.java:2256)
at com.nvi.kpiserver.remote.KpiCollectorUtil.getNetworkMapFromRemote(KpiCollectorUtil.java:328)
at com.nvi.kpiserver.remote.KpiCollectorUtilTest.testUpdateKpiNetworksForRemoteIntravue(KpiCollectorUtilTest.java:61)
.................
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown table 'network' in field list
.................
For the sake of completeness, here is part of the jOOQ-generated class file for Network:
package com.wbcnvi.intravue.generated.tables;
@javax.annotation.Generated(value = { "http://www.jooq.org", "3.3.1" },
comments = "This class is generated by jOOQ")
@java.lang.SuppressWarnings({ "all", "unchecked", "rawtypes" })
public class Network extends org.jooq.impl.TableImpl<com.wbcnvi.intravue.generated.tables.records.NetworkRecord> {
private static final long serialVersionUID = 1729023198;
public static final com.wbcnvi.intravue.generated.tables.Network NETWORK = new com.wbcnvi.intravue.generated.tables.Network();
@Override
public java.lang.Class<com.wbcnvi.intravue.generated.tables.records.NetworkRecord> getRecordType() {
return com.wbcnvi.intravue.generated.tables.records.NetworkRecord.class;
}
public final org.jooq.TableField<com.wbcnvi.intravue.generated.tables.records.NetworkRecord, java.lang.Integer> NWID = createField("NwId", org.jooq.impl.SQLDataType.INTEGER.nullable(false), this, "");
public final org.jooq.TableField<com.wbcnvi.intravue.generated.tables.records.NetworkRecord, java.lang.Integer> NETWORKID = createField("NetworkId", org.jooq.impl.SQLDataType.INTEGER.nullable(false).defaulted(true), this, "");
public final org.jooq.TableField<com.wbcnvi.intravue.generated.tables.records.NetworkRecord, java.lang.String> NAME = createField("Name", org.jooq.impl.SQLDataType.CHAR.length(40).nullable(false).defaulted(true), this, "");
public final org.jooq.TableField<com.wbcnvi.intravue.generated.tables.records.NetworkRecord, java.lang.Integer> USECOUNT = createField("UseCount", org.jooq.impl.SQLDataType.INTEGER.nullable(false).defaulted(true), this, "");
public final org.jooq.TableField<com.wbcnvi.intravue.generated.tables.records.NetworkRecord, java.lang.Integer> NETGROUP = createField("NetGroup", org.jooq.impl.SQLDataType.INTEGER.nullable(false).defaulted(true), this, "");
public final org.jooq.TableField<com.wbcnvi.intravue.generated.tables.records.NetworkRecord, java.lang.String> AGENT = createField("Agent", org.jooq.impl.SQLDataType.CHAR.length(16), this, "");
public Network() {
this("network", null);
}
public Network(java.lang.String alias) {
this(alias, com.wbcnvi.intravue.generated.tables.Network.NETWORK);
}
..........
Based on the "unknown table" exception I thought there was a problem connected to the wrong database or wrong server, but the console output is correct for a JDBC query.
Any thoughts are appreciated, perhaps something else can be the root cause or the DSLContext is not valid (but I would think that would generate a different exception).
The answer ends up being simple: I did not include the .from() method.
Result<Record2<Integer, String>> result = dslRemote.select( nR.NETWORKID, nR.NAME )
.from(nR)
.fetch();
That is why the table was unknown; I never put the from() method in.