Table doesn't exist in Liferay 6 - Java

I have defined a data table and associated objects in Liferay 6, but when I run the code it says the table doesn't exist, and indeed it doesn't. The code runs fine if I create the table by hand, simply copy-pasting the CREATE TABLE statement from the model implementation, but I expected the table to be created on deployment.
The user has all the privileges needed to create it.
What am I missing?

I faced the same problem, and #urvish is correct: you have to change the build number in the service properties file.
Problem:
When multiple developers work on a portlet that uses Service Builder, you will see the exception "Build namespace has build number which is
newer than ". When one developer commits the service.properties file and it is then deployed on another developer's machine, this exception is thrown.
Best practice: to avoid this kind of error, follow these steps:
Create a service-ext.properties file in the same location as service.properties (see the sketch below)
Add build.number={a value higher than, or the same as, the value in the exception}
Deploy the portlet again
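For example, a minimal service-ext.properties might contain nothing more than the overridden build number (9999 here is just an illustrative value that is safely higher than whatever the exception reports):

build.number=9999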

Check the value of build.namespace in the service.properties file and the value returned by
select buildNumber from servicecomponent where buildNamespace = <<build.namespace from service.properties>>
The buildNumber returned by the query must be less than the value of the build.number property in service.properties. If it is not, just set build.number to 9999.
Sometimes, due to this mismatch, the changes are not applied to the database.
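As a concrete sketch, assuming the default servicecomponent table and a hypothetical namespace of MyPortlet:

SELECT buildNumber FROM servicecomponent WHERE buildNamespace = 'MyPortlet';
-- compare the result against the build.number property in service.properties / service-ext.properties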


How to query Custom Object in Salesforce?

OK, I am reposting this question because it is really driving me crazy.
I downloaded enterprise.wsdl from Salesforce and generated some jars from it.
I added those jars to the build path of my Android project in Eclipse.
Here is my code:
ConnectorConfig config = new ConnectorConfig();
config.setAuthEndpoint(authEndPoint);
config.setUsername(userID);
config.setPassword(password + securityToken);
config.setCompression(true);
con = new EnterpriseConnection(config);
con.setSessionHeader(UserPreference.getSessionID(mContext));
String sql = "SELECT something FROM myNameSpace__myCustomObject__c";
con.query(sql);
but it returns this error:
[InvalidSObjectFault [ApiQueryFault [ApiFault
exceptionCode='INVALID_TYPE' exceptionMessage='sObject type 'abc__c'
is not supported.'] row='-1' column='-1' ]]
I am pretty sure that my userID has been assigned a profile with read and edit access to that custom object.
My code can also query standard objects.
Can anyone advise me on what could be wrong?
From what I know, there are three reasons it may give this error:
1. User permissions, which you said are set up correctly.
2. Is the custom object actually deployed to the org where you are trying to establish the connection?
3. Check whether the enterprise WSDL contains the custom object name you are trying to query (a quick check is sketched below).
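As a quick way to tell reasons 2 and 3 apart (a sketch only, reusing the connection from the question; DescribeGlobalResult and DescribeGlobalSObjectResult come from the generated enterprise WSDL jar), you can list the sObject types the org exposes to your user. If the custom object shows up here but the query still fails with INVALID_TYPE, the enterprise WSDL you generated the jars from is probably stale and should be regenerated:

DescribeGlobalResult dgr = con.describeGlobal();
for (DescribeGlobalSObjectResult sObj : dgr.getSobjects()) {
    System.out.println(sObj.getName()); // the list should include myNameSpace__myCustomObject__c
}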
Hope it helps.

Genexus Ev2 Stored procedure

I'm trying to call a stored procedure on my iSeries system (an RPG program), but I'm not able to activate the corresponding menu under "Tools" -> "Java Generator".
The RPG program that I want to call (we'll name it RPG00) has 5 input parameters and 1 output value.
I performed the following operations:
Create an "external object" (type: stored procedure) whose name is "RPG00"
Create a method named "RPG00" as well in the external object above and set the "external name" property to "RPG00"
Create a Web Panel with a variable &test (type: external object RPG00) and call it with the right parameters
Change the following settings in iSeries datastore configuration:
"access technology to set" --> JDBC
"list of external stored procedure" --> RPG00
At this point, if I try to build the KB, it always ends in error. In the project folder I can't find the "crtjdccalls.java" file or the corresponding ".class" file that stores the instructions for the stored procedure.
What's going wrong? Any idea? Any suggestion?
The appropriate element in the "Java Generator" menu never appears!
My Configuration:
Gx Ev2 U5
Environment: Web\Java
DB: iSeries 6.1
I think you forgot to set the data store property (JDBC) 'Library list' with the name of the library in which the RPG program RPG00 is found.
Check this and do a Rebuild All.
Regards, Luis.
Thanks to the Genexus development team I found a solution!
The problem is related to the way parameters are passed to the stored procedure.
REMEMBER:
You can't use SDT elements as input parameters
You can't use literal values as input parameters
YOU CAN ONLY USE VARIABLES!
E.g., given SDT.value1 and SDT.value2:
&variable1 = SDT.value1
&variable2 = SDT.value2
&RPG00.RPG00(SDT.value1, SDT.value2, etc.) --> ERROR
&RPG00.RPG00(&variable1, XXX, etc.) --> ERROR (where XXX is, for example, a literal integer value)
&RPG00.RPG00(&variable1, &variable2, etc.) --> only variables work fine
Hope this helps someone else.

neo4j java rest binding api not returning from getNodeAutoIndexer

I have an application which uses the neo4j embedded database. Now I want to migrate to neo4j server, as I need to integrate this application with a web app (using servlets and Tomcat).
I want to change the code minimally, so I thought of using the java-rest-binding API of neo4j. But I am stuck at getting the auto node index: the call to getNodeAutoIndexer never returns. In messages.log of the database, it shows
[o.n.k.EmbeddedGraphDatabase]: GC Monitor: Application threads blocked for an additional 254ms [total block time: 2.678s]
I have no idea how to solve this.
I have set the appropriate properties in the neo4j.properties, which are
node_auto_indexing=true
node_keys_indexable=primaryKey
relationship_auto_indexing=true
relationship_keys_indexable=X-->Y
And this is what my code looks like:
graphDb = new RestGraphDatabase("http://localhost:7474/db/data/");
ReadableIndex<Node> autoNodeIndex = graphDb.index().getNodeAutoIndexer().getAutoIndex();
ReadableRelationshipIndex autoRelIndex = graphDb.index().getRelationshipAutoIndexer().getAutoIndex();
It seems that there's a lot of garbage collection going on. Run your app with a bigger heap (e.g. -Xmx1g) and see what happens.
EDIT:
Also, relationship_keys_indexable=X-->Y seems strange. I'd expect a property name there. What happens if you remove this property or enter a valid value?
To quote the official documentation:
The node_keys_indexable key allows you to specify a comma-separated list of node property keys to be indexed. The relationship_keys_indexable key does the same for relationship property keys.
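For example, assuming X and Y are the names of relationship properties you actually want auto-indexed (they are placeholders here), neo4j.properties would look something like this:

node_auto_indexing=true
node_keys_indexable=primaryKey
relationship_auto_indexing=true
relationship_keys_indexable=X,Y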

Can't use MySql for Spring Greenhouse project?

I just checked out the Spring Greenhouse project as a first step to learning Spring Security.
The project works fine, but I was wondering about the following scenarios:
There are two configurations: standard and embedded. The javadoc says embedded is the default. I am not sure how to make it run in standard mode. Has anybody tried this before?
Secondly, in embedded mode I modified the code slightly, as shown below, to run it with MySQL, but to my surprise the application does not start up at all. It throws the following error:
throw new RuntimeException("Unable to determine database version", e);
@Bean(destroyMethod="shutdown")
public DataSource dataSource() {
    // EmbeddedDatabaseFactory factory = new EmbeddedDatabaseFactory();
    // factory.setDatabaseName("greenhouse");
    // factory.setDatabaseType(EmbeddedDatabaseType.);
    DriverManagerDataSource mysqldataSource = new DriverManagerDataSource();
    mysqldataSource.setDriverClassName("com.mysql.jdbc.Driver");
    mysqldataSource.setUrl("jdbc:mysql://localhost/greenhouse?useConfigs=maxPerformance&characterEncoding=utf8");
    mysqldataSource.setUsername("root");
    mysqldataSource.setPassword("mysql");
    return populateDatabase(mysqldataSource);
}
Can anybody please help me with this?
I figured it out by myself.
There was a tag in web.xml, and I changed it there from embedded to standard.
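For reference, a sketch of what such a switch can look like in web.xml; the parameter name used here (spring.profiles.active) is the standard Spring mechanism for activating a bean profile and may differ from the exact tag Greenhouse uses:

<context-param>
    <param-name>spring.profiles.active</param-name>
    <param-value>standard</param-value>
</context-param>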
The problem was that the SQL query creating the database version table was not MySQL compliant. There were other queries that were not MySQL compliant either. I changed the SQL and everything worked as expected. The next thing I am going to do is use some kind of generic database query generator, so that I don't have to change the queries if I later decide to use PostgreSQL in place of MySQL.
Thank you for all your help and support.
Judging by the exception you're getting, the file you're interested in is GenericDatabaseUpgrader.java.
More than half the code there refers to DatabaseVersion, which is a table that is created if absent.
Since I doubt you already have it, and judging by the exception type (SQLException), I'm inclined to say it's failing to create/insert/retrieve the referenced table.
You can check whether you have the table and go from there.
Also, judging by the code you're trying to inject, I'd look at connection = dataSource.getConnection() as a possible culprit.
P.S. Regarding standard/embedded, it appears you can use VM parameters or Maven settings to switch between the two. Please check the official forums and specifically this topic for more info.

Migration of blobs from database to the file system in jackrabbit

As proposed in the previous discussion Using file system instead of database to store pdf files in jackrabbit,
we can use FileDataStore to store blob files in the file system instead of the database (in my case I have stored pdfs of ~100 KB each).
The problem I have faced is with files that were previously stored in the blob store: I want them to remain accessible after switching to FileDataStore.
After adding FileDataStore support to repository.xml,
when using the JcrUtils method getOrAddNode I get an ItemExistsException:
public static Node getOrAddNode(Node parent, String name)
        throws RepositoryException {
    if (parent.hasNode(name)) {
        return parent.getNode(name);
    } else {
        return parent.addNode(name);
    }
}
i.e. parent.hasNode(name) returns false (it seems the item doesn't exist),
but then we fall into parent.addNode(name), which consequently throws ItemExistsException.
Any help?
Is it necessary to migrate the blobs to the FileDataStore, or is there some kind of configuration so that Jackrabbit can search for blobs in different locations at the same time, in my case the MySQL database and the file system?
Some comments:
I have found at least several ways that could help with the migration job:
the wiki page http://wiki.apache.org/jackrabbit/BackupAndMigration describes using the JCR API (Session.exportSystemView(..) and then Session.importXML(..)), using the RepositoryCopier API, etc.
the jackrabbit-jcr-import-export-tool (see http://svn.apache.org/repos/asf/jackrabbit/sandbox/jackrabbit-jcr-import-export-tool/README.txt)
using the Jackrabbit standalone server (http://jackrabbit.apache.org/standalone-server.html)
It might be possible that there is a repository corruption. That is, the node contains a child node entry for the given name (the node you want to add), but the child node itself doesn't exist. Especially in older versions of Jackrabbit you could get into this situation if multiple sessions concurrently tried to change the same nodes.
To fix such corruption problems, the bundle db persistence managers support a consistency check & fix feature. You would need to set those options in the repository.xml and workspace.xml files, and restart Jackrabbit. Once fixed, you can disable those options again.
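As a sketch of what those options look like on a bundle DB persistence manager in workspace.xml (consistencyCheck and consistencyFix are the standard parameter names for the bundle persistence managers; verify against your Jackrabbit version):

<param name="consistencyCheck" value="true"/>
<param name="consistencyFix" value="true"/>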
There is also a way to fix such problems at runtime, by setting the system property org.apache.jackrabbit.autoFixCorruptions to true, and then traverse over all nodes in the repository.
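A minimal sketch of that runtime approach (the property name comes from the answer above; the traversal is just a plain recursive walk so every node in the workspace gets visited):

// must be set before the repository is started
System.setProperty("org.apache.jackrabbit.autoFixCorruptions", "true");

// later, with an open javax.jcr.Session, visit every node so corrupt
// child node entries are detected (and fixed) as they are traversed
void traverse(javax.jcr.Node node) throws javax.jcr.RepositoryException {
    for (javax.jcr.NodeIterator it = node.getNodes(); it.hasNext(); ) {
        traverse(it.nextNode());
    }
}
// usage: traverse(session.getRootNode());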
