In my RapidClipse 4.0 project, I have to read data out of a database view while saving manually entered data.
The value read from the view should then be included in the data to be saved.
My problem is that this works fine only once, i.e. on the first save.
If I save a second time, the value is not updated.
In the save button's click event I placed the following code:
private void cmdSave_buttonClick(final Button.ClickEvent event) {
    try {
        this.txtDmvTable.setValue("T_supplier");
        final int i = 1;
        VSuppliersNewId vsni = new VSuppliersNewId();
        vsni = new VSuppliersNewIdDAO().find(i);
        this.txtDmvCol00.setValue(vsni.getNewSupId().toString());
        this.fieldGroup.save();
    }
    catch(final Exception e) {
        e.printStackTrace();
        Notification.show("Do isch was falsch",
                e.getMessage(),
                Notification.Type.ERROR_MESSAGE);
    }
}
The following lines did exactly what I expected, but only once:
final int i = 1;
VSuppliersNewId vsni = new VSuppliersNewId();
vsni = new VSuppliersNewIdDAO().find(i);
this.txtDmvCol00.setValue(vsni.getNewSupId().toString());
The view VSuppliersNewId always returns exactly one up-to-date value.
For example:
My view returns the highest value of a table field.
Let's assume that in the first round it returns the number 237.
After saving my data, the view will then return, say, 238.
If I read this directly via SQL in the database, I get back 238,
but with the above code it still persists 237.
I assume that the whole chain from the database has to be refreshed/reloaded, but it isn't.
How do I change/enhance my code to get the expected result? What did I do wrong?
I found the solution myself.
These are the steps I took:
1) Opened my entity and set cacheable = false
2) Opened persistence.xml and set the following parameters:
<property name="hibernate.cache.use_query_cache" value="false" />
<property name="xdev.queryCache.mode" value="ENABLE_SELECTIVE" />
<property name="hibernate.cache.use_second_level_cache" value="false" />
3) Used the following code to get the table refresh done:
private void cmdSave_buttonClick(final Button.ClickEvent event) {
    try
    {
        this.txtDmvTable.setValue("T_supplier");
        final int i = 1;
        PersistenceUtils.getEntityManager(manOKMContacts.class).unwrap(SessionFactory.class);
        VSuppliersNewId vsni = new VSuppliersNewId();
        vsni = new VSuppliersNewIdDAO().find(i);
        this.txtDmvCol00.clear();
        this.txtDmvCol00.setValue(vsni.getNewSupId().toString());
        this.fieldGroup.save();
        this.table.getBeanContainerDataSource().removeAll();
        this.table.getBeanContainerDataSource().addAll(new OkmDbMetadataValueDAO().findAllContacts());
        this.table.getBeanContainerDataSource().refresh();
        this.table.setSortContainerPropertyId("DmvCol00");
        this.table.sort();
    }
    catch(final Exception e)
    {
        Notification.show("Do isch was falsch", e.getMessage(), Notification.Type.ERROR_MESSAGE);
    }
}
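As an alternative to disabling the caches globally, the second-level cache can also be evicted programmatically right before re-reading the view. This is only a sketch of the idea, not part of the steps above, and reuses the PersistenceUtils helper from the code:

// Sketch (untested): evict the cached view entity so the next find() hits the database.
final EntityManager em = PersistenceUtils.getEntityManager(manOKMContacts.class);
em.getEntityManagerFactory().getCache().evict(VSuppliersNewId.class); // standard JPA 2.0 Cache API
em.clear(); // also drop this EntityManager's first-level cache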
Hope this will help others
I am creating a custom RenameParticipant to rename an Eclipse project's launch configuration files and to change the APPNAME variable in the Makefile. The Makefile side works 100% of the time, but attempting to rename the launch configs causes the following error:
<FATALERROR
FATALERROR: No input element provided
Context: <Unspecified context>
code: none
Data: null
>
This error occurs when the changes are being validated at the following line in org.eclipse.ltk.core.refactoring.PerformChangeOperation [line: 248].
fValidationStatus= fChange.isValid(new SubProgressMonitor(monitor, 1));
Below is a screenshot of the variable view. I suspect that my compositeChange is not in the correct format or is missing some information; however, the error dialog and logs don't give any helpful information.
[Screenshot: debugger variable view of fChanges]
The following are the relevant code snippets:
// This one sparks joy (it works great, 100% success)
final HashMap<IFile, TextFileChange> textChanges = new HashMap<IFile, TextFileChange>();
// Stuff gets put inside
textChanges.put(makefile, changeAppname);
// This one does not spark joy (it runs, but results in an invalid Change)
final HashMap<IFile, RenameResourceChange> renameChanges = new HashMap<IFile, RenameResourceChange>();
// Stuff gets put inside
RenameResourceChange renameChange = new RenameResourceChange(
launch.getFile().getProjectRelativePath(), newLaunchName);
renameChanges.put(launch.getFile(), renameChange);
// This is where they get added to the CompositeChange.
CompositeChange result;
if (textChanges.isEmpty() && renameChanges.isEmpty()) {
result = null;
} else {
result = new CompositeChange(
String.format("Rename project references and dependencies for %1$s", proj.getName()));
for (Iterator<TextFileChange> iter = textChanges.values().iterator(); iter.hasNext();) {
result.add(iter.next());
}
for (Iterator<RenameResourceChange> iter = renameChanges.values().iterator(); iter.hasNext();) {
result.add(iter.next());
}
}
return result;
I looked into adding or generating a ChangeDescriptor, but that seems like the wrong approach.
We have to implement logic for unique code generation in Java. The idea is that when we generate a code, the system checks whether that code has already been generated. If it has, the system creates a new code and checks again. But this logic fails in some cases and we cannot identify what the issue is.
Here is the code that creates the unique code:
// Method signature inferred from the recursive call and the answer below.
Integer createCode() {
    Integer code = null;
    try {
        int max = 999999;
        int min = 100000;
        code = (int) Math.round(Math.random() * (max - min + 1) + min);
        PreOrders preObj = null;
        preObj = WebServiceDao.getInstance().preOrderObj(code.toString());
        if (preObj != null) {
            createCode();
        }
    } catch (Exception e) {
        exceptionCaught();
        e.printStackTrace();
        log.error("Exception in method createCode() - " + e.toString());
    }
    return code;
}
The function preOrderObj checks whether the code exists in the database and, if it does, returns the object. We are using Hibernate to map the database and MySQL on the backend.
Here is the function preOrderObj:
// Method signature inferred from the call site above.
PreOrders preOrderObj(String code) {
    PreOrders preOrderObj = null;
    List<PreOrders> preOrderList = null;
    SessionFactory sessionFactory =
            (SessionFactory) ServletActionContext.getServletContext().getAttribute(HibernateListener.KEY_NAME);
    Session Hibernatesession = sessionFactory.openSession();
    try {
        Hibernatesession.beginTransaction();
        preOrderList = Hibernatesession.createCriteria(PreOrders.class).add(Restrictions.eq("code", code)).list(); // removed .add(Restrictions.eq("status", true))
        if (!preOrderList.isEmpty()) {
            preOrderObj = (PreOrders) preOrderList.iterator().next();
        }
        Hibernatesession.getTransaction().commit();
        Hibernatesession.flush();
    } catch (Exception e) {
        Hibernatesession.getTransaction().rollback();
        log.debug("This is my debug message.");
        log.info("This is my info message.");
        log.warn("This is my warn message.");
        log.error("This is my error message.");
        log.fatal("Fatal error " + e.getStackTrace().toString());
    } finally {
        Hibernatesession.close();
    }
    return preOrderObj;
}
Please guide us to identify the issue.
In the createCode method, when the randomly generated code already exists in the database, you call createCode again. However, the return value of the recursive call is never assigned to the code variable, so the colliding code is still returned and causes the error.
To fix the problem, update the method as follows:
...
if (preObj != null) {
//createCode();
code = createCode();
}
...
This way the code variable is updated with the new, non-colliding value.
By the way, using a random number to generate a unique value and testing its uniqueness with a query is a bit unusual. You may want to try an auto-increment column if you just need a unique value.
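If you want to keep the random six-digit codes, you can also replace the recursion with a simple retry loop. This is just a sketch of the same idea, reusing the preOrderObj lookup and helpers from the question:

// Sketch: keep generating candidates until one is not found in the database.
Integer createCode() {
    final int max = 999999;
    final int min = 100000;
    Integer code = null;
    try {
        do {
            code = (int) Math.round(Math.random() * (max - min + 1) + min);
        } while (WebServiceDao.getInstance().preOrderObj(code.toString()) != null);
    } catch (Exception e) {
        exceptionCaught();
        log.error("Exception in method createCode() - " + e.toString());
    }
    return code;
}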
What Happened
All the data from last month was corrupted due to a bug in the system, so we have to delete these records and re-enter them manually. Basically, I want to delete all the rows inserted during a certain period of time. However, I found it difficult to scan and delete millions of rows in HBase.
Possible Solutions
I found two ways to bulk delete:
The first one is to set a TTL, so that all the outdated records would be deleted automatically by the system. But I want to keep the records inserted before last month, so this solution does not work for me.
The second option is to write a client using the Java API:
public static void deleteTimeRange(String tableName, Long minTime, Long maxTime) {
Table table = null;
Connection connection = null;
try {
Scan scan = new Scan();
scan.setTimeRange(minTime, maxTime);
connection = HBaseOperator.getHbaseConnection();
table = connection.getTable(TableName.valueOf(tableName));
ResultScanner rs = table.getScanner(scan);
List<Delete> list = getDeleteList(rs);
if (list.size() > 0) {
table.delete(list);
}
} catch (Exception e) {
e.printStackTrace();
} finally {
if (null != table) {
try {
table.close();
} catch (IOException e) {
e.printStackTrace();
}
}
if (connection != null) {
try {
connection.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
private static List<Delete> getDeleteList(ResultScanner rs) {
List<Delete> list = new ArrayList<>();
try {
for (Result r : rs) {
Delete d = new Delete(r.getRow());
list.add(d);
}
} finally {
rs.close();
}
return list;
}
But in this approach, all the records are collected in the ResultScanner rs, so the heap usage would be huge. And if the program crashes, it has to start over from the beginning.
So, is there a better way to achieve the goal?
I don't know how many 'millions' you are dealing with in your table, but the simplest thing is to not try to put them all into a List at once, and instead process them in more manageable batches by using the .next(n) function. Something like this:
for (Result row : rs.next(numRows))
{
Delete del = new Delete(row.getRow());
...
}
This way, you can control how many rows get returned from the server via a single RPC through the numRows parameter. Make sure it's large enough so as not to make too many round-trips to the server, but at the same time not too large to kill your heap. You can also use the BufferedMutator to operate on multiple Deletes at once.
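For example, a sketch of the whole loop could look like this (untested, reusing the HBaseOperator helper from the question):

// Sketch: fetch rows in fixed-size chunks and delete each chunk before
// fetching the next one, so only one chunk is held in memory at a time.
public static void deleteTimeRangeInChunks(String tableName, long minTime, long maxTime, int chunkSize) {
    try (Connection connection = HBaseOperator.getHbaseConnection();
         Table table = connection.getTable(TableName.valueOf(tableName))) {
        Scan scan = new Scan();
        scan.setTimeRange(minTime, maxTime);
        try (ResultScanner rs = table.getScanner(scan)) {
            Result[] chunk;
            while ((chunk = rs.next(chunkSize)).length > 0) {
                List<Delete> deletes = new ArrayList<>(chunk.length);
                for (Result row : chunk) {
                    deletes.add(new Delete(row.getRow()));
                }
                table.delete(deletes); // one batched RPC per chunk
            }
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}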
Hope this helps.
I would suggest two improvements:
Use BufferedMutator to batch your deletes, it does exactly what you need – keeps internal buffer of mutations and flushes it to HBase when buffer fills up, so you do not have to worry about keeping your own list, sizing and flushing it.
Improve your scan:
Use KeyOnlyFilter – since you do not need the values, there is no need to retrieve them
use scan.setCacheBlocks(false) – since you do a full-table scan, caching all blocks on the region server does not make much sense
tune scan.setCaching(N) and scan.setBatch(N) – N will depend on the size of your keys; keep a balance between caching more and the memory it will require, but since you only transfer keys, N could be quite large, I suppose.
Here's an updated version of your code:
public static void deleteTimeRange(String tableName, Long minTime, Long maxTime) {
try (Connection connection = HBaseOperator.getHbaseConnection();
final Table table = connection.getTable(TableName.valueOf(tableName));
final BufferedMutator mutator = connection.getBufferedMutator(TableName.valueOf(tableName))) {
Scan scan = new Scan();
scan.setTimeRange(minTime, maxTime);
scan.setFilter(new KeyOnlyFilter());
scan.setCaching(1000);
scan.setBatch(1000);
scan.setCacheBlocks(false);
try (ResultScanner rs = table.getScanner(scan)) {
for (Result result : rs) {
mutator.mutate(new Delete(result.getRow()));
}
}
} catch (IOException e) {
e.printStackTrace();
}
}
Note the use of "try with resource" – if you omit that, make sure to .close() mutator, rs, table, and connection.
I have a OneToOne relationship between two tables, as shown below:
PreRecordLoad.java:
@OneToOne(mappedBy="preRecordLoadId", cascade = CascadeType.ALL)
private PreRecordLoadAux preRecordLoadAux;
PreRecordLoadAux.java:
@JoinColumn(name = "PRE_RECORD_LOAD_ID", referencedColumnName = "PRE_RECORD_LOAD_ID")
@OneToOne
private PreRecordLoad preRecordLoadId;
I'm using this method to pull back the PreRecordLoad object:
public PreRecordLoad FindPreRecordLoad(Long ID)
{
    print("Finding " + ID + "f");
    Query query;
    PreRecordLoad result = null;
    try
    {
        query = em.createNamedQuery("PreRecordLoad.findByPreRecordLoadId");
        query.setParameter("preRecordLoadId", ID);
        result = (PreRecordLoad)query.getSingleResult();
        //result = em.find(PreRecordLoad.class, ID);
    }
    catch(Exception e)
    {
        print(e.getLocalizedMessage());
    }
    return result;
}
The '+ "f"' is to see if the passed value somehow had something at the end. It didn't.
I originally used em.find, but the same issue occurred no matter which method I used.
I previously used a BigDecimal for the ID because it was the default, and noticed a precision difference between the cases where it worked and where it didn't. Specifically, the precision was 4 when it didn't work and 0 when it did. I couldn't work out why, so I changed the BigDecimal to a Long, as I never really needed it to be a BigDecimal anyway.
When I save the new PreRecordLoad and PreRecordLoadAux objects to the database (inserting them for the first time) and then try to run this method to recall the objects, it retrieves the PreRecordLoad, but the PreRecordLoadAux is null. This is despite the entry being in the database and apparently fully committed, as I can access it from SQL Developer, which is a separate session.
However, if I stop and re-run the application, then it successfully pulls back both objects. The ID being passed is the same both times, or at least appears to be.
Anyway, suggestions would be greatly appreciated, thank you.
Edit:
Here is the code for when I am persisting the objects into the DB:
if(existingPreAux==null) {
    try {
        preLoad.setAuditSubLoadId(auditLoad);
        em.persist(preLoad);
        print("Pre Record Load entry Created");
        preAux.setPreRecordLoadId(preLoad);
        em.persist(preAux);
        print("Pre Record Load Aux entry Created");
    }
    catch(ConstraintViolationException e) {
        for(ConstraintViolation c : e.getConstraintViolations()) {
            System.out.println(c.getPropertyPath() + " " + c.getMessage());
        }
    }
}
else {
    try {
        preLoad.setPreRecordLoadId(existingPreLoad.getPreRecordLoadId());
        preLoad.setAuditSubLoadId(auditLoad);
        em.merge(preLoad);
        print("Pre Record Load entry found and updated");
        preAux.setPreRecordLoadAuxId(existingPreAux.getPreRecordLoadAuxId());
        preAux.setPreRecordLoadId(preLoad);
        em.merge(preAux);
        print("Pre Record Load Aux entry found and updated");
    }
    catch(ConstraintViolationException e) {
        for(ConstraintViolation c : e.getConstraintViolations()) {
            System.out.println(c.getPropertyPath() + " " + c.getMessage());
        }
    }
}
That's in a method, and after that code, the method ends.
It's your responsibility to maintain the coherence of the object graph. So, when you do preAux.setPreRecordLoadId(preLoad);, you must also do preLoad.setPreRecordLoadAux(preAux);.
If you don't, then every time you load the preAux in the same session, it will be retrieved from the first-level cache and will thus return your incorrectly initialized instance of the entity.
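Applied to the persist branch in the question, that would look roughly like this (a sketch; the setter name setPreRecordLoadAux is an assumption inferred from the preRecordLoadAux field):

// Keep both sides of the bidirectional @OneToOne in sync before persisting.
preAux.setPreRecordLoadId(preLoad);   // owning side (has the @JoinColumn)
preLoad.setPreRecordLoadAux(preAux);  // inverse side (mappedBy) – hypothetical setter name
em.persist(preLoad);
em.persist(preAux);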
I am busy trying to call an RPG function from Java and got this example from JamesA. But now I am having trouble; here is my code:
AS400 system = new AS400("MachineName");
ProgramCall program = new ProgramCall(system);
try
{
// Initialise the name of the program to run.
String programName = "/QSYS.LIB/LIBNAME.LIB/FUNNAME.PGM";
// Set up the 3 parameters.
ProgramParameter[] parameterList = new ProgramParameter[2];
// First parameter is to input a name.
AS400Text OperationsItemId = new AS400Text(20);
parameterList[0] = new ProgramParameter(OperationsItemId.toBytes("TestID"));
AS400Text CaseMarkingValue = new AS400Text(20);
parameterList[1] = new ProgramParameter(CaseMarkingValue.toBytes("TestData"));
// Set the program name and parameter list.
program.setProgram(programName, parameterList);
// Run the program.
if (program.run() != true)
{
// Report failure.
System.out.println("Program failed!");
// Show the messages.
AS400Message[] messagelist = program.getMessageList();
for (int i = 0; i < messagelist.length; ++i)
{
// Show each message.
System.out.println(messagelist[i]);
}
}
// Else no error, get output data.
else
{
AS400Text text = new AS400Text(50);
System.out.println(text.toObject(parameterList[1].getOutputData()));
System.out.println(text.toObject(parameterList[2].getOutputData()));
}
}
catch (Exception e)
{
//System.out.println("Program " + program.getProgram() + " issued an exception!");
e.printStackTrace();
}
// Done with the system.
system.disconnectAllServices();
The application hangs at this line: if (program.run() != true). I wait for about 10 minutes and then terminate the application.
Any idea what I am doing wrong?
Edit
Here is the message on the job log:
Client request - run program QSYS/QWCRTVCA.
Client request - run program LIBNAME/FUNNAME.
File P6CASEL2 in library *LIBL not found or inline data file missing.
Error message CPF4101 appeared during OPEN.
Cannot resolve to object YOBPSSR. Type and Subtype X'0201' Authority
FUNNAME inserts a row into table P6CASEPF through a view called P6CASEL2. P6CASEL2 is in a different library, let's say LIBNAME2. Is there a way to maybe set the job description?
Are you sure FUNNAME.PGM is terminating and not hung with a MSGW? Check QSYSOPR for any messages.
Class ProgramCall:
NOTE: When the program runs within the host server job, the library list will be the initial library list specified in the job description in the user profile.
So I saw that my problem is that my library list is not set up, and for some reason the user we are using does not have a job description. To overcome this I added the following code before calling program.run():
CommandCall command = new CommandCall(system);
command.run("ADDLIBLE LIB(LIBNAME)");
command.run("ADDLIBLE LIB(LIBNAME2)");
This simply adds LIBNAME and LIBNAME2 to the job's library list.
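A slightly more defensive variant (just a sketch using the same CommandCall API, placed inside the same try block as program.run()) checks the return value and prints the messages if adding a library fails:

// Sketch: report why ADDLIBLE failed instead of silently continuing.
CommandCall command = new CommandCall(system);
for (String lib : new String[] { "LIBNAME", "LIBNAME2" }) {
    if (!command.run("ADDLIBLE LIB(" + lib + ")")) {
        for (AS400Message msg : command.getMessageList()) {
            System.out.println(msg.getID() + ": " + msg.getText());
        }
    }
}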
Oh yes, the problem is the library list not being set ... take a look at this discussion on Midrange.com; there are different workarounds ...
http://archive.midrange.com/java400-l/200909/msg00032.html