Get most recent ldap entry - java

I'm running into an issue that I can't seem to figure out. I'm attempting to write an LDAP query that retrieves the most recent entry in a directory. There doesn't seem to be native functionality for this, and all the information I'm finding requires other details about the sought-after entry to be known.
If I were using a database, I could just sort the entries by 'dateCreated' and limit the results to 1, however with an LDAP query to a Directory Server, I don't believe that's possible.
Any tips/advice would be greatly appreciated, thanks!

LDAP RFC 4512 defines a standard attribute named createTimestamp that is automatically set by the server on every object created in the directory. It also defines modifyTimestamp for update operations.
Since these are operational attributes, they are only returned by a search if you explicitly include them in the list of attributes to retrieve.
Some LDAP servers, such as Red Hat Directory Server, support server-side sorting, so you can also use these attributes as your sort criteria.
On a large directory deployment, you should make sure that server indexes are created for these attributes to achieve reasonable query performance.
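For reference, here is a rough sketch of how that could look with plain JNDI, assuming a hypothetical host, bind DN and search base, and a server that honors the server-side sort request control (RFC 2891):

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.naming.ldap.Control;
import javax.naming.ldap.InitialLdapContext;
import javax.naming.ldap.LdapContext;
import javax.naming.ldap.SortControl;
import javax.naming.ldap.SortKey;

public class NewestEntrySearch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");        // hypothetical host
        env.put(Context.SECURITY_PRINCIPAL, "cn=admin,dc=example,dc=com");   // hypothetical bind DN
        env.put(Context.SECURITY_CREDENTIALS, "secret");

        LdapContext ctx = new InitialLdapContext(env, null);
        try {
            // Ask the server to sort by createTimestamp in descending order (newest first).
            ctx.setRequestControls(new Control[] {
                new SortControl(new SortKey[] { new SortKey("createTimestamp", false, null) },
                                Control.CRITICAL)
            });

            SearchControls sc = new SearchControls();
            sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
            // Operational attributes are only returned when explicitly requested.
            sc.setReturningAttributes(new String[] { "cn", "createTimestamp" });

            NamingEnumeration<SearchResult> results =
                ctx.search("ou=people,dc=example,dc=com", "(objectClass=person)", sc); // hypothetical base/filter

            if (results.hasMore()) {
                SearchResult newest = results.next(); // first entry of the sorted result set
                System.out.println(newest.getNameInNamespace());
                System.out.println(newest.getAttributes().get("createTimestamp"));
            }
        } finally {
            ctx.close();
        }
    }
}

Depending on the server, you may also be able to combine this with a size limit or the VLV control so that only the newest entry is transferred instead of the whole sorted result set.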

Related

Grails - Spring Security not working with Mysql 8

I have an application where I use these technologies:
Grails 3.3.0
JDK 1.8
Spring 4+
Mysql 8
GORM 6.1.6.RELEASE
org.grails.plugins:spring-security-core:3.2.0
Hibernate 5+
The problem is that when I try to connect the application to MySQL 8 (with 5.6+ it works fine), I am not able to get the user-related information from the Grails Spring Security plugin.
The application runs and even connects to the DB, but it won't authenticate or fetch User information via finders like findByUsername, where username is a property of my user domain class.
I have the User domain class defined in the application.properties file:
grails.plugin.springsecurity.userLookup.userDomainClassName = 'com.aaa.User'
At some point I found this error, but I'm not sure whether it is related to this or not:
java.lang.IllegalStateException:
Either class [com.aaa.User] is not a domain class or GORM has not been initialized correctly or has already been shutdown.
Ensure GORM is loaded and configured correctly before calling any methods on a GORM entity.
I want to understand why it can't fetch the information from the DB. I have tried a lot of things, like changing the GORM version to 6.1.7 and the grails spring-security-core plugin version, but nothing has worked.
Any help would be appreciated.
Thanks,
Atul
We were able to solve the issue. It happens when the grails-hibernate plugin checks whether the property (in my case username, since I am calling the dynamic finder findByUsername) is present in the static mapping. If it is not there, it throws the exception.
What worked for me was adding the property to the static mapping of the User domain class:
static mapping = {
    table 'userTable'
    id column: 'id'
    password column: 'userpassword'
    username column: 'username'
}
I am not sure why this happens: when I run the application with MySQL 5.6+ (with the matching driver) it works, but when I go with MySQL 8 it looks for the property in the static mapping.
One more point I would like to mention to fix the issue: make sure your table names and column names are the same as defined in the DB. Case sensitivity is what matters here.
Regarding the case sensitivity I mentioned: if you are using Linux with MySQL 8 and lower_case_table_names=0 (the default on Linux), names are handled as follows, per the official MySQL documentation:
Table and database names are stored on disk using the lettercase specified in the CREATE TABLE or CREATE DATABASE statement. Name comparisons are case sensitive. You should not set this variable to 0 if you are running MySQL on a system that has case-insensitive file names (such as Windows or macOS). If you force this variable to 0 with --lower-case-table-names=0 on a case-insensitive file system and access MyISAM tablenames using different lettercases, index corruption may result.
So if the table name in the static mapping is in camel case or upper case and in the DB it is lower case, they won't match and the error occurs.
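If you want to verify what your server is actually configured with, one quick way is to read the variable over JDBC (just a sketch; the connection details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CheckTableNameCase {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "secret"); // placeholder connection
             Statement st = conn.createStatement();
             // 0 = names stored and compared as written (typical default on Linux),
             // 1 = names stored in lowercase and compared case-insensitively
             ResultSet rs = st.executeQuery("SHOW VARIABLES LIKE 'lower_case_table_names'")) {
            if (rs.next()) {
                System.out.println(rs.getString("Variable_name") + " = " + rs.getString("Value"));
            }
        }
    }
}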
Hope this helps someone.

How to migrate database on multi-node production server?

I have a multi-node production server (Tomcat 8.x on CentOS 7.x; each node is a separate CentOS instance), that uses a single MySQL database server (MySQL 5.7.x). Each node of the server will be updated manually: system administrator stops each node and deploys a new version of the application (.war file). It means that the service won't have downtime, because at every moment there is at least one working node.
Database migrations are implemented using Liquibase changesets, which are packaged in the .war file, so each node validates and updates (if required) the database schema. In practice, only the first node executes the changesets; the other nodes just validate them.
The problem is that there is a time gap between updates of each node: when the first node is already updated with the new application version, the last node still works on the previous application version (that might use old columns in database for example). It might lead to inconsistency of the database.
Example
Let's say the server has 3 nodes. At this moment they work on an application of version N.
The next releases need to change the database schema: rename the column title to title_new.
To make it possible to update the database schema without downtime, we need to use a "two-step change":
version N+1:
adds a new column title_new;
doesn't use the column title anymore (it's marked as deprecated);
copies all data from the column title to title_new;
uses the column title_new;
version N+2 drops the column title.
Now the administrator is going to deploy version N+1. He stops the first node to update it, but the other two nodes are still running version N. While the first node is being updated, some users might change their data via node 2 or 3 (there is a load balancer that routes requests to different nodes). So we need a way to forbid users from making any changes via nodes 2 and 3 while they have not yet been updated to version N+1.
I see two different ways to solve this problem:
Use some read_only mode at the application level, so the application logic forbids users from making any changes. But then we need to implement a way to enable this mode at any time via a console or admin panel (the administrator must be able to enable this mode).
Use a read_only mode at the database level. But I couldn't find any ready-to-use methods for MySQL to do this.
The question: what's the best way to solve the described issue?
P.S. Application is based on the Spring 4.x framework + Hibernate 4.x.
An alternative way of solving this may be using a database trigger:
version N+1:
for every renamed column, create a trigger that copies data inserted/updated in title into title_new (see here)
version N+2:
drop the trigger and drop the old column
The advantages of this approach are:
it can be done completely with Liquibase (no additional steps needed from the administrator)
all your nodes remain fully functional (no read-only mode)
The drawbacks:
you must write/use triggers
it can be tricky if your db updates are more complex (like a column rename plus new db constraints)
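For illustration, here is a minimal sketch of the trigger idea as plain JDBC against MySQL (table, column and trigger names are made up, and in practice these statements would live in Liquibase changesets rather than application code):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TitleColumnMigration {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/appdb", "app", "secret"); // placeholder connection
             Statement st = conn.createStatement()) {

            // Version N+1: add the new column next to the old one and backfill it.
            st.executeUpdate("ALTER TABLE article ADD COLUMN title_new VARCHAR(255)");
            st.executeUpdate("UPDATE article SET title_new = title");

            // Mirror writes from old-version nodes (which still only set 'title')
            // into 'title_new', without clobbering values written by N+1 nodes.
            st.executeUpdate(
                "CREATE TRIGGER article_title_sync_ins BEFORE INSERT ON article "
                + "FOR EACH ROW SET NEW.title_new = COALESCE(NEW.title_new, NEW.title)");
            st.executeUpdate(
                "CREATE TRIGGER article_title_sync_upd BEFORE UPDATE ON article "
                + "FOR EACH ROW SET NEW.title_new = "
                + "IF(NEW.title <=> OLD.title, NEW.title_new, NEW.title)");

            // Version N+2 (a later release) drops the triggers and the old column:
            //   DROP TRIGGER article_title_sync_ins;
            //   DROP TRIGGER article_title_sync_upd;
            //   ALTER TABLE article DROP COLUMN title;
        }
    }
}

The update trigger only copies title when it actually changed, so writes from N+1 nodes that already set title_new directly are not overwritten.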
Zero Downtime Deployment with a Database
I found the above article to be very insightful as to the various options for database migrations without downtime.

How to determine what SQL query a java web application is using to return the data?

I have been given a Java web application for which I have the source code. The application queries an Oracle database and returns data to the user in a web page. I need to update a value in the returned data in the database, without knowing the table or column names. Is there a general way to determine what query the application is submitting to return the data for a particular page, so I can find the right tables and columns?
I am new to Java and web development, so I'm not sure where to start looking.
Thanks!
Well, there's always the old-fashioned way of finding out: locate the source code for the specific page you're looking at and identify the query that's being executed to retrieve the data. I'm assuming that's not what you're looking for, though.
Some other options include using the JDBC logging feature (Enabling and Using JDBC Logging) or JProfiler (the JDBC probe shows you all SQL statements in the events view). Once you find the SQL statement, you can use standard text search features within your IDE to locate the specific code and make alterations.
Hope that helps!
If you can run a controlled test (e.g., you are the only person on that web application), you could turn on SQL tracing on the DB connection and then run your transaction several times. To do this:
look at all the connections from that application using v$session -- you can control this by tweaking your connection pool settings (e.g., set min and max connections to 1), assuming this is your test environment;
turn on 10046 trace (see https://oracle-base.com/articles/misc/sql-trace-10046-trcsess-and-tkprof -- there are many other examples).
The 10046 trace will show you what the application is doing -- SQL by SQL. You can even set the level to 12 to get the bind variable values (assuming you are using prepared statements).
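As a rough sketch of those two steps from Java (not something the answer spells out): the session lookup can be done against v$session, and DBMS_MONITOR.SESSION_TRACE_ENABLE turns on the extended (binds + waits) trace for a given session. The connection details and the WEBAPP_USER filter below are assumptions:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class TraceAppSessions {
    public static void main(String[] args) throws Exception {
        // Connect as a suitably privileged user; URL and credentials are placeholders.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//db.example.com:1521/ORCL", "system", "secret")) {

            // 1. Find the sessions opened by the web application's connection pool.
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT sid, serial# FROM v$session WHERE username = ?")) {
                ps.setString(1, "WEBAPP_USER"); // assumed schema/user of the application
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        int sid = rs.getInt(1);
                        int serial = rs.getInt(2);

                        // 2. Enable extended SQL trace (waits + binds, i.e. level 12)
                        //    for that session.
                        try (CallableStatement cs = conn.prepareCall(
                                "begin dbms_monitor.session_trace_enable("
                                + "session_id => ?, serial_num => ?, "
                                + "waits => TRUE, binds => TRUE); end;")) {
                            cs.setInt(1, sid);
                            cs.setInt(2, serial);
                            cs.execute();
                        }
                    }
                }
            }
            // Run the transaction in the web app, then look at the trace files in the
            // database's diagnostic trace directory (e.g. with tkprof).
        }
    }
}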

Different environments configuration on Google AppEngine

I have a web-application running on Google AppEngine.
I have a single PRODUCTION environment, a STAGING env, and multiple development & QA envs. There are many configuration parameters that should differ between PRODUCTION and the other environments, such as API keys for services we integrate with (Google Analytics, for example). Some of those parameters are defined in code, others are defined in web.xml (inside init-param tags for Filters, for example), and there are other cases as well.
I know that there are a couple of approaches to do so:
Saving all parameters in the datastore (and possibly caching them in each running instance / Memcached)
Deploying the applications with different system-properties / environment-variables in the web.xml
Other options...?
Anyway, I'm interested to hear your best practices for resolving this issue.
My favorite approach is to store them all in the datastore, with only one master record holding all the different properties, and to make good use of the memcache. That way you don't need different configuration files or configuration settings polluting your code. Instead you can deploy once and then change these values from an administrative form that you will have to create in order to update this master record.
Also, if you are storing tokens and secret keys, you are aware that it is definitely not a good idea to have them in web.xml or anywhere else in the code; rather, keep them per application in something more secure, like the datastore.
Once you have that, you can have one global function that retrieves properties by name, and if you want to get the Google Analytics ID from anywhere in your app you would use something like this:
getProperty('googleAnalyticsID')
where this global getProperty() function will try to find the value with these steps:
Check if it exists in memcache and return it
If it's not in memcache, update the memcache from the master entity in the datastore and return it
If it's not in the datastore, create an entity with default values, update the memcache and return
Of course there are different approaches to retrieving data from that model, but the idea is the same: store everything in one record and use the memcache.
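A minimal sketch of that getProperty() idea with the App Engine Java APIs could look like this (the "Config" kind, the "master" key name and the placeholder default are assumptions, not part of the answer above):

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class AppConfig {

    private static final Key MASTER_KEY = KeyFactory.createKey("Config", "master"); // assumed kind/name
    private static final String CACHE_KEY = "config-master";

    public static String getProperty(String name) {
        MemcacheService cache = MemcacheServiceFactory.getMemcacheService();

        // 1. Check if the master record is in memcache.
        Entity config = (Entity) cache.get(CACHE_KEY);
        if (config == null) {
            // 2. Not cached: load the master entity from the datastore and refresh memcache.
            DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
            try {
                config = datastore.get(MASTER_KEY);
            } catch (EntityNotFoundException e) {
                // 3. Not in the datastore either: create it with a default value.
                config = new Entity(MASTER_KEY);
                config.setProperty(name, "CHANGE_ME"); // placeholder default
                datastore.put(config);
            }
            cache.put(CACHE_KEY, config);
        }
        Object value = config.getProperty(name);
        return value == null ? null : value.toString();
    }
}

The administrative form mentioned above would then update that single "Config" entity and evict the memcache key so the new values take effect.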
You must have separate app ids for your production/staging/QA envs. These must be hardcoded into your web.xml (or you have a script of some sort that updates your web.xml).
After that you can code your settings based on the app id. I assume there's a Java equivalent to this:
https://developers.google.com/appengine/docs/python/appidentity/functions#get_application_id
You could put it in the datastore if they're settings that change dynamically, but if they are static to the environment, it doesn't make sense to keep fetching from datastore.
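For completeness, one Java counterpart of that call is SystemProperty.applicationId; a hedged sketch of switching settings by app id (the app ids and Analytics IDs below are placeholders) could look like this:

import com.google.appengine.api.utils.SystemProperty;

public class EnvironmentSettings {

    // Returns the Google Analytics ID for the environment this app id belongs to.
    public static String googleAnalyticsId() {
        String appId = SystemProperty.applicationId.get(); // e.g. "myapp-prod" (placeholder ids)
        if ("myapp-prod".equals(appId)) {
            return "UA-00000000-1"; // production property (placeholder)
        } else if ("myapp-staging".equals(appId)) {
            return "UA-00000000-2"; // staging property (placeholder)
        }
        return "UA-00000000-3";     // dev/QA fallback (placeholder)
    }

    // True when running on App Engine rather than the local dev server.
    public static boolean isProductionRuntime() {
        return SystemProperty.environment.value() == SystemProperty.Environment.Value.Production;
    }
}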

Two Applications using the same index file with Hibernate Search

I want to know if it is possible to use the same index file for an entity in two applications. Let me be more specific:
We have an online application with a frontend for the users and an application for the backend tasks (= administrator interface). Both are running on the same JBoss AS. Both applications use the same database, so they use the same entities. Of course the package names of the entities are not the same in both applications.
So this is our use case: a user should be able to search via the frontend. The user is only allowed to see results that are tagged as "visible". This tagging happens in our admin interface, so the index for the frontend should be updated every time an entity is tagged as "visible" in the backend.
Of course both applications have the same index root folder. In my index folder there are 2 indexes:
de.x.x.admin.model.Product
de.x.x.frondend.model.Product
How to "merge" this via Hibernate Search Configuration? I just did not get it via the documentation...
Thanks for any help!
OK, it seems that this is not possible...
