I have a Grails project with a domain class that I can delete without any problem when doing it "manually" from the controller. I use the following code:
def delete = {
    def projectInstance = Project.get( params.id )
    def employee = projectInstance.employee
    def projectarray = new ArrayList<Project>();
    projectarray += employee.getProjects()
    println("Size of projectarray is " + projectarray.size())
    if(projectInstance) {
        def rolearray = []
        projectarray.remove(projectInstance)
        def temp = new TreeSet<Project>();
        temp += employee.getProjects()
        temp.clear()
        temp.addAll(projectarray)
        employee.projects = temp
        projectInstance.employer = null
        projectInstance.delete(flush:true)
        flash.message = "Project ${params.id} deleted"
        redirect(action:"edit", controller: "employee", id: employee.id)
    }
    else {
        flash.message = "Project not found with id ${params.id}"
        redirect(action:list)
    }
}
So that deletes a single instance fine.
Now I want to, from a different controller, remove ALL projects from an employee.
The projects are stored on the employee like so:
class Employee implements Comparable
{
    static hasMany = [projects:Project]

    static constraints =
    {
    }

    static mapping = {
        projects cascade:"all-delete-orphan", lazy:false
    }

    @XmlElementWrapper(name="projectslist")
    SortedSet<Project> projects = new TreeSet<Project>(); // make a sortedSet?
}
So how would I now delete all projects from a particular employee instance?
I might be misunderstanding your question because I can't make sense of some of your code; it seems unnecessary. If your relationships are set up correctly (i.e. Project belongsTo Employee), this should be sufficient to delete a single project:
def delete = {
    def projectInstance = Project.get( params.id )
    def employee = projectInstance.employee   // needed for the redirect below
    projectInstance.delete(flush:true)
    flash.message = "Project ${params.id} deleted"
    redirect(action:"edit", controller: "employee", id: employee.id)
}
If this is a one-to-many, the next time you retrieve the employee the project will be gone. And this should work to delete all projects of an employee:
def delete = {
    def employee = Employee.get( params.id )
    employee.getProjects().clear()
    employee.save(flush:true)
    flash.message = "All projects of employee deleted."
    redirect(action:"edit", controller: "employee", id: employee.id)
}
That assumes cascade:"all-delete-orphan". If that's not the case, then you might also need to delete the instances yourself, which might look something like this:
def delete = {
    def employee = Employee.get( params.id )
    // Make a copy to avoid concurrent modification issues later
    def copy = new TreeSet<Project>(employee.getProjects());
    employee.getProjects().clear();
    employee.save(flush:true)
    copy.each {
        it.delete();
    }
    flash.message = "All projects of employee deleted."
    redirect(action:"edit", controller: "employee", id: employee.id)
}
I'm not a Groovy expert, so I'm not sure whether the copy is needed or whether you can just iterate over the collection directly; it seems like there is always a groovier way to do things. You might also want to check out a bulk delete via the executeUpdate domain class method (an HQL "delete from" statement); that can be a more efficient Grails approach depending on the number of relationships to be deleted.
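For reference, here is a minimal sketch of that bulk-delete approach (the action name and the employee back-reference property on Project are assumptions based on the code in the question):
// Sketch only: delete every Project pointing at one Employee with a single HQL statement.
// HQL bulk operations bypass GORM cascading/events and do not update the in-memory
// projects collection, so reload the employee afterwards if you still need it.
def deleteAllProjects = {
    def employee = Employee.get(params.id)
    Project.executeUpdate("delete Project p where p.employee = :emp", [emp: employee])
    flash.message = "All projects of employee ${employee.id} deleted."
    redirect(action: "edit", controller: "employee", id: employee.id)
}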
You could use the removeFrom* method that is generated by Grails when you declare the hasMany relationship - it's the equivalent of the addTo* methods:
def employee = Employee.get(params.id)
employee.projects.toList().each { employee.removeFromProjects(it) } // toList() prevents a ConcurrentModificationException
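Wrapped in a controller action, that might look like the following sketch (the action name is made up; it relies on the all-delete-orphan cascade from the mapping above so that the removed projects are actually deleted on flush):
def deleteAllProjects = {
    def employee = Employee.get(params.id)
    if (employee) {
        // toList() copies the set so we don't modify it while iterating over it
        employee.projects.toList().each { employee.removeFromProjects(it) }
        employee.save(flush: true)
        flash.message = "All projects of employee ${employee.id} deleted."
    } else {
        flash.message = "Employee not found with id ${params.id}"
    }
    redirect(action: "edit", controller: "employee", id: params.id)
}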
Related
I am trying to set up a distributed cache using Apache Ignite with Scala.
After setting up the cache, I am able to put and get items when I know the key, but SQL queries of any kind always return a cursor with a null iterator.
Here is how I set up my cache (please note that this is done before Ignition.start):
def setupTelemetryCache(): CacheConfiguration[TelemetryKey, TelemetryValue] = {
    val dataRegionName = "persistent-region"
    val cacheName = "telemetry-cache"

    // This object is required to perform SQL queries over custom key object
    val queryEntity = new QueryEntity("TelemetryKey", "TelemetryValue")
    val fields: util.LinkedHashMap[String, String] = new util.LinkedHashMap[String, String]
    fields.put("deviceId", classOf[String].getName)
    fields.put("metricName", classOf[String].getName)
    fields.put("timestamp", classOf[String].getName)
    queryEntity.setFields(fields)

    val keyFields: util.HashSet[String] = new util.HashSet[String]()
    keyFields.add("deviceId")
    keyFields.add("metricName")
    keyFields.add("timestamp")
    queryEntity.setKeyFields(keyFields)

    queryEntity.setIndexes(Collections.emptyList[QueryIndex]())

    new CacheConfiguration()
      .setName(cacheName)
      .setDataRegionName(dataRegionName)
      .setCacheMode(CacheMode.PARTITIONED) // Data is split among nodes
      .setBackups(1) // each partition has 1 backup
      .setIndexedTypes(classOf[String], classOf[TelemetryKey]) // Index by ID
      .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_ASYNC) // Faster, clients do not wait for cache
                                                                             // synchronization. Consistency issues?
      .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL) // Allows transactional query
      .setQueryEntities(Collections.singletonList(queryEntity))
}
And this is the code of my TelemetryKey:
case class TelemetryKey private (
    @(AffinityKeyMapped @field)
    @(QuerySqlField @field)(index = true)
    deviceId: String,

    @(QuerySqlField @field)(index = false)
    metricName: String,

    @(QuerySqlField @field)(index = true)
    timestamp: String) extends Serializable
And TelemetryValue:
class TelemetryValue private(valueType: ValueTypes.Value, doubleValue: Option[Double],
stringValue: Option[String],
longValue: Option[Long]) extends Serializable
A sample SQL query I have to run could be "Select * from CACHE where deviceId = 'dev1234'", and I expect to receive all the Cache.Entry[TelemetryKey, TelemetryValue] entries with that deviceId.
Here is how I perform the query:
private def sqlQuery(query: SqlQuery[TelemetryKey, TelemetryValue]):
    QueryCursor[Cache.Entry[TelemetryKey, TelemetryValue]] = {
  cache.query(query)
}

def getEntries(ofDeviceId: String):
    QueryCursor[Cache.Entry[TelemetryKey, TelemetryValue]] = {
  val q = new SqlQuery[TelemetryKey, TelemetryValue](classOf[TelemetryKey], "deviceId = ?")
  sqlQuery(q.setArgs(ofDeviceId))
}
Even when changing the body of the query, I receive a cursor object that is empty. I cannot even perform a "Select *" query.
Thanks for the help.
There are two ways to configure indexes and queryable fields.
Annotation-based configuration
Your key and value classes need to be annotated with @QuerySqlField, as follows:
case class TelemetryKey private (
    @(AffinityKeyMapped @field)
    @(QuerySqlField @field)(index = true)
    deviceId: String,

    @(QuerySqlField @field)(index = false)
    metricName: String,

    @(QuerySqlField @field)(index = true)
    timestamp: String) extends Serializable
After indexed and queryable fields are defined, they have to be registered in the SQL engine along with the object types they belong to.
new CacheConfiguration()
  .setName(cacheName)
  .setDataRegionName(dataRegionName)
  .setCacheMode(CacheMode.PARTITIONED)
  .setBackups(1)
  .setIndexedTypes(classOf[TelemetryKey], classOf[TelemetryValue])
  .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_ASYNC)
  .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
UPD:
One more thing that should be fixed is your SqlQuery; the query type has to be the value class:
def getEntries(ofDeviceId: String):
    QueryCursor[Cache.Entry[TelemetryKey, TelemetryValue]] = {
  val q = new SqlQuery[TelemetryKey, TelemetryValue](classOf[TelemetryValue], "deviceId = ?")
  sqlQuery(q.setArgs(ofDeviceId))
}
QueryEntity-based approach
val queryEntity = new QueryEntity(classOf[TelemetryKey], classOf[TelemetryValue]);

new CacheConfiguration()
  .setName(cacheName)
  .setDataRegionName(dataRegionName)
  .setCacheMode(CacheMode.PARTITIONED)
  .setBackups(1)
  .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_ASYNC)
  .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
  .setQueryEntities(Collections.singletonList(queryEntity))
Long story short, you should supply full JVM class names to QueryEntity.
As in:
val queryEntity = new QueryEntity("com.pany.telemetry.TelemetryKey",
    "com.pany.telemetry.TelemetryValue") // or e.g. classOf[TelemetryKey].getName
Ignite needs these to distinguish the various types that can be stored in one cache; it's not decorative, there has to be an exact match.
Better yet? Use setIndexedTypes() instead of setQueryEntities(). It allows you to pass classes instead of Strings and it will scan annotations, which you already have.
(Screenshot: additional fields in useradmin)
How can I add some new user properties to the CQ users?
I found a solution, but it doesn't work: http://experience-aem.blogspot.ch/2014/01/aem-cq-56-extend-useradmin-add-new-user.html
I tried manipulating UserProperties.js in CRX with new properties. I can see them in useradmin, but when I try to set the new property from Java code (not via useradmin), it saves without error and yet the value stays empty in useradmin. And if I try to set a value for the new property via useradmin, all users get the same value.
How can I add new user properties whose values I can set via Java code, just like the standard properties?
user = userManager.createUser(username, password);
ValueFactory valueFactory = session.getValueFactory();
emailValue = valueFactory.createValue(email);
givennameValue = valueFactory.createValue(givenname);
nameValue = valueFactory.createValue(name);
//User class just accepts Value Object
user.setProperty("profile/" + UserProperties.EMAIL, emailValue);
user.setProperty("profile/" + UserProperties.FAMILY_NAME, nameValue);
user.setProperty("profile/" + UserProperties.GIVEN_NAME, givennameValue);
I found a solution. Go in CRX to /libs/cq/security/widgets/source/widgets/security/UserProperties.js and add the fields you need to the user's items array (caution: there are items for users and items for groups in the same place). Then, in the loadRecord method of that JS, add each new field to the "record" object.
"items":[{
"xtype":"textfield",
"fieldLabel":CQ.I18n.getMessage("Mail"),
"anchor":"100%",
"vtype":"email",
"msgTarget":"under",
"name":"email"
},{
"xtype":"textfield",
"fieldLabel":CQ.I18n.getMessage("My Field"),
"anchor":"100%",
"msgTarget":"under",
"name":"myfield"
},{
"xtype":"textarea",
"fieldLabel":CQ.I18n.getMessage("About"),
"anchor":"100% -155",
"name":"aboutMe"
}],
loadRecord: function(rec) {
    this.enableUserSaveButton(false);
    this.enableGroupSaveButton(false);
    var type = rec.get("type");
    if (type=="user") {
        this.activeForm = this.userForm;
        this.hiddenForm = this.groupForm;
        if (rec.id==CQ.security.UserProperties.ADMIN_ID) {
            this.pwdButtons.each(function(bt) {bt.hide(); return true;} )
        } else {
            this.pwdButtons.each(function(bt) {bt.show(); return true;} )
        }
    } else {
        this.activeForm = this.groupForm;
        this.hiddenForm = this.userForm;
    }
    // load the additional property from the JSON and show it in the form
    rec.data["myfield"] = rec.json["myfield"];
    this.activeForm.getForm().loadRecord(rec);
In the Java code you can then set the new properties via the "user" object. Note that the properties are put into the "profile" subfolder:
user.setProperty("profile/" + "myfield", myFieldValue);
Did you try the second approach, posted by "pedro" in the link you shared? It probably has to do with pushing the new field to the record:
http://experience-aem.blogspot.com/2014/01/aem-cq-56-extend-useradmin-add-new-user.html?showComment=1390804750445#c2823498719990547675
I hope this helps you. The file exists at http://[host name]:[port]/crx/de/index.jsp#/libs/cq/security/widgets/source/widgets/security/UserProperties.js, and it contains two major forms: this.userForm for users and this.groupForm for groups.
I'm using Groovy's StreamingMarkupBuilder to generate XML dynamically based on the results of a few SQL queries. I'd like to call a method from inside of the closure but the markup builder tries to create an XML node using the method name.
Here's an example of what I'm trying to do:
Map generateMapFromRow(GroovyRowResult row) {
    def map = [:]
    def meta = row.getMetaData()
    // Dynamically generate the keys and values
    (1..meta.getColumnCount()).each { column -> map[meta.getColumnName(column)] = row[column-1] }
    return map
}

def sql = Sql.newInstance(db.url, db.user, db.password, db.driver)
def builder = new StreamingMarkupBuilder()

def studentsImport = {
    students {
        sql.eachRow('select first_name, middle_name, last_name from students') { row ->
            def map = generateMapFromRow(row) // Here is the problem line
            student(map)
        }
    }
}

println builder.bind(studentsImport).toString()
This will generate XML similar to the following:
<students>
<generateMapFromRow>
[first_name:Ima, middle_name:Good, last_name:Student]
</generateMapFromRow>
<student/>
<generateMapFromRow>
[first_name:Ima, middle_name:Bad, last_name:Student]
</generateMapFromRow>
<student/>
</students>
I've also tried moving the method out to a class and calling it statically on the class, which doesn't work either.
Due to the nature of how StreamingMarkupBuilder works, I'm afraid that it isn't actually possible to do this, but I'm hoping that it is.
I may have lost something while simplifying the example, but code like this will work. In your example, students is a closure call, so it may interfere with resolution inside.
def builder = new groovy.xml.StreamingMarkupBuilder()

def generateMapFromRow = { ["$it": it] }

builder.bind {
    10.times {
        def map = generateMapFromRow(it) // The call now resolves to the local closure variable, so it is not captured as a node.
        student(map)
    }
}
As said here: http://groovy.codehaus.org/Using+MarkupBuilder+for+Agile+XML+creation
Something to be careful about when using markup builders is not to overlap variables you currently have in scope. The following is a good example:
import groovy.xml.MarkupBuilder

def book = "MyBook"
def writer = new StringWriter()
def xml = new MarkupBuilder(writer)

xml.shelf() {
    book(name:"Fight Club") { // Will produce an error: 'book' resolves to the local String variable above, not to a node.
    }
}

println writer.toString()
Builders work similarly to methodMissing captors, and if there is a local variable with that name in scope, no node will be produced.
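Applied to the original example, that means holding generateMapFromRow in a local variable as a closure rather than declaring it as a method, so the builder never intercepts the call as a node. Here is a sketch reusing the names from the question (db is assumed to be the same connection-settings object as in the original code):
import groovy.sql.Sql
import groovy.xml.StreamingMarkupBuilder

// Closure in a local variable instead of a method: the call below resolves to this
// variable, so StreamingMarkupBuilder does not turn it into a <generateMapFromRow> node.
def generateMapFromRow = { row ->
    def map = [:]
    def meta = row.getMetaData()
    (1..meta.getColumnCount()).each { column ->
        map[meta.getColumnName(column)] = row[column - 1]
    }
    map
}

def sql = Sql.newInstance(db.url, db.user, db.password, db.driver)   // db: connection settings as in the question
def builder = new StreamingMarkupBuilder()

def studentsImport = {
    students {
        sql.eachRow('select first_name, middle_name, last_name from students') { row ->
            student(generateMapFromRow(row))
        }
    }
}
println builder.bind(studentsImport).toString()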
Merged with Grails addTo in for loop.
I am facing a problem because I'm a newbie to Grails.
I'm building a website for reading stories, and my goal right now is to save the content of a story across several pages so I can get a list and then paginate it easily. So I did the following.
In the domain layer I created two domain classes. One is called Story and looks like this:
class Story {
    String title
    List pages

    static hasMany = [users:User, pages:Page]
    static belongsTo = [User]

    static mapping = {
        users lazy:false
        pages lazy:false
    }
}
and of course a domain class called Page that looks like this:
class Page {
    String content
    Story story

    static belongsTo = Story

    static constraints = {
        content(blank:false, size:3..300000)
    }
}
and the controller's save method goes like this:
def save = {
    def storyInstance = new Story(params)
    def pages = new Page(params)
    String content = pages.content
    String[] contentArr = content.split("\r\n")
    int i = 0
    StringBuilder page = new StringBuilder()
    for (StringBuilder line : contentArr) {
        i++
        page.append(line + "\r\n")
        if (i % 10 == 0) {
            pages.content = page
            storyInstance.addToPages(pages)
            page = new StringBuilder()
        }
    }
    if (storyInstance.save(flush:true)) {
        flash.message = "${message(code: 'default.created.message', args: [message(code: 'story.label', default: 'Story'), storyInstance.id])}"
        redirect(action: "viewstory", id: storyInstance.id)
    } else {
        render(view: "create", model: [storyInstance: storyInstance])
    }
}
I know it looks messy, but it's a prototype. Anyway, the problem is that I expect the storyInstance.addToPages(pages) line to add an instance to the set of pages every time the condition is true, but what actually happens is that I get only the last instance with the last page id, while I thought it would save the pages one by one so I could get a list of pages for every story.
Why does this happen, and is there a simpler way to do it than what I did?
I'm waiting for any help; it is appreciated.
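For what it's worth, the behaviour described is what you would expect when a single Page instance is reused: pages is created once before the loop, so every addToPages call adds the same object (a Set keeps only one copy), and each iteration overwrites its content. Below is a hedged sketch of the same action creating a fresh Page per chunk (names taken from the question; it assumes the raw story text arrives as params.content, since that is what the original binding reads):
def save = {
    def storyInstance = new Story(params)
    String content = params.content                    // assumed: the raw story text from the form
    String[] contentArr = content.split("\r\n")
    int i = 0
    StringBuilder page = new StringBuilder()
    for (String line : contentArr) {
        i++
        page.append(line).append("\r\n")
        if (i % 10 == 0) {
            // a NEW Page instance for every 10-line chunk instead of reusing a single object
            storyInstance.addToPages(new Page(content: page.toString()))
            page = new StringBuilder()
        }
    }
    if (page.length() > 0) {
        // keep any trailing partial page as well
        storyInstance.addToPages(new Page(content: page.toString()))
    }
    if (storyInstance.save(flush: true)) {
        flash.message = "${message(code: 'default.created.message', args: [message(code: 'story.label', default: 'Story'), storyInstance.id])}"
        redirect(action: "viewstory", id: storyInstance.id)
    } else {
        render(view: "create", model: [storyInstance: storyInstance])
    }
}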
I see a similar question in Problems while saving a pre-persisted object in Google App Engine (Java), and indeed I was not calling close() on my persistence manager. However, I am now calling close(), but my object update is still not being persisted. Specifically, I want to remove an element from a Set and save that smaller set. Here is the persistence-manager-related code, which doesn't throw an exception but doesn't save my data:
UserService userService = UserServiceFactory.getUserService();
User user = userService.getCurrentUser();
PersistenceManager pm = PMF.get().getPersistenceManager();
UserProfileInfo userProfile = pm.getObjectById(UserProfileInfo.class,user.getUserId());
int presize = userProfile.getAccounts().size();
AccountInfo ai = userProfile.removeAccount(id);
int postsize = userProfile.getAccounts().size();
UserProfileInfo committed = (UserProfileInfo)pm.makePersistent(userProfile);
int postcommitsize = committed.getAccounts().size();
pm.close();
And here is the relevant part of the UserProfileInfo class:
@PersistenceCapable(identityType = IdentityType.APPLICATION)
class UserProfileInfo {
    @Persistent
    private Set<AccountInfo> accounts;

    public AccountInfo removeAccount(Long id) throws Exception {
        Iterator<AccountInfo> it = accounts.iterator();
        StringBuilder sb = new StringBuilder();
        while(it.hasNext()) {
            AccountInfo acctInfo = it.next();
            Long acctInfoId = acctInfo.getId();
            if(acctInfoId.equals(id)) {
                it.remove();
                return acctInfo;
            }
            sb.append(" ");
            sb.append(acctInfoId);
        }
        throw new Exception("Cannot find id " + id + " Tried " + sb.toString());
    }
}
So it looks like the answer is that owned objects cannot use a Long primary key. The DataNucleus enhancer told me this for another object type I added; I'm not sure why it skipped this warning for my AccountInfo object.
I switched my key over to a String, and changed the annotations to use the string properly, and now I'm able to delete from the collection.
I'd have thought that the first thing to do when debugging anything would be to look at the log (at DEBUG level). It tells you what states the objects are in at the different points. So what state is the object in when you call makePersistent()? And after? And what happens when you call pm.close()?