I have a field in a class that should only be accessed directly from a getter. As an example...
public class CustomerHelper {
    private final Integer customerId;
    private String customerName_ = null;

    public CustomerHelper(Integer customerId) {
        this.customerId = customerId;
    }

    public String getCustomerName() {
        if (customerName_ == null) {
            // Get data from database.
            customerName_ = customerDatabase.readCustomerNameFromId(customerId);
            // Maybe do some additional post-processing, like casting to all uppercase.
            customerName_ = customerName_.toUpperCase();
        }
        return customerName_;
    }

    public String getFormattedCustomerInfo() {
        return String.format("%s: %s", customerId, getCustomerName());
    }
}
So even within the class itself, a method like getFormattedCustomerInfo should not be able to access the field via customerName_. Is there a way to enforce that a class cannot access a field directly, aside from going through the provided getter?
There is no such mechanism in Java (and I think there shouldn't be). If you are sure that getFormattedCustomerInfo should be prohibited from directly accessing customerName_, create another class and compose the two.
I would recommend CustomerInfoFormatter.
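A minimal sketch of that composition (getCustomerId() is an assumed accessor, since the original class does not expose one): the formatter can only go through CustomerHelper's public getters, so direct access to customerName_ becomes impossible by construction.

public class CustomerInfoFormatter {
    private final CustomerHelper helper;

    public CustomerInfoFormatter(CustomerHelper helper) {
        this.helper = helper;
    }

    public String getFormattedCustomerInfo() {
        // Only CustomerHelper's public API is visible here;
        // its private customerName_ field cannot be touched.
        return String.format("%s: %s", helper.getCustomerId(), helper.getCustomerName());
    }
}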
Also, I would rename customerName_ to customerName: the language already expresses privacy through the private modifier, so there is no need for extra naming indicators.
It looks like you are trying to cache the database value, and want to protect against accessing a value which has yet to be cached.
If this is true, then the variable customerName_ should not exist in the CustomerHelper class; the cached value should exist closer to the database.
The method customerDatabase.readCustomerNameFromId(customerId) should first look at a cache, and if the cache is empty, call the database and cache the result.
Effectively, customerName_ becomes a value in the cache: Map<Integer, String> cache where the key is customerId.
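A minimal sketch of that idea, assuming customerDatabase is an instance of a wrapper class you control (queryNameFromDb stands in for the real database call):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CustomerDatabase {
    private final Map<Integer, String> cache = new ConcurrentHashMap<>();

    public String readCustomerNameFromId(Integer customerId) {
        // The cache owns the memoized value now; the post-processing
        // happens exactly once, on first load.
        return cache.computeIfAbsent(customerId,
                id -> queryNameFromDb(id).toUpperCase());
    }

    private String queryNameFromDb(Integer id) {
        // Placeholder for the real database query.
        throw new UnsupportedOperationException("database access elided");
    }
}

getCustomerName() in CustomerHelper then shrinks to a single delegating call, and the customerName_ field disappears entirely.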
I have a Java class:

class User {
    private String name;
    private String address;
    private int age;
    private BigDecimal salary;
    // other fields
    // getters and setters
}
I receive a map of new values for these fields and update the object. It looks like this: a ChangeItem changeItem, where changeItem.key is the field's name and changeItem.value is the field's new value.
I create strategies for updating each field. For example, the common interface:
public interface UpdateStrategy<T> {
    T updateField(T t, ChangeItem changeItem) throws ValidationExceptions;
}
And an implementation:
public class UpdateNameStrategy implements UpdateStrategy<User> {
    private static final Pattern USER_NAME = Pattern.compile(...);

    @Override
    public User updateField(User user, ChangeItem changeItem) throws ValidationExceptions {
        String fieldValue = changeItem.value;
        if (!validate(fieldValue))
            throw new ValidationExceptions(changeItem);
        user.setName(fieldValue);
        return user;
    }

    private boolean validate(String value) {
        return USER_NAME.matcher(value).matches();
    }
}
In the real project I have 40 fields and 40 strategies, one for each field (with different validation and logic).
I think this class violates the SRP (single responsibility principle), so I moved the validation logic to a separate class. I changed the update method to:
public class UpdateNameStrategy implements UpdateStrategy<User> {
    @Override
    public User updateField(User user, ChangeItem changeItem) throws ValidationExceptions {
        String fieldValue = changeItem.value;
        ValidateFieldStrategy fieldValidator = new UserNameValidate(fieldValue);
        if (!fieldValidator.validate())
            throw new ValidationExceptions(changeItem);
        user.setName(fieldValue); // still performs the update after validation
        return user;
    }
}
and
public class UserNameValidate implements ValidateFieldStrategy {
    private static final Pattern USER_NAME = Pattern.compile(...);
    private final String value;

    public UserNameValidate(String value) {
        this.value = value;
    }

    @Override
    public boolean validate() {
        return USER_NAME.matcher(value).matches();
    }
}
And now I have 40 update strategies and 40 validators. Is this the correct way? Or can I make this code clearer?
I'm sorry for being blunt, but my eyes are bleeding while looking at this. You took one unnecessarily complicated validation model and split it in two to make it even more complicated. And none of it has much to do with the Single Responsibility Principle.
Without knowing anything specific to your domain problem, this looks like a superfluous usage of the Strategy pattern.
I've never seen a legitimate domain problem requiring a validation strategy split like this, for every single field.
An object in a domain is not just a collection of fields. It is also behavior governing the fields (which is the object state) and the rules governing the mutability of that state.
In general we want rich objects with behavior. And that behavior typically includes validation.
I sincerely doubt that every single field in a model requires validation to this level of granularity. Put the validation in the object's setter methods and be done with it.
You are killing yourself doing all this elaborate setup. We all want structure, but at some point all of this is just ceremony for building very tall sand castles.
Validation in general is part of an object. An object is responsible for governing its state: the collection of fields and values it possesses and controls.
Single Responsibility Principle does not mean extracting the responsibility of validating fields out of an object. That responsibility is intrinsic to the object.
Single Responsibility Principle concerns itself with "external" responsibility, the responsibility of an object to provide a single coherent function (or set of coherent functions) to someone that uses that object.
Consider a Printer object. This object is responsible to print. It is not responsible to manage the network connections between a printer and a user, for instance.
SRP is not limited to classes; it also applies to packages and modules. A Mathematics module should provide you with, obviously, mathematical routines. It should not provide routines for filesystem manipulation, right?
That's what the SRP is about. What you are doing, extracting validation behavior out of an object, that has little, if anything, to do with SRP.
Sometimes one might want to extract out common validation routines (check if a string is blank or null, or whether a number is a natural number.)
So you might have a class like this:
public class User {
    // some fields, blah blah

    public void setName(final String aName) {
        if (aName == null || aName.trim().length() < 1) {
            throw new SomeException("empty string blah blah");
        }
        this.name = aName.trim(); // model requires this to be trimmed.
    }

    public void setRawField(final String aValue) {
        if (aValue == null || aValue.trim().length() < 1) {
            throw new SomeException("empty string blah blah");
        }
        this.rawField = aValue; // model requires this to not be trimmed.
    }

    public void setRawField2(final String aValue) {
        // model requires this field to be non-null,
        // can be blank, and if not blank, must be all lower case.
        if (aValue == null) {
            throw new NullPointerException("null string blah blah");
        }
        this.rawField2 = aValue.toLowerCase();
    }
}
changed into a class that delegates minutia to an external validation utility class or module.
public class User {
    // some fields, blah blah

    public void setName(final String aName) {
        // model requires this to be trimmed
        this.name = Validator.notEmptyOrDie(aName).trim();
    }

    public void setRawField(final String aValue) {
        // model requires this to *not* be trimmed
        this.rawField = Validator.notEmptyOrDie(aValue);
    }

    public void setRawField2(final String aValue) {
        // model requires this field to be non-null,
        // can be blank, and if not blank, must be all lower case.
        // too contrived to refactor, leave it here.
        if (aValue == null) {
            throw new NullPointerException("null string blah blah");
        }
        this.rawField2 = aValue.toLowerCase();
    }
}

public class Validator {
    static public String notEmptyOrDie(final String aString) {
        if (aString == null || aString.trim().length() < 1) {
            throw new SomeException("empty string blah blah");
        }
        return aString;
    }
}
This is an approach I actually follow, to refactor parts of common validation. I factor out minutia.
But the core validation logic, if any, remains in the object. Notice that validation is still part of the User class. All that got extracted is the minutia.
The logic that declares the intent of validation (check if blank or die) still remains part of the User class. It is intrinsic to the class' behavior.
In some models, the User class might not require validation at all. It might be just a data shuttle, a POJO.
OTOH, in a model that requires an object to validate its state, that validation should usually go inside the class, and a developer must have a very good argument for extracting that logic the way you did in your sample code.
SRP says nothing about how you compose responsibility internal to the object, only external to consumers of said object.
As a rule of thumb, validation of object fields belongs to the object as logic internal to the object. It is intrinsic to the object's behavior, invariants, preconditions and postconditions.
Very rarely do you extract the entire validation out of an object (unless we are talking about POJOs serialized and deserialized by an external package, with validations added declaratively via annotations or some sort of controlling configuration descriptor.)
Hit me up if you still have any questions. Not sure how fast I can answer back, but I don't mind answering questions if I can.
*** EDIT ***
User @qujck mentions a valid concern with this proposed approach: it is not possible to differentiate between validation exceptions (because a common exception is used for all of them.)
One possibility (which I've used) is to have overloaded and/or polymorphic validators:
public class Validator {
    static public String notEmptyOrDie(final String aString) {
        return Validator.notEmptyOrDie(aString, null);
    }

    static public String notEmptyOrDie(final String aString,
                                       final String aFieldName) {
        if (aString == null || aString.trim().length() < 1) {
            throw new SomeException(
                    (aFieldName == null ? "" : aFieldName + " ")
                    + "empty string blah blah");
        }
        return aString;
    }
}
If one uses a hierarchy of validation exceptions with common constructors, then one could take this further by passing the desired exception class, and use reflection to create instances to be thrown.
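A hedged sketch of that idea (the caller-supplied exception type is assumed to expose a public (String) constructor):

public class Validator {
    static public <X extends RuntimeException> String notEmptyOrDie(
            final String aString, final Class<X> exceptionClass) {
        if (aString == null || aString.trim().length() < 1) {
            try {
                // Reflectively instantiate the desired exception type.
                throw exceptionClass.getConstructor(String.class)
                                    .newInstance("empty string blah blah");
            } catch (ReflectiveOperationException e) {
                // The supplied class had no usable (String) constructor.
                throw new IllegalStateException(e);
            }
        }
        return aString;
    }
}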
I've done that also. Actually, I'm doing that now for a common error-throwing mechanism in an EJB layer that itself reaches to another system via network protocols.
But that's something I do to cope with an existing system, not something I would do if I had a design choice. And it still limits itself to refactoring validation or error handling to its core elements.
Actual, object-specific validation still remains at/within the object itself.
It seems like my RealmObject values are being hidden by the RealmProxy class, but they can be set from the proxy class.
My model is pretty straightforward, as you can see.
public class GroupRealm extends RealmObject {
    @PrimaryKey
    public String id;
    @Index
    public String name;
    public String imageUrl;
    public int order;
    public GroupRealm parent;
    public RealmList<GroupRealm> children;
    public RealmList<ContentRealm> contents;
}
This is how I am setting the values (db is a valid Realm, and everything is in a transaction that commits fine):
GroupRealm gr = db.where(GroupRealm.class).equalTo("id", g.GroupID).findFirst();
if (gr == null) {
    gr = db.createObject(GroupRealm.class, g.GroupID);
}
gr.imageUrl = g.GlyphUrl;
gr.name = g.Title;
gr.order = g.OrderNum;
The image below is what I get when I query the db later on (same variable name, but not the same place in the code).
In my Android library project, where my RealmObjects are defined, I have the necessary plugins:
apply plugin: 'com.android.library'
apply plugin: 'realm-android'
and on the project level I am setting the correct dependencies:
dependencies {
    classpath 'com.android.tools.build:gradle:2.1.0'
    classpath "io.realm:realm-gradle-plugin:0.90.1"

    // NOTE: Do not place your application dependencies here; they belong
    // in the individual module build.gradle files
}
I am out of ideas. I retrieve the GroupRealm as expected, but all of the public properties exposed through the proxy class return null!
Relevant FAQ in documentation: https://realm.io/docs/java/latest/#debugging
Realm uses the Android Gradle Transform API, which makes it possible to manipulate compiled class files before they are converted to dex files.
More details are in the io.realm.transformer.RealmTransformer and io.realm.transformer.BytecodeModifier classes, which can be found in Realm's GitHub repository.
What RealmTransformer does, among other things, is replace all accesses to fields of the user's RealmObjects with the appropriate Realm accessors.
You can also check the resulting classes inside the folder app/build/intermediates/transforms/RealmTransformer/.
Example of a setter. This line of your code:
gr.imageUrl = g.GlyphUrl;
will be replaced with something like this:
String var5 = g.GlyphUrl;
gr.realmSet$imageUrl(var5);
Example of a getter. This line:
String url = gr.imageUrl;
will be replaced with something like this:
String url = gr.realmGet$imageUrl();
Example use case: you have created the class GroupRealm. Realm, using the Transform API, generates GroupRealmRealmProxy. This proxy class looks like this:
public class GroupRealmRealmProxy extends GroupRealm implements RealmObjectProxy, GroupRealmRealmProxyInterface {
    private final GroupRealmRealmProxy.GroupRealmColumnInfo columnInfo;
    private final ProxyState proxyState;
    private RealmList<GroupRealm> childrenRealmList;
    private RealmList<ContentRealm> contentsRealmList;
    private static final List<String> FIELD_NAMES;

    GroupRealmRealmProxy(ColumnInfo columnInfo) {
        ...
    }

    public String realmGet$id() {
        this.proxyState.getRealm$realm().checkIfValid();
        return this.proxyState.getRow$realm().getString(this.columnInfo.idIndex);
    }

    public void realmSet$id(String value) {
        this.proxyState.getRealm$realm().checkIfValid();
        if (value == null) {
            this.proxyState.getRow$realm().setNull(this.columnInfo.idIndex);
        } else {
            this.proxyState.getRow$realm().setString(this.columnInfo.idIndex, value);
        }
    }

    public String realmGet$name() {
        this.proxyState.getRealm$realm().checkIfValid();
        return this.proxyState.getRow$realm().getString(this.columnInfo.nameIndex);
    }

    public void realmSet$name(String value) {
        this.proxyState.getRealm$realm().checkIfValid();
        if (value == null) {
            this.proxyState.getRow$realm().setNull(this.columnInfo.nameIndex);
        } else {
            this.proxyState.getRow$realm().setString(this.columnInfo.nameIndex, value);
        }
    }

    ...
}
You can observe that the methods realmSet$name and realmGet$name don't access the field name declared in the class GroupRealm; they use proxyState instead.
Now, let's get back to the usage of GroupRealm. When you debug this code:
GroupRealm gr = db.where(GroupRealm.class).equalTo("id", g.GroupID).findFirst();
if (gr == null) {
    gr = db.createObject(GroupRealm.class, g.GroupID);
}
gr.imageUrl = g.GlyphUrl;
gr.name = g.Title;
gr.order = g.OrderNum;
in reality its decompiled version looks like this:
GroupRealm gr = (GroupRealm)realm.where(GroupRealm.class).equalTo("id", g.GroupId).findFirst();
if (gr == null) {
    gr = (GroupRealm)realm.createObject(GroupRealm.class, g.GroupId);
}

String var7 = g.GlyphUrl;
gr.realmSet$imageUrl(var7);
var7 = g.Title;
gr.realmSet$name(var7);
int var8 = g.OrderNum;
gr.realmSet$order(var8);
First of all, gr is an instance of the GroupRealmRealmProxy class. As you can see, setting gr.name is replaced by gr.realmSet$name(var7). This means the field name of GroupRealm is never used. The situation is analogous in the case of realmGet$.
While debugging you see your version of the source code, but you're actually running a modified version with the injected realmSet$ and realmGet$ methods.
The fields are null. Your field accesses are rewritten to go through proxy accessor methods that read from Realm's underlying native storage. Previously (before 0.88.0) Realm used to create a dynamic proxy that overrode your getters and setters to use the native proxy implementation.
The fields don't have values. But as you can see, the Realm object has the values just fine: it says so in the toString() value.
There is nothing to be done about this. Because of the "clever" thing that Realm is doing, the debugger is completely prevented from doing what it is supposed to. You'll have to rely on a lot of Log.d statements.
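For example (tag and message are arbitrary), a plain field read still compiles into the proxy accessor, so the logged value is correct even though the debugger shows null:

Log.d("GroupRealm", "name=" + gr.name); // becomes gr.realmGet$name() after the transform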
I'm sorry. That's just the reality of it.
This is because of Realm's proxy model, which provides zero-copy storage.
You can use Vicpinm's Kotlin Realm Extensions library: https://github.com/vicpinm/Kotlin-Realm-Extensions
If you still want to do it in Java, then you can achieve it with:
Realm.getDefaultInstance().copyFromRealm(realmObject)
The answers above are all right if you directly use a RealmObject retrieved from your Realm. With a managed RealmObject (an object "directly" connected to your Realm, i.e. the "real" instance inside your Realm, which you can modify only inside a Realm transaction and whose changes instantly affect all other managed instances), you can't see its values inside the debugger because of the proxy.
Anyway, you can work around this by using an UNMANAGED object, that is, by COPYING the RealmObject from the Realm:
MyRealmObject obj = getRealmObjectFromRealm();
if (obj != null) {
    obj = mRealm.copyFromRealm(obj);
}
This way you will see all properties of your realm object inside the debugger.
Obviously, if you need to use a managed RealmObject in your code, then while debugging you need to change your code to create another "MyRealmObject" instance that is a copy of the managed one.
This way you will see all of the object's properties inside the debugger.
Hope this is helpful. Greetings & have a nice time coding! :D
I am trying to implement a distributed cache using Hazelcast in my application, specifically Hazelcast's IMap. The problem I have is that every time I get a value from the map and update it, I need to do a put(key, value) again. If my value object has 10 properties and I have to update all 10, then I have to call put(key, value) 10 times. Something like:
IMap<Integer, Employee> mapEmployees = hz.getMap("employees");
Employee emp1 = mapEmployees.get(100);
emp1.setAge(30);
mapEmployees.put(100, emp1);
emp1.setSex("F");
mapEmployees.put(100, emp1);
emp1.setSalary(5000);
mapEmployees.put(100, emp1);
If I don't do it this way, some other node operating on the same Employee object may update it, and the final result is that the Employee object is not synchronized. Is there any solution that avoids calling put explicitly multiple times? With a ConcurrentHashMap I don't need to do this, because if I change the object, the map also gets updated.
As of version 3.3 you'll want to use an EntryProcessor. What you really want to do here is build an EntryProcessor<Integer, Employee> and call it using:
mapEmployees.executeOnKey(100, new EmployeeUpdateEntryProcessor(
        new ObjectContainingUpdatedFields(30, "F", 5000)));
This way, Hazelcast handles locking the map on the key for that Employee object and allows you to run whatever code is in the EntryProcessor's process() method atomically, including updating values in the map.
So you'd implement EntryProcessor with a custom constructor that takes an object containing all of the properties you want to update; then in process() you construct the final Employee object that will end up in the map and call entry.setValue(). Don't forget to create a StreamSerializer for the EmployeeUpdateEntryProcessor that can serialize Employee objects, so that you don't get stuck with java.io serialization. A rough sketch follows below.
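A rough sketch of what that could look like (class and accessor names are illustrative; it assumes Hazelcast 3.x's AbstractEntryProcessor, which supplies the backup processor for you, and both classes must be serializable):

import com.hazelcast.map.AbstractEntryProcessor;
import java.util.Map;

public class EmployeeUpdateEntryProcessor extends AbstractEntryProcessor<Integer, Employee> {
    private final ObjectContainingUpdatedFields updates;

    public EmployeeUpdateEntryProcessor(ObjectContainingUpdatedFields updates) {
        this.updates = updates;
    }

    @Override
    public Object process(Map.Entry<Integer, Employee> entry) {
        // Runs atomically on the cluster member that owns the key.
        Employee emp = entry.getValue();
        emp.setAge(updates.getAge());
        emp.setSex(updates.getSex());
        emp.setSalary(updates.getSalary());
        entry.setValue(emp); // writes the updated value back into the map
        return null;
    }
}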
Source:
http://docs.hazelcast.org/docs/3.5/manual/html/entryprocessor.html
Probably a transaction is what you need, or you may want to take a look at distributed locks. Note that with your solution, if this code is run by two threads, the changes made by one of them will be overwritten. A sketch of the transactional route follows below.
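A minimal sketch of the transactional route (hz is the HazelcastInstance from the question; TransactionContext and TransactionalMap are Hazelcast's transaction API):

TransactionContext context = hz.newTransactionContext();
context.beginTransaction();
try {
    TransactionalMap<Integer, Employee> employees = context.getMap("employees");
    // getForUpdate() locks the key until the transaction commits or rolls back.
    Employee emp1 = employees.getForUpdate(100);
    emp1.setAge(30);
    emp1.setSex("F");
    emp1.setSalary(5000);
    employees.put(100, emp1);
    context.commitTransaction();
} catch (RuntimeException e) {
    context.rollbackTransaction();
    throw e;
}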
You could do something like this for your Employee class (simplified code with one instance variable only):
public final class Employee implements Frozen<Builder> {
    private final int salary;

    private Employee(Builder builder) {
        salary = builder.salary;
    }

    public static Builder newBuilder() {
        return new Builder();
    }

    @Override
    public Builder thaw() {
        return new Builder(this);
    }

    public static final class Builder implements Thawed<Employee> {
        private int salary;

        private Builder() {
        }

        private Builder(Employee employee) {
            salary = employee.salary;
        }

        public Builder withSalary(int salary) {
            this.salary = salary;
            return this;
        }

        @Override
        public Employee freeze() {
            return new Employee(this);
        }
    }
}
This way, to modify your cache, you would:
Employee victim = map.get(100);
map.put(100, victim.thaw().withSalary(whatever).freeze());
The put itself is then a single atomic operation (though note that the get/put pair as a whole can still race with another writer, as the next answer points out).
If there is a possibility that another node can update data that your node is working with, then using put() will overwrite the changes made by the other node. Usually that is unwanted behavior, because it leads to data loss and an inconsistent data state.
Take a look at the IMap.replace() method and the other ConcurrentMap-related methods. If replace() fails, you've hit a change collision. In that case you should give it another attempt:
re-read the entry from Hazelcast
update its fields
save it to Hazelcast with replace()
After several failed attempts you can throw a StorageException to the upper level. A sketch of this retry loop follows below.
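A sketch of that loop (the attempt limit and the copy constructor are illustrative, and StorageException is the exception suggested above; note that replace() compares the old value using equals(), so Employee needs a meaningful equals() implementation):

for (int attempt = 0; attempt < 3; attempt++) {
    Employee oldEmp = mapEmployees.get(100);         // 1. re-read the entry
    Employee newEmp = new Employee(oldEmp);          // 2. update a copy (hypothetical copy constructor)
    newEmp.setSalary(5000);
    if (mapEmployees.replace(100, oldEmp, newEmp)) { // 3. save with replace()
        return;                                      // success: no collision
    }
}
throw new StorageException("update of employee 100 kept colliding");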
You should use tryLock on your map entry:
long timeout = 60; // Define your own timeout
if (mapEmployees.tryLock(100, timeout, TimeUnit.SECONDS)) {
    try {
        Employee emp1 = mapEmployees.get(100);
        emp1.setAge(30);
        emp1.setSex("F");
        emp1.setSalary(5000);
        mapEmployees.put(100, emp1);
    } finally {
        mapEmployees.unlock(100);
    }
} else {
    // do something else like log.warn(...)
}
See: https://docs.hazelcast.com/imdg/4.2/data-structures/fencedlock#releasing-locks-with-trylock-timeout
In my data model I have many entities where attributes are mapped to enumerations like this:
@Enumerated(EnumType.STRING)
private MySpecialEnum enumValue;
MySpecialEnum defines some fixed values. The mapping works fine, and if the database holds a NULL value for a column, I get null in the enumValue attribute too.
The problem is that my backend module (which I have no influence on) uses spaces in CHAR columns to indicate that no value is set. So I get an IllegalArgumentException instead of a null value.
So my question is: is there a JPA event where I can change the value read from the database before it is mapped to the enum attribute?
For write access there is @PrePersist, where I can change null values to spaces. I know there is the @PostLoad event, but that is handled after mapping.
Btw: I am using OpenJPA as shipped with WebSphere Application Server.
You could map the enum-type field as @Transient (it will not be persisted) and map another field directly as a String, synchronizing them in @PostLoad:
@Transient
private MyEnum fieldProxy;

private String fieldDB;

@PostLoad
public void postLoad() {
    if (" ".equals(fieldDB))
        fieldProxy = null;
    else
        fieldProxy = MyEnum.valueOf(fieldDB);
}
Use get/setFieldProxy() in your Java code.
As for synchronizing the other way, I'd do it in a setter, not in a @PreUpdate, as changes to @Transient fields probably do not mark the entity as modified and the update operation might not be triggered (I'm not sure of this):
public void setFieldProxy(MyEnum value) {
    fieldProxy = value;
    if (fieldProxy == null)
        fieldDB = " ";
    else
        fieldDB = value.name();
}
OpenJPA offers @Externalizer and @Factory to handle "special" database values.
See this: http://ci.apache.org/projects/openjpa/2.0.x/manual/manual.html#ref_guide_pc_extern_values
You might end up with something like this (not tested):
#Factory("MyClass.mySpecialEnumFactory")
private MySpecialEnum special;
...
public static MySpecialEnum mySpecialEnumFactory(String external) {
if(StringUtils.isBlank(external) return null; // or why not MySpecialEnum.NONE;
return MySpecialEnum.valueOf(external);
}
public MyClass(Integer userId, Integer otherId) {
    if (!userId.equals(otherId)) {
        this.userId = userId;
        this.otherId = otherId;
    }
}
That's as far as I got. How do I ensure an instance is never created with matching IDs?
If you can't allow the two values to be equal, then pretty much your only option is to raise an exception in that case, for example as sketched below.
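For instance, a minimal sketch:

public MyClass(Integer userId, Integer otherId) {
    if (userId.equals(otherId)) {
        throw new IllegalArgumentException("userId and otherId must not match");
    }
    this.userId = userId;
    this.otherId = otherId;
}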
I created another method and made the constructor private; it returns null for matching IDs:
private MyClass(Integer userId, Integer otherId) {
    this.userId = userId;
    this.otherId = otherId;
}

public static MyClass getInstance(Integer userId, Integer otherId) {
    if (!userId.equals(otherId)) {
        return new MyClass(userId, otherId);
    }
    return null;
}
I might be completely missing the point of your design, but if you want to create instances of an object with unique IDs that never clash, consider using a UUID. Your instances should never have to do a 'circle-jerk' of ID comparisons to make sure none of them is violating the uniqueness constraints.
Documentation on UUID.
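A minimal sketch of that approach (class and field names are illustrative):

import java.util.UUID;

public class MyClass {
    // Globally unique with overwhelming probability; no cross-instance checks needed.
    private final UUID id = UUID.randomUUID();

    public UUID getId() {
        return id;
    }
}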
I use another approach: I keep a registry of already-used IDs (in a HashSet) and allow instantiation of objects only via a static factory.
class User {
    private int _id;
    private static HashSet<Integer> _instanced = new HashSet<>();

    public static User getInstance(Integer id) {
        if (_instanced.contains(id)) {
            return null;
        }
        _instanced.add(id); // remember the id so it cannot be reused
        return new User(id);
    }

    private User(Integer id) {
        _id = id.intValue();
    }

    // Getter/Setter for ID
}
Since the constructor is private, no one can instantiate another User with the same id.
In your methods you could then write:
User x = User.getInstance(1);
Of course this adds one more level to your solution. Still, I prefer this kind of approach.