I am trying to insert values into a Room database, but it's not working: when I checked the database, the tables were not created. I have created the database, DAO, and Repository in Java, and I call the insert DAO inside a coroutine in my Kotlin MainActivity class.
DAO
@Insert
public void addExpense(List<Expenses> exp);
Repository
public class Repository {
    private ExpensesDao expensesDao;
    private SubscriptionsDao subscriptionsDao;
    private static AppDatabase db;

    public Repository(Context context) {
        initDb(context);
        expensesDao = db.expensesDao();
        subscriptionsDao = db.subscriptionsDao();
    }

    private static void initDb(Context context) {
        if (db == null) {
            db = Room.databaseBuilder(
                    context,
                    AppDatabase.class, "local_db"
            )
                    .addMigrations(AppDatabase.DbMigration)
                    .build();
        }
    }

    public void addExpense(List<Expenses> exp) {
        expensesDao.addExpense(exp);
    }
}
MainActivity.kt
class MainActivity : AppCompatActivity() {
    private var firstRun = true

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        val db = Repository(applicationContext)
        var spacesList: List<String> = listOf("No Spaces Found")
        var expList: List<Expenses> = listOf(
            Expenses("dummy", LocalDate.now().toString(), 12.22, "dummy1"),
            Expenses("dummy", LocalDate.now().toString(), 12.22, "dummy2"),
            Expenses("dummy", LocalDate.now().toString(), 13.22, "dummy3"),
            Expenses("dummy", LocalDate.now().toString(), 14.22, "dummy4"),
            Expenses("dummy", LocalDate.now().toString(), 15.22, "dummy5"),
            Expenses("dummy", LocalDate.now().toString(), 16.22, "dummy6")
        )
        CoroutineScope(Dispatchers.IO).launch {
            // the insert call
            val x = db.addExpense(expList)
            println("-->> " + x.toString())
        }
        val tempSpacesList = db.getAllSpaces().value
        if (tempSpacesList?.isEmpty() == true) {
            spacesList = tempSpacesList
        }
    }
}
Edit
@Entity
public class Expenses {
    public Expenses(String space, String date, double amount, String description) {
        this.space = space;
        this.date = date;
        this.amount = amount;
        this.description = description;
    }

    @PrimaryKey(autoGenerate = true)
    int uid;
    @ColumnInfo(name = "space")
    String space;
    @ColumnInfo(name = "date")
    String date;
    @ColumnInfo(name = "amount")
    double amount;
    @ColumnInfo(name = "description")
    String description;
}
Logcat (not much here..)
15:11:55.340 E Could not remove dir '/data/data/org.android.app/code_cache/.ll/': No such file or directory
Is this the right way of using insert dao?
What can I improve on this implementation?
Your code (less the use of Subscriptions and getAllSpaces), i.e. MainActivity being:-
class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        val db = Repository(applicationContext)
        var spacesList: List<String> = listOf("No Spaces Found")
        var expList: List<Expenses> = listOf(
            Expenses("dummy", LocalDate.now().toString(), 12.22, "dummy1"),
            Expenses("dummy", LocalDate.now().toString(), 12.22, "dummy2"),
            Expenses("dummy", LocalDate.now().toString(), 13.22, "dummy3"),
            Expenses("dummy", LocalDate.now().toString(), 14.22, "dummy4"),
            Expenses("dummy", LocalDate.now().toString(), 15.22, "dummy5"),
            Expenses("dummy", LocalDate.now().toString(), 16.22, "dummy6")
        )
        CoroutineScope(Dispatchers.IO).launch {
            // the insert call
            val x = db.addExpense(expList)
            println("-->> " + x.toString())
        }
        /*
        val tempSpacesList = db.getAllSpaces().value
        if (tempSpacesList?.isEmpty() == true) {
            spacesList = tempSpacesList
        }
        */
    }
}
Works, with App Inspection showing the inserted rows.
Perhaps your issue is with how you are trying to see if the tables exist. Have you used App Inspection (and waited for it to do its business)?
The following assumptions have been made:
AppDatabase is Java, as per:-
@Database(entities = {Expenses.class}, exportSchema = false, version = 1)
abstract class AppDatabase extends RoomDatabase {
    abstract ExpensesDao expensesDao();
}
The dependencies in use for Room are (as per the module build.gradle):-
kapt 'androidx.room:room-compiler:2.5.0'
implementation 'androidx.room:room-ktx:2.5.0'
implementation 'androidx.room:room-runtime:2.5.0'
annotationProcessor 'androidx.room:room-compiler:2.5.0'
with the plugins section including:-
id 'kotlin-kapt'
Is this the right way of using insert dao?
Yes
What can I improve on this implementation?
You could eradicate the inefficient use of autoGenerate = true. It doesn't actually cause auto generation (an INTEGER PRIMARY KEY is generated either way when no value is supplied); instead it adds the AUTOINCREMENT keyword, and makes Room interpret a uid of 0 not as 0 but as "no value", so a value is then generated in its place.
That is, you could use:-
@Entity
public class Expenses {
    public Expenses(String space, String date, double amount, String description) {
        this.space = space;
        this.date = date;
        this.amount = amount;
        this.description = description;
    }

    @PrimaryKey
    Integer uid = null; // <<<<<<<< now Integer (object, not primitive) so it can be null
    ....
AUTOINCREMENT, aka autoGenerate = true, introduces a constraint (rule) that forces a generated value to be greater than any value ever used, on a per-table basis. Without AUTOINCREMENT the generated value will be 1 greater than the current maximum value, thus allowing the reuse of values from rows that have been deleted.
The inefficiency is that, to know which values have ever been used, SQLite creates a table named sqlite_sequence that stores the highest allocated value for each such table. This additional table needs to be read and updated whenever a value is generated, hence the documentation saying:
The AUTOINCREMENT keyword imposes extra CPU, memory, disk space, and disk I/O overhead and should be avoided if not strictly needed. It is usually not needed.
Here are all of the tables when using autoGenerate = true (again per App Inspection): the sqlite_sequence table appears alongside the Expenses table. With the suggested removal of autoGenerate = true, sqlite_sequence is not created.
Another improvement is that, instead of int or Integer, it is more correct to use long or Long for such columns (an INTEGER column type that is the PRIMARY KEY, and thus a generated value if no value is provided when inserting). The value can exceed what an int/Integer can hold (max 2147483647), as SQLite stores up to a 64-bit signed integer (max 9223372036854775807).
With AUTOINCREMENT, when this massive value has been reached the next insert fails with an SQLITE_FULL error. Without it, an unused value would be applied if one is available.
Additionally, the actual generated value can be very useful to have, as it uniquely identifies the row (especially if using relationships between tables). You can get this value when inserting a row or rows, but it will be returned as a long, not an Int. You can also retrieve an array of values when inserting multiple rows (again longs, not ints). So you could have:-
@Insert
public long[] addExpense(List<Expenses> exp); /* or Long[] */
If using @Insert(onConflict = OnConflictStrategy.IGNORE), then for any row IGNOREd due to an ignorable conflict (e.g. a duplicate) the returned value for that insert would be -1.
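For illustration, the Repository's addExpense could then pass those generated ids back to the caller; a minimal sketch, assuming the DAO signature just above:-

    // Repository method returning the generated row ids: one id per inserted
    // row, in insertion order (-1 for any row IGNOREd on conflict).
    public long[] addExpense(List<Expenses> exp) {
        return expensesDao.addExpense(exp);
    }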
With the suggested changes, App Inspection shows no noticeable difference: the data has been added (the app was uninstalled first, deleting the entire database).
Related
I am trying to load data into the grid from another database table.
My grid is created from the User entity:
private Grid<User> grid = new Grid<>(User.class, false);
User has several columns like
private Long userId;
private String username;
private String email;
.
.
.
The User contains the record identifier from the second table (entity) too:
private Long organizationId;
So I added columns to the grid:
grid.addColumn(User::getUserId).setAutoWidth(true).setHeader("id");
grid.addColumn(User::getUsername).setAutoWidth(true).setHeader("login");
And I created my "own" column which uses data from another table (entity):
grid.addColumn(user -> organizationService.findOrganizationNameByOrganizationId(user.getOrganizationId())).setAutoWidth(true).setHeader("organization name");
But the problem is that loading is very slow, because the User table has about 500,000 rows and a query is sent to the database for each row...
Is it possible to load the organization name by the organization id defined in the user entity in a batch, for example?
First of all, I'd not bother with enriching rows that way if I don't have to. If the solution is as simple as just joining the data in the DB, creating a view, ... then just do that.
Yet there are times this has to be done efficiently, because the data to enrich does not come from the same source.
I think the best place to do that is as part of the lazy loading of the data. Once you hold the stream for the currently loaded page, you can map over the stream and enrich the data, or do a batch load, etc. Pick whatever is most efficient.
If you have to do that a lot, make sure to provide useful tools inside your repositories; for quickly adding things just for one grid, you can as well hijack the DataProvider or its successor.
Following is an example (note the XXX):
#Route("")
class GridView extends Div {
GridView() {
add(new Grid<User>(User).tap {
setItems({ q ->
// XXX: this materializes the stream first to get all
// unique organizationId:s, batch-load the
// Organization, and finally transform the User
def users = FakeUserRepo.page(q.offset, q.limit).toList()
def orgaMap = FakeOrganizationRepo.batchLoad(users*.organizationId.toSet())
users.stream().map {
it.organizationName = orgaMap[it.organizationId].name; it
}
}, { q ->
FakeUserRepo.count()
})
})
}
}
#TupleConstructor
class Organization {
Integer id
String name
}
class FakeOrganizationRepo {
public static final Map<Integer, Organization> ORGANIZATIONS = (0..<5).collectEntries { [it, new Organization(it, "Organization $it")] }
static Map<Integer, Organization> batchLoad(Set<Integer> ids) {
ORGANIZATIONS.subMap(ids)
}
}
#TupleConstructor
class User {
Integer id
String name
Integer organizationId
String organizationName
}
class FakeUserRepo {
public static final Collection<User> USERS = (1..500_000).collect {
new User(it, "User $it", it % FakeOrganizationRepo.ORGANIZATIONS.size())
}
static int count() {
USERS.size()
}
static Stream<User> page(int offset, int limit) {
USERS.stream().skip(offset).limit(limit)
}
}
I have a model class:
public class LimitsModel {
    private Long id;
    private Long userId;
    private Long channel;
}
I also have a unique constraint on my entity, set on the fields userId and channel. Throughout the application there's no chance those could duplicate.
The limits were added mid-development, so we already had users and channels and had to create a limits entity for every existing user. So we create them during some operation, and there's no other place they're created. Here's how we create them:
List<LimitsModel> limits = userDAO.getUserLimits(userId, channel);
if (isNull(limits) || limits.isEmpty()) {
    List<LimitsModel> limitsToSave = this.prepareLimits();
    limits = userDAO.createOrUpdateLimits(limitsToSave);
}
.
.
.
other operations
What I'm getting is
Caused by: java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (USER_LIMITS_UNIQUE) violated
Any clues what could be the cause? I'm simply reading the limits from the database, checking whether they exist, and creating them if not. Where's the room for a unique constraint violation?
EDIT
createOrUpdateLimits just calls this method:
public void createOrUpdateAll(final Collection<?> objects) {
    getHibernateTemplate().executeWithNativeSession(session -> {
        Iterator<?> iterator = objects.iterator();
        while (iterator.hasNext()) {
            session.saveOrUpdate(iterator.next());
        }
        return null;
    });
}
prepareLimits is nothing complicated, a simple builder:
private List<LimitsModel> prepareLimits() {
    List<LimitsModel> limitsToSave = LimitsModel.CHANNELS.stream()
            .map(channel -> LimitsModel.builder()
                    .userId(UserUtils.getId())
                    .channel(channel)
                    .build())
            .collect(Collectors.toList());
    return limitsToSave;
}
getUserLimits:
public List<LimitsModel> getUserLimits(Long userId, Long channel) {
    return getHibernateTemplate().execute(session -> {
        final Criteria criteria = session.createCriteria(LimitsModel.class)
                .add(Restrictions.eq(LimitsModel.PROPERTY_USER_ID, userId));
        if (nonNull(channel)) {
            criteria.add(Restrictions.eq(LimitsModel.PROPERTY_CHANNEL, channel));
        }
        return criteria.list();
    });
}
The constraint is on (userId, channel). There is a possibility that the block that gets the limits and then creates them is called twice. But shouldn't the new limits already be in the database when it's called the second time? Isn't the transaction already committed?
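For illustration, the race that last paragraph describes (a hypothetical interleaving of two concurrent calls, not taken from actual logs) would look like:

    // Thread A: getUserLimits(userId, channel) -> empty list
    // Thread B: getUserLimits(userId, channel) -> empty list (A has not committed yet)
    // Thread A: createOrUpdateLimits(...)      -> inserts the rows, commits
    // Thread B: createOrUpdateLimits(...)      -> ORA-00001: unique constraint violated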
https://vladmihalcea.com/how-to-implement-a-custom-string-based-sequence-identifier-generator-with-hibernate/
I tried to do this for a field that is not a primary key.
The same solution is also here:
How to implement IdentifierGenerator with PREFIX and separate Sequence for each entity
But it does not even reach the Java method when I run the program; the field is saved as null.
And I can't see the log output that I put inside my class; there is no log from my class.
I copied it from that blog, but my code is:
public class StringSequenceIdentifier
        implements IdentifierGenerator, Configurable {

    public static final String SEQUENCE_PREFIX = "sequence_prefix";

    private String sequencePrefix;

    private String sequenceCallSyntax;

    @Override
    public void configure(
            Type type, Properties params, ServiceRegistry serviceRegistry)
            throws MappingException {
        System.out.println("xxx");
        final JdbcEnvironment jdbcEnvironment =
                serviceRegistry.getService(JdbcEnvironment.class);
        final Dialect dialect = jdbcEnvironment.getDialect();

        final ConfigurationService configurationService =
                serviceRegistry.getService(ConfigurationService.class);
        String globalEntityIdentifierPrefix =
                configurationService.getSetting("entity.identifier.prefix", String.class, "SEQ_");

        sequencePrefix = ConfigurationHelper.getString(
                SEQUENCE_PREFIX,
                params,
                globalEntityIdentifierPrefix);

        final String sequencePerEntitySuffix = ConfigurationHelper.getString(
                SequenceStyleGenerator.CONFIG_SEQUENCE_PER_ENTITY_SUFFIX,
                params,
                SequenceStyleGenerator.DEF_SEQUENCE_SUFFIX);

        final String defaultSequenceName = ConfigurationHelper.getBoolean(
                SequenceStyleGenerator.CONFIG_PREFER_SEQUENCE_PER_ENTITY,
                params,
                false)
                ? params.getProperty(JPA_ENTITY_NAME) + sequencePerEntitySuffix
                : SequenceStyleGenerator.DEF_SEQUENCE_NAME;

        sequenceCallSyntax = dialect.getSequenceNextValString(
                ConfigurationHelper.getString(
                        SequenceStyleGenerator.SEQUENCE_PARAM,
                        params,
                        defaultSequenceName));
    }

    @Override
    public Serializable generate(SharedSessionContractImplementor session, Object obj) {
        System.out.println("xxx");
        if (obj instanceof Identifiable) {
            Identifiable identifiable = (Identifiable) obj;
            Serializable id = identifiable.getId();
            if (id != null) {
                return id;
            }
        }
        long seqValue = ((Number) Session.class.cast(session)
                .createSQLQuery(sequenceCallSyntax)
                .uniqueResult()).longValue();
        return sequencePrefix + String.format("%011d%s", 0, seqValue);
    }
}
This is in my domain class:
@GenericGenerator(
        name = "assigned-sequence",
        strategy = "xxxxxx.StringSequenceIdentifier",
        parameters = {
                @org.hibernate.annotations.Parameter(
                        name = "sequence_name", value = "hibernate_sequence"),
                @org.hibernate.annotations.Parameter(
                        name = "sequence_prefix", value = "CTC_"),
        }
)
@GeneratedValue(generator = "assigned-sequence", strategy = GenerationType.SEQUENCE)
private String referenceCode;
WHAT I WANT IS
I need a unique field which is not the primary key. I decided that incrementing is the best solution, because otherwise I would have to check each randomly generated value against the database for collisions (I'm also open to suggestions on this).
It will be around 5-6 characters and alphanumeric.
I want JPA to increment this, but it seems I can't do it.
This is very similar to Hibernate JPA Sequence (non-Id), but I don't think it's an exact duplicate. Yet the answers seem to apply, and they suggest the following strategies:
Make the field to be generated a reference to an entity whose only purpose is that the field then becomes an ID and can be generated by the usual strategies. https://stackoverflow.com/a/536102/66686
Use @PrePersist to fill the field before it gets persisted (see the sketch after this list). https://stackoverflow.com/a/35888326/66686
Make it @Generated and generate the value in the database using a trigger or similar. https://stackoverflow.com/a/283603/66686
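A minimal sketch of the @PrePersist approach (the entity and field names here are made up, and the in-memory AtomicLong merely stands in for whatever would really supply the next number, e.g. a database sequence read via a service):

    import javax.persistence.*;
    import java.util.concurrent.atomic.AtomicLong;

    @Entity
    public class Ticket {

        // Stand-in for a real sequence; a real implementation would fetch
        // the next value from the database instead.
        private static final AtomicLong NEXT = new AtomicLong();

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        // The unique, non-primary-key field, e.g. "CTC_00000000001".
        @Column(unique = true, updatable = false)
        private String referenceCode;

        @PrePersist
        void fillReferenceCode() {
            if (referenceCode == null) {
                referenceCode = "CTC_" + String.format("%011d", NEXT.incrementAndGet());
            }
        }
    }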
I've got a DynamoDB table with a timestamp ("creationDate") as a range key. The model uses a Joda DateTime to make it easy to use (compatibility with the rest of the code). To be able to run between queries on this range, I used a numeric type for the attribute in the table and planned to store it as a Java timestamp (milliseconds since epoch). Then I added a marshaller to convert a Joda DateTime to a String representing a long, and vice versa.
The table structure (creation):
void CreateTable()
{
    CreateTableRequest createTableRequest = new CreateTableRequest().withTableName(LinkManager.TABLE_NAME);
    ProvisionedThroughput pt = new ProvisionedThroughput()
            .withReadCapacityUnits(LinkManager.READ_CAPACITY_UNITS)
            .withWriteCapacityUnits(LinkManager.WRITE_CAPACITY_UNITS);
    createTableRequest.setProvisionedThroughput(pt);

    ArrayList<AttributeDefinition> ad = new ArrayList<AttributeDefinition>();
    ad.add(new AttributeDefinition().withAttributeName("creationDate").withAttributeType(ScalarAttributeType.N));
    ad.add(new AttributeDefinition().withAttributeName("contentHash").withAttributeType(ScalarAttributeType.S));
    createTableRequest.setAttributeDefinitions(ad);

    ArrayList<KeySchemaElement> ks = new ArrayList<KeySchemaElement>();
    ks.add(new KeySchemaElement().withAttributeName("contentHash").withKeyType(KeyType.HASH));
    ks.add(new KeySchemaElement().withAttributeName("creationDate").withKeyType(KeyType.RANGE));
    createTableRequest.setKeySchema(ks);

    this.kernel.DDB.createTable(createTableRequest);
}
The model:
@DynamoDBTable(tableName = "Link")
public class Link {
    private String ContentHash;
    private DateTime CreationDate;

    @DynamoDBHashKey(attributeName = "contentHash")
    public String getContentHash() {
        return ContentHash;
    }

    public void setContentHash(String contentHash) {
        ContentHash = contentHash;
    }

    @DynamoDBRangeKey(attributeName = "creationDate")
    @DynamoDBMarshalling(marshallerClass = DateTimeMarshaller.class)
    public DateTime getCreationDate() {
        return CreationDate;
    }

    public void setCreationDate(DateTime creationDate) {
        CreationDate = creationDate;
    }
}
The marshaller:
public class DateTimeMarshaller extends JsonMarshaller<DateTime>
{
    public String marshall(DateTime dt)
    {
        return String.valueOf(dt.getMillis());
    }

    public DateTime unmarshall(String dt)
    {
        long ldt = Long.parseLong(dt);
        return new DateTime(ldt);
    }
}
I get the following error:
Exception in thread "main" com.amazonaws.AmazonServiceException: Type of specified attribute inconsistent with type in table (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: 8aabb703-cb44-4e93-ab47-c527a5aa7d52)
I guess this is because the marshaller returns a String while DynamoDB wants a numeric type, as the attribute type is N. I don't know what people do in this case; I searched for a solution but couldn't find one. I have only tested this on a local DynamoDB instance, which I don't think makes any difference (this is a validation check failing; there's no request even made).
The obvious workaround is to use a long type for the dates in the model and add special getters and setters to work with DateTime. Still, is there a cleaner way? I am sure I'm not the only one using a DateTime range key in a model.
What I would do is re-create the table with the range key as a String itself.
Even if it's going to be populated with long numbers, making it type S will ensure compatibility with the marshaller.
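A sketch of that change against the table-creation code in the question; only the attribute type for creationDate differs:

    // Declare the range key as a String so it matches what the marshaller writes.
    ad.add(new AttributeDefinition()
            .withAttributeName("creationDate")
            .withAttributeType(ScalarAttributeType.S));

One caveat: string keys sort lexicographically, which matches numeric order only while the values have the same number of digits (epoch milliseconds are 13 digits from September 2001 until the year 2286), so zero-padding the marshalled value is worth considering if earlier dates could occur.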
Let's say, for example, I have the following Objectify model:
@Cache
@Entity
public class CompanyViews implements Serializable, Persistence {

    @Id
    private Long id;
    private Date created;
    private Date modified;
    private Long companyId;
    ........
    private Integer counter;
    ........

    @Override
    public void persist() {
        persist(false);
    }

    @Override
    public void persist(Boolean async) {
        ObjectifyService.register(Feedback.class);
        // setup some variables
        setUuid(UUID.randomUUID().toString().toUpperCase());
        setModified(new Date());
        if (getCreated() == null) {
            setCreated(new Date());
        }
        // do the persist
        if (async) {
            ofy().save().entity(this);
        } else {
            ofy().save().entity(this).now();
        }
    }
}
I want to use the counter field to track the number of views, or the number of opens, or basically to count something using an integer field.
What happens now is that for one GAE instance, the following will be called:
A:
CompanyViews views = CompanyViews.findByCompanyId(...);
views.setCounter(views.getCounter() + 1);
views.persist();
and for another instance:
B:
CompanyViews views = CompanyViews.findByCompanyId(...);
views.setCounter(views.getCounter() + 1);
views.persist();
If they both read the counter at the same time or read the counter before the other instance has persisted it, they will overwrite each other.
In MySQL/Postgres you get row-level locking; how does one do a "row-level lock" for Objectify entities on GAE?
You need to use transactions when concurrently updating entities.
Note that since you update the same entity, you will have a limitation of about 1 write/s. To work around that, look into sharded counters.
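A minimal sketch of such a transaction, assuming the model above is registered, loading by its Long id, and a recent Objectify where transact accepts a Runnable (Objectify re-runs the work on contention):

    import static com.googlecode.objectify.ObjectifyService.ofy;

    public class ViewCounter {

        // Read-modify-write inside a transaction, so concurrent increments are
        // retried rather than silently overwriting each other.
        public static void incrementViews(final long companyViewsId) {
            ofy().transact(() -> {
                CompanyViews views = ofy().load()
                        .type(CompanyViews.class)
                        .id(companyViewsId)
                        .now();
                views.setCounter(views.getCounter() + 1);
                ofy().save().entity(views).now();
            });
        }
    }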