Back to Back Retrieval & Storage of Material Item - java

I've built a model for a storage facility using the material handling library. It is essentially a facility where users can store their belongings underground and have the system automatically place them in shelving racks. Later they can retrieve them as needed, and the system transports the associated belonging to a "pick up" room automatically.
I've modelled the racks as a StorageSystem with Storage racks. For typical operations, I use the Retrieve block to extract a belonging and the Store block to insert it into the shelves. However, it's possible that another, unrelated item is in the way and must first be moved to a free location before the requested item can be extracted. In other words, the blocking item needs to be moved from one part of storage to another part of storage.
The cells in the blocked-out section have been deactivated. I tried back-to-back Retrieve and Store blocks where the destination and the pick-up location are the same (i.e. the xyz of the material agent).
Is there a better way to model this as one continuous operation? I'm finding that the item can be "misplaced" (it is no longer in the network) during the transition between the blocks, and the item then floats around trying to find the transporter. The transporter is path-guided and can only travel on the access zone corridor. I think the item floats around because it cannot find a path to the transporter, but the transporter's path is in the access zone corridor beside the storage cell, so I don't understand why it cannot find the path.
I can't post pictures because my reputation is below 10, but the relevant pictures are in my earlier question here:
Link

Related

Choosing a database type for a decentralized calendar project

I am developing a decentralised calendar system. It should save the data on each device and synchronise when both have an internet connection. My first idea was to just use a relational database and try to synchronise the data after connecting. But the theory says something else: Brewer's CAP theorem describes the theory behind it, though I am not sure whether this theorem is perhaps outdated. Under this theorem I have an "AP [Availability/Partition Tolerance] system": "A" because I need the calendar data at any given time, and "P" because it can happen that there is no connection between the devices and the data can't be synchronised. The example databases are CouchDB, Riak, or Cassandra. I have worked only with relational databases and don't know how to go on now. Is it that bad to use a relational database for my project?
This is for my bachelor thesis. I just wanted to start using Postgres, but then I found this theorem...
The whole project is based on Java.
I think the CAP theorem isn't really helpful to your scenario. Distributed systems that deal with partitions need to decide what to do when one part wants to modify the data but can't reach the other part. One solution is to make the write wait - this is giving up "availability" because of the "partition", one of the options presented by the CAP theorem. But there are more useful options. The most useful (highly available) option is to allow both parts to be written independently and to reconcile the conflicts when they can connect again. The question is how to do that, and different distributed systems choose different approaches.
Some systems, like Cassandra or Amazon's DynamoDB, use "last writer wins": when two writes conflict, the last one (according to some synchronized clock) wins. For this approach to make sense you need to be very careful about how you model your data (e.g., watch out for cases where the conflict resolution results in an invalid mixture of two states).
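For illustration, a minimal last-writer-wins register might look like this in plain Java (a sketch only; the class is my own, and a real system would use a properly synchronized clock rather than wall-clock time):
// Last-writer-wins register: on merge, the write with the larger
// timestamp wins. Assumes replica clocks are reasonably synchronized.
class LwwRegister<T> {
    private T value;
    private long timestamp; // time of the winning write

    void set(T newValue, long writeTime) {
        // Last writer wins; ties favor the incoming write.
        if (writeTime >= timestamp) {
            value = newValue;
            timestamp = writeTime;
        }
    }

    // Reconcile with a replica's copy after a partition heals.
    void merge(LwwRegister<T> other) {
        set(other.value, other.timestamp);
    }

    T get() { return value; }
}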
In other systems (and also in Cassandra and DynamoDB, in their "collection" types) writes can still happen independently on different nodes, but there is more sophisticated conflict resolution. A good example is Cassandra's "list": one client can send an update saying "add item X to the list", and another an update saying "add item Y to the list". If these updates happen on different partitions, the conflict is later resolved by adding both X and Y to the list. A data structure such as this list - which allows the content to be modified independently in certain ways on two nodes and then automatically reconciled in a sensible way - is known as a Conflict-free Replicated Data Type (CRDT).
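To make the CRDT idea concrete, here is a sketch of a grow-only set, one of the simplest CRDTs (my own illustration, not Cassandra's implementation): concurrent "add X" and "add Y" on two partitions reconcile to a set containing both.
import java.util.HashSet;
import java.util.Set;

// Grow-only set (G-Set): replicas add elements independently, and
// merging two replicas is just the set union, so nothing is lost.
class GSet<T> {
    private final Set<T> elements = new HashSet<>();

    void add(T element) { elements.add(element); }

    // Reconciliation after a partition: take the union.
    void merge(GSet<T> other) { elements.addAll(other.elements); }

    boolean contains(T element) { return elements.contains(element); }
}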
Finally, another approach was used in Amazon's Dynamo paper (not to be confused with their current DynamoDB service!), known as "vector clocks": when you want to write to an object - e.g., a shopping cart - you first read the current state of the object and get with it a "vector clock", which you can think of as the "version" of the data you got. You then make the modification (e.g., add an item to the shopping cart) and write back the new version while saying which old version you started with. If two of these modifications happen in parallel on different partitions, we later need to reconcile the two updates. The vector clocks allow the system to determine whether one modification is "newer" than the other (in which case there is no conflict), or whether they really do conflict. When they do, application-specific logic is used to reconcile the conflict. In the shopping cart example, if we see that in one partition item A was added to the cart and in the other partition item B was added, the straightforward resolution is to just add both items A and B to the cart.
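A minimal sketch of the comparison step (the Map-of-counters representation is my own): one version is strictly newer only if it dominates the other; otherwise the writes were concurrent and the application must reconcile them.
import java.util.Map;

// Vector clock comparison: clock a "dominates" clock b if a's counter
// for every node is >= b's. If neither dominates, the writes were
// concurrent and application-specific reconciliation is needed.
class VectorClocks {
    static boolean dominates(Map<String, Long> a, Map<String, Long> b) {
        for (Map.Entry<String, Long> e : b.entrySet()) {
            if (a.getOrDefault(e.getKey(), 0L) < e.getValue()) {
                return false;
            }
        }
        return true;
    }

    static boolean conflicts(Map<String, Long> a, Map<String, Long> b) {
        return !dominates(a, b) && !dominates(b, a);
    }
}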
You should probably pick one of these approaches. Just saying "the CAP theorem doesn't let me do this" is usually not an option ;-) In fact, in some ways the problem you're facing is different from that of the systems I mentioned. In those systems, the common case is that every node is always connected (no partition) with very low latency, and they want this common case to be fast. In your case you can probably assume the opposite: the two parts are usually not connected, or if they are connected there is high latency, so conflict resolution becomes the norm rather than the exception. So you need to decide how to do this conflict resolution: what happens if one adds a meeting on one device and a different meeting on the other device (most likely, just keep both as two meetings...)? How do you know that one device modified a pre-existing meeting rather than adding a second meeting (vector clocks? unique meeting ids? etc.), so that the conflict resolution ends up fixing the existing meeting instead of adding a second one? And so on. Once you do that, where you store the data on the two partitions (probably completely different database implementations in the client and server) and which protocol you use to send the updates become implementation details.
There's another issue you'll need to consider: when do we do these reconciliations? In many of the systems I listed above, reconciliation happens on read: if a client wants to read data and we suddenly see two conflicting versions on two reachable nodes, we reconcile. In your calendar application you need a slightly different approach: it is possible that the client will only ever try to read (use) the calendar when not connected, so you need to use the rare opportunities when it is connected to reconcile all the differences. Moreover, you may need to "push" changes - e.g., if the data on the server changed, the client may need to be told "hey, I have some changed data, come and reconcile", so that the end user immediately sees a notification of, say, a new meeting that was added remotely (perhaps by a different user sharing the same calendar). You'll need to figure out how you want to do this. Again, there is no magic solution like "use Cassandra".

Tracking a variable using Google Analytics

I'm extremely new to Google Analytics on Android.
I've searched quite a bit for this, and I'm not sure I've understood it correctly, but here goes:
I want Google Analytics to track a particular variable in my app.
So, for instance, if a variable a has a separate value for every user of the app, is it possible for me to display the average of that variable's values in a Google Analytics dashboard?
As far as I understand, we can do this using Custom Dimensions and Metrics.
I haven't been able to find any tutorial for the same.
I'd be grateful if someone could help me with a tutorial or point me to something other than the developer pages from Google.
Thank You!
UPDATE
Firebase Analytics is now Google's recommended solution for mobile app analytics. It's user- and event-centric and comes with unlimited app event reporting, cross-network attribution, and postbacks.
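For example, logging a custom event with Firebase looks roughly like this (the event name "variable_recorded" and its parameter are made up for illustration):
// Log a custom event; the name and parameter here are hypothetical.
FirebaseAnalytics firebaseAnalytics = FirebaseAnalytics.getInstance(context);
Bundle params = new Bundle();
params.putLong("value", 5);
firebaseAnalytics.logEvent("variable_recorded", params);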
Older Answer
You may use GA Event Tracking
Check this guide, and this one on rate limits, before you try this.
Events are a useful way to collect data about a user's interaction
with interactive components of your app, like button presses or the
use of a particular item in a game.
An event consists of four fields that you can use to describe a user's
interaction with your app content:
Field Name | Type   | Required | Description
Category   | String | Yes      | The event category
Action     | String | Yes      | The event action
Label      | String | No       | The event label
Value      | Long   | No       | The event value
To send an event to Google Analytics, use HitBuilders.EventBuilder and send the hit, as shown in this example:
// Get the tracker.
Tracker t = ((AnalyticsSampleApp) getActivity().getApplication())
        .getTracker(TrackerName.APP_TRACKER);
// Build and send an Event.
t.send(new HitBuilders.EventBuilder()
        .setCategory("Achievement")
        .setAction("Earned")
        .setLabel("5 Dragons Rescued")
        .setValue(1)
        .build());
On the GA console you can then see these events listed with their total event value and their average value (the total value divided by the number of events).
If you want to track users with specific attributes/traits/metadata then custom dimensions can be used to send this type of data to Google Analytics.
See Set up or edit custom dimensions (Help Center) and then update the custom dimension value as follows:
// Get the tracker.
Tracker t = ((AnalyticsSampleApp) getActivity().getApplication())
        .getTracker(TrackerName.APP_TRACKER);
t.setScreenName("Home Screen");
// Send a custom dimension value with a screen view. The index (1) and
// value ("premiumUser") are examples; use your own configured index.
// Note that the value only needs to be sent once.
t.send(new HitBuilders.ScreenViewBuilder()
        .setCustomDimension(1, "premiumUser")
        .build());
It is possible to send additional data to Google Analytics, using either Custom Dimensions or Custom Metrics.
Custom Dimensions are used for labels and identifiers that you will later use to separate your data. For example, you might have a Custom Dimension that tracks log-in status. This would allow you to break down your reports and compare logged-in traffic to not-logged-in. These can contain text; while A/B testing your site you might set up a custom dimension with the options 'alpha' and 'beta'. They can also contain numeric values, such as the time '08:15', or a unique identifier that you've generated (although you should be careful to follow Google's advice here, lest you include PII and risk account deletion: https://developers.google.com/analytics/solutions/crm-integration#user_id).
Custom Metrics are used for numeric variables such as engagement time, or shopping cart value. They are a lot like custom dimensions, but are intended to be compared across dimensions. For example, you could compare the shopping basket value of your Organic users to those who come in via a paid link.
If you wanted to calculate an average, you would also require a calculated metric. This takes two metrics you already have and produces a third. For example, if your site was all about instant engagement and you wanted to track the time before the first click event on each page, you could set up that event click time as a custom metric. But this would only tell you the total; surely more customers are a good thing, but they make that total go up! So you set up a calculated metric that divides this total by the number of page views, giving you a value per page viewed.
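A calculated metric is defined in the GA admin interface with a formula over existing metrics; assuming you named the custom metric "First Click Time", the formula would look something like:
{{First Click Time}} / {{Pageviews}}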
There's a great guide by Simo Ahava about tracking Content Engagement that includes instructions for setting up Custom Metrics and Calculated Metrics.
http://www.simoahava.com/analytics/track-content-engagement-part-2/
However, I should warn you that his guide uses Google Tag Manager, which greatly simplifies the process of adding such customisation to your tags. If you don't want to take that step, you will have to code it manually, as recommended by Google's support: https://support.google.com/analytics/answer/2709828?hl=en

What is the best way to build a compact folder tree?

I'm making a browser for a defined list of files.
I want to compact empty folders (like IDEA-based IDEs usually do).
Originally I have a list of files (I get it from MediaStore):
folder1/folder2/folder3/file1.mp3
folder1/folder2/folder3/file2.mp3
folder1/file3.mp3
And I want my browser to have this structure:
folder1
-folder2/folder3
-file1.mp3
-file2.mp3
-file3.mp3
How I did that:
The first time I get the files from MediaStore, I create a table in a database:
id | name    | parent_id | has_songs
0  | folder1 | -1        | 1
1  | folder2 | 0         | 0
2  | folder3 | 1         | 1
Every time the browser displays folders, it makes a request to the database.
Then I start checking the folders inside (an additional request to the db is needed for every check): if a folder doesn't have songs and has only one child folder, I compact them, then check the next, and the next.
This way, for the example above, if I want to see the 'insides' of folder1, it makes 3 requests to the local db:
1. Get the list of all folders (a request to the db).
2. Check that folder2 has one subfolder and no songs (a request to the db).
3. Check that folder3 has one subfolder and no songs (a request to the db).
1. Is this the best way to implement it?
2. Is it performance-critical to make so many requests to the local db on a user click?
It depends on your environment. :)
In general:
If the file list is relatively small, it is worth considering whether you could keep all the information in memory. If you have enough free memory, this could be the best solution, since it is far faster than reaching the database. I used to create an object that actually represents the directory and file structure; afterwards the view side opened and closed the blocks according to user demands (open all, close all, etc.).
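A minimal sketch of that in-memory approach against the question's data (class and method names are mine): build the tree once from the MediaStore paths, then collapse every folder that has no songs and exactly one child folder.
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Builds a folder tree from paths like "folder1/folder2/folder3/file1.mp3"
// and compacts chains of song-less single-child folders into one node,
// producing display names like "folder2/folder3".
class FolderNode {
    String name;
    final Map<String, FolderNode> children = new LinkedHashMap<>();
    final List<String> songs = new ArrayList<>();

    FolderNode(String name) { this.name = name; }

    void insert(String path) {
        String[] parts = path.split("/");
        FolderNode node = this;
        for (int i = 0; i < parts.length - 1; i++) {
            node = node.children.computeIfAbsent(parts[i], FolderNode::new);
        }
        node.songs.add(parts[parts.length - 1]);
    }

    void compact() {
        for (FolderNode child : children.values()) {
            // Merge a song-less folder with its only subfolder, repeatedly.
            while (child.songs.isEmpty() && child.children.size() == 1) {
                FolderNode only = child.children.values().iterator().next();
                child.name = child.name + "/" + only.name; // display name only
                child.children.clear();
                child.children.putAll(only.children);
                child.songs.addAll(only.songs);
            }
            child.compact();
        }
    }
}
Insert every path once at startup and call compact() on the root; the browser can then walk the tree without touching the database.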
Is that the best way to implement this?
The environment is the key. As I mentioned, keeping everything in memory could be a good solution. If you are using a remote (or more advanced [SQLite is able to do it]) database server, it is also worth considering creating a view which contains the child count (or simply changing the generated table to introduce an "inside_dir_count" field). This way you could fetch entire paths.
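As a sketch of that suggestion on Android (the "folders" table name and the open SQLiteDatabase db are assumptions based on the question):
// A view exposing each folder's child count, so the "no songs, exactly
// one subfolder" check needs no per-folder round trips.
db.execSQL(
    "CREATE VIEW IF NOT EXISTS folder_info AS " +
    "SELECT f.id, f.name, f.parent_id, f.has_songs, " +
    "       (SELECT COUNT(*) FROM folders c WHERE c.parent_id = f.id) AS child_count " +
    "FROM folders f");
// One query now answers what previously took a request per folder.
Cursor cursor = db.rawQuery(
    "SELECT * FROM folder_info WHERE parent_id = ?", new String[] { "0" });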
Is it performance critical to make so many requests to the local db on user click?
The requests to the database you presented here are relatively small, and if they really only happen on a user click, it is a good solution; the local database you mentioned should respond in little to no time. This is not something that will happen 1000 times per second, so in my personal view it is a good solution, though there is room for improvement, as with everything.

Google Places API - saving place_id and violation of terms and conditions

I want to build an app which shows places around the user, using Google Places, based on user interests. As mentioned here:
Place IDs are exempt from the caching restrictions stated in Section
10.5.d of the Google Maps APIs Terms of Service. You can therefore store place ID values indefinitely.
So, can I save place_id values in a cloud database and perform analytics operations over them? For example, if I gather the place_ids added to each user's favorite-places table, can I find out which place_ids are added to favorites most often? Or could I show something like 'Trending Places' in the app, based on the gathered place_ids?
Will it violate the terms and conditions? I read the whole terms page but couldn't find the answer.
Can anyone help me out? Thanks.
Yes, you can 100% store the place_id indefinitely and reuse it.
See Referencing a Place with a Place ID.
Please note one thing, though:
A single place ID refers to only one place, but a place can have
multiple place IDs
The terms and conditions are mostly self-explanatory; the part relevant to your requirement is quoted below and is worth reading carefully. Storing a result so that you don't have to call the service again for the same query a user already made - i.e., to save network calls - is acceptable:
No caching or storage: You will not pre-fetch, cache, index, or store any Content to be used outside the Service, except that you may store limited amounts of Content solely for the purpose of improving the performance of your Maps API Implementation due to network latency (and not for the purpose of preventing Google from accurately tracking usage), and only if such storage:
1) is temporary (and in no event more than 30 calendar days);
2) is secure;
3) does not manipulate or aggregate any part of the Content or Service; and
4) does not modify attribution in any way.
Go through Section 10.5 Intellectual Property Restrictions, subsection (b).
You'll need to contact Google to get a 100% answer.
That being said, from my experience it looks like the clause you included is intended exactly for the kind of thing you want to do.
Again, I want to reiterate that contacting Google directly is something you should do if you still have concerns.
You can store place ID values indefinitely.
Just what part of
"You can therefore store place ID values indefinitely."
don't you understand?
Indefinitely requires a server.

Resources vs. SQLite

I'm trying to analyze the trade-offs between using SQLite vs. using resources for an app that needs to ship with a fairly sizeable amount of text (several books). I've read this post on raw XML files vs. SQLite and this one on XML resources vs. SQLite. Both of those, however, seem to be comparing SQLite to parsing XML at run time. I don't know if the same issues apply to using string and int resource arrays. I actually have a number of unknowns and I'd appreciate any insights others can offer.
Data details: about 40 books; three languages per book; average book length 25 chapters; average chapter length 25 paragraphs; about 75,000 paragraphs total. Text is stored by paragraph; no finer granularity needed. For each language, the app's logical view of the text is as a single array of paragraphs spanning all the books. There are also "table of contents" (TOC) data down to the paragraph level. All the data are strictly read-only. I need to support two query types: 1) retrieve the text for a paragraph or range of paragraphs in a specified language; 2) given a paragraph number, determine the book, chapter, and paragraph offset in the chapter. I don't need to use any of SQLite's string functions.
My analysis so far:
SQLite: Create an SQLite database off-line, package it as a raw resource or asset, and copy it to the database location when the app is run for the first time (and/or upgraded). I have implemented a prototype database for this with half a dozen tables.
Can use SQL to query the database, so I don't need to code any search algorithms.
I know it can handle this much data.
Requires several SQL range queries to answer type 2 queries.
Requires twice the space: once in the .apk file and again when installed into the app's db area.
Android's SQLite implementation requires external storage (SD card), so the app won't work without one. Amazon's guidelines for Kindle Fire apps state that apps cannot require an SD card, so going this way might rule out Kindle Fire compatibility. (Bad!)
Resources: Create a collection of xml array resource files off-line and copy them to the project's res/values folder. Text would be partitioned into many string arrays: one array per chapter per book. There would be about 3,000 arrays. Indexes would be implemented as int arrays. For each book, the index data would be shared across languages. I'd probably also need to generate some typed array resources to provide an index into the generated resource IDs. I expect that the index arrays are small enough to load entirely into memory at app startup.
Type 1 queries involve loading the correct string array(s) and accessing array elements. Type 2 queries involve a binary search of the (already loaded) index data (see the sketch after this list).
Don't know whether the resources system in Android can handle that many resource arrays.
Don't know what the performance would be compared to using SQLite.
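For the type 2 queries under this scheme, the binary search is only a few lines. A sketch (the chapterStarts array is an assumption: the global number of each chapter's first paragraph, all books concatenated in order and starting at 0, with a parallel array resolving the book the same way):
// Maps a global paragraph number to its chapter index.
static int chapterFor(int[] chapterStarts, int paragraph) {
    int idx = java.util.Arrays.binarySearch(chapterStarts, paragraph);
    // An exact hit is a chapter's first paragraph; otherwise binarySearch
    // returns (-(insertion point) - 1), so the paragraph belongs to the
    // chapter just before the insertion point.
    return idx >= 0 ? idx : -idx - 2;
}
The paragraph offset within the chapter is then paragraph - chapterStarts[chapter], with no database involved.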
I suppose a hybrid approach is also possible: store the TOC data one way and the text itself in another.
Again, I'd appreciate any thoughts or insights that would help with this analysis.
One tangential point...
Amazon's guidelines for Kindle Fire apps state that apps cannot require an SD card, so going this way might rule out Kindle Fire compatibility. (Bad!)
The current version actually recommends
that you deploy a smaller APK that downloads and installs quickly, and then upon first launch downloads additional resources and saves them on a local file system.
for larger apps instead of packaging it altogether. Additionally, what they forbid seems to be [emphasis mine]
copying, recording, downloading, storing, or similar actions of any type of video or audio content onto the Amazon Fire TV or Fire TV Stick device, any SD memory card or any connected external storage (where applicable).
So that restriction seems obsolete now.
