How can I manage the availability of warehouses? - java

I'm working on a tool for humanitarian logistics. In this model I have some lorries which pick up items to support people affected by an earthquake and, after picking them up, travel to the earthquake epicenter to drop those items off. I need to manage the availability of these warehouses: for example, if a warehouse has 5 items available and a lorry has a transport capacity of 2, the availability of that warehouse has to become 3. Obviously I need to do this for every warehouse in my supply chain. As you can see in the picture I've uploaded, I've added a parameter (availability) to the warehouse class (named Magazzini).
This is the algorithm that manages the lorries' movement, in which I need to add the command that changes the availability.
List<Magazzini> subsetlist = findAll(main.magazzinis, w -> w.capacita > 0);
List<Magazzini> sortmag = new ArrayList<Magazzini>();
List<Double> distance = new ArrayList<Double>();
sortmag = subsetlist;
System.out.println(sortmag);
for (Magazzini m : subsetlist) {
    m.distance = distanceTo(m);
}
sortmag = sortAscending(sortmag, p -> p.distance);
//main.magazzinis.cap = main.magazzinis.cap - 2;
moveTo(sortmag.get(0));
System.out.println(sortmag);
partenza = time();
I wrote a possible command to do it (the commented-out line above), but it doesn't work. How can I fix it?
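For reference, a minimal sketch of one way the decrement could be written once the nearest warehouse has been chosen; the field name capacita follows the code above, and the capacity of 2 is just the example value from the question:
// Sketch: decrement the stock of the chosen warehouse only, not the whole population.
Magazzini nearest = sortmag.get(0);           // nearest warehouse after the sort
int load = Math.min(2, nearest.capacita);     // never pick more than is available
nearest.capacita = nearest.capacita - load;   // e.g. 5 - 2 = 3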

Related

DeepLearning4j NN for prediction function doesn't converge

I'm trying to do a simple prediction in DL4J (I'm going to use it later for a large dataset with n features), but no matter what I do my network just doesn't want to learn and behaves very strangely. Of course I studied all the tutorials and did the same steps shown in the dl4j repo, but somehow it doesn't work for me.
For dummy feature data I use:
double[val][x] features, where val = linspace(-10,10)... and x = Math.sqrt(Math.abs(val)) * val
My labels are: double[y] labels, where y = Math.sin(val) / val
DataSetIterator dataset_train_iter = getTrainingData(x_features, y_outputs_train, batchSize, rnd);
DataSetIterator dataset_test_iter = getTrainingData(x_features_test, y_outputs_test, batchSize, rnd);
// Normalize data, including labels (fitLabel=true)
NormalizerMinMaxScaler normalizer = new NormalizerMinMaxScaler(0, 1);
normalizer.fitLabel(false);
normalizer.fit(dataset_train_iter);
normalizer.fit(dataset_test_iter);
// Use the .transform function only if you are working with a small dataset and no iterator
normalizer.transform(dataset_train_iter.next());
normalizer.transform(dataset_test_iter.next());
dataset_train_iter.setPreProcessor(normalizer);
dataset_test_iter.setPreProcessor(normalizer);
//DataSet setNormal = dataset.next();
//Create the network
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
.seed(seed)
.optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
.weightInit(WeightInit.XAVIER)
//.miniBatch(true)
//.l2(1e-4)
//.activation(Activation.TANH)
.updater(new Nesterovs(0.1,0.3))
.list()
.layer(new DenseLayer.Builder().nIn(numInputs).nOut(20).activation(Activation.TANH)
.build())
.layer(new DenseLayer.Builder().nIn(20).nOut(10).activation(Activation.TANH)
.build())
.layer( new DenseLayer.Builder().nIn(10).nOut(6).activation(Activation.TANH)
.build())
.layer(new OutputLayer.Builder(LossFunctions.LossFunction.MSE)
.activation(Activation.IDENTITY)
.nIn(6).nOut(1).build())
.build();
//Train and fit network
final MultiLayerNetwork net = new MultiLayerNetwork(conf);
net.init();
net.setListeners(new ScoreIterationListener(100));
//Train the network on the full data set, and evaluate it periodically
final INDArray[] networkPredictions = new INDArray[nEpochs / plotFrequency];
for (int i = 0; i < nEpochs; i++) {
//fit() already performs backpropagation. See the DL4J release notes:
// https://deeplearning4j.konduit.ai/release-notes/1.0.0-beta3
net.fit(dataset_train_iter);
dataset_train_iter.reset();
if((i+1) % plotFrequency == 0) networkPredictions[i/ plotFrequency] = net.output(x_features, false);
}
// evaluate and plot
dataset_test_iter.reset();
dataset_train_iter.reset();
INDArray predicted = net.output(dataset_test_iter, false);
System.out.println("PREDICTED ARRAY " + predicted);
INDArray output_train = net.output(dataset_train_iter, false);
//Revert data back to original values for plotting
// normalizer.revertLabels(predicted);
normalizer.revertLabels(output_train);
normalizer.revertLabels(predicted);
PlotUtil.plot(om, y_outputs_train, networkPredictions);
My output then seems very weird (see picture below), even when I use minibatches (1, 20, 100 samples per batch), change the number of epochs, or add hidden nodes and hidden layers (I tried adding 1000 nodes and 5 layers). The network either outputs very stochastic values or one constant y. I just can't figure out what is going wrong here. Why doesn't the network even come close to the training function?
Another question: what does iter.reset() do exactly? Does the iterator move the pointer back to the 0th batch in the DataSetIterator?
A pretty common problem with toy problems like this is dl4j's assumption of minibatches (which 99% of problems tend to be). You aren't actually doing minibatch learning, which defeats the point of using an iterator: an iterator is meant to iterate through slices of a dataset, not an in-memory small dataset. A small recommendation is to just use the normal DataSet API (which is what's returned from dataset.next()).
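For instance, a minimal sketch of the in-memory route, assuming the whole toy dataset fits in a single DataSet and reusing the variable names from the question, might look like this:
// Sketch only: fit on an in-memory DataSet instead of looping over an iterator.
// Assumes dataset_train_iter returns the whole toy dataset as one batch.
DataSet trainData = dataset_train_iter.next();   // materialize the full dataset
normalizer.fit(trainData);                       // fit the normalizer on the raw values
normalizer.transform(trainData);                 // normalize in place
for (int i = 0; i < nEpochs; i++) {
    net.fit(trainData);                          // plain DataSet API, no iterator involved
}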
Ensure you turn off the minibatch penalty dl4j assigns to all losses with:
.minibatch(false) - you can see that configuration here:
https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/conf/NeuralNetConfiguration.java#L434
A unit test testing this behavior can be found here:
https://github.com/eclipse/deeplearning4j/blob/b4047006ac8175df295c2f3c008e7601437ea4dc/deeplearning4j/deeplearning4j-core/src/test/java/org/deeplearning4j/gradientcheck/GradientCheckTests.java#L94
For posterity, here is the relevant configuration:
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder().miniBatch(false)
.dataType(DataType.DOUBLE)
.optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT).updater(new NoOp())
.list()
.layer(0,
new DenseLayer.Builder().nIn(4).nOut(3)
.dist(new NormalDistribution(0, 1))
.activation(Activation.TANH)
.build())
.layer(1, new OutputLayer.Builder(LossFunction.MCXENT)
.activation(Activation.SOFTMAX).nIn(3).nOut(3).build())
.build();
You'll notice two things: first, minibatch is set to false, and second, the data type is configured as double. You are also welcome to try that for your problem.
To save memory, dl4j also tends to assume float as the default data type.
This is a reasonable assumption when working on larger problems, but may not work well for toy problems.
For reference, you can find the application of the minibatch math here:
https://github.com/eclipse/deeplearning4j/blob/fc735d30023981ebbb0fafa55ea9520ec44292e0/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/updater/BaseMultiLayerUpdater.java#L332
This affects the gradient updates.
The score penalty can be found in the output layer:
https://github.com/eclipse/deeplearning4j/blob/master/deeplearning4j/deeplearning4j-nn/src/main/java/org/deeplearning4j/nn/layers/BaseOutputLayer.java#L84
Essentially, both of these automatically penalize the loss update for your dataset reflected in both the loss and the gradient updates.

Training OpenNLP document classification

I'm trying to use OpenNLP to classify invoices. Based on its description, each invoice will be grouped into one of two classes. I have built a training file with 20K descriptions and tagged each one with the correct class.
The training data looks like (first column is a code, that I use as class, and the second column is the invoice description):
85171231 IPHONE 5S CINZA ESPACIAL 16GB (ME432BZA)
85171231 Galaxy S6 SM-G920I
85171231 motorola - MOTO G5 XT1672
00000000 MOTONETA ITALIKA AT110
00000000 CJ BOX UNIBOX MOLA 138X57X188 VINHO
Using DocumentCategorizer from OpenNLP, I achieved 98.5% accuracy. Then, trying to improve it, I took the wrongly categorized documents and used them to expand the training data.
For instance, when I first ran it, "MOTONETA ITALIKA AT110" was classified as "85171231". That's OK, since in the first run "MOTONETA ITALIKA AT110" hadn't been classified. So I taught the classifier explicitly by putting "MOTONETA ITALIKA AT110" tagged as "00000000".
But, running it again, OpenNLP insists on classifying it as "85171231" even though the training data contains an explicit mapping to "00000000".
So my question is: am I teaching OpenNLP right? How do I improve its accuracy?
The code that I'm using is:
MarkableFileInputStreamFactory dataIn = new MarkableFileInputStreamFactory(new File("data.train"));
ObjectStream<String> lineStream = new PlainTextByLineStream(dataIn, StandardCharsets.UTF_8);
ObjectStream<DocumentSample> sampleStream = new DocumentSampleStream(lineStream);
TrainingParameters params = new TrainingParameters();
params.put(TrainingParameters.ITERATIONS_PARAM, "100");
params.put(TrainingParameters.CUTOFF_PARAM, "0");
DoccatModel model = DocumentCategorizerME.train("pt", sampleStream, params, new DoccatFactory());
DocumentCategorizer doccat = new DocumentCategorizerME(model);
double[] aProbs = doccat.categorize("MOTONETA ITALIKA AT110".replaceAll("[^A-Za-z0-9 ]", " ").split(" "));
doccat.getBestCategory(aProbs);
By default, DocumentCategorizer uses a bag of words. This means that the sequence of terms is not taken into account.
If any term of MOTONETA ITALIKA AT110 occurs with high frequency in the group 85171231, the classifier will be inclined to use that group.
You have a few alternatives:
You can add more variants of MOTONETA ITALIKA AT110 to the group 00000000;
You can try to change the feature generator.
The second option would be to change the creation of your model, like this:
int minNgramSize = 2;
int maxNgramSize = 3;
DoccatFactory customFactory = new DoccatFactory(
new FeatureGenerator[]{
new BagOfWordsFeatureGenerator(),
new NGramFeatureGenerator(minNgramSize, maxNgramSize)
}
);
DoccatModel model = DocumentCategorizerME.train("pt", sampleStream, params, customFactory);
You can play with the feature generator by removing the BagOfWordsFeatureGenerator and changing the min and max ngram size.

Arraylist of lat/long points of interest?

I've made an app that implements augmented reality based on POIs and have all the functionality working for one POI, but I would now like to be able to put in multiple points. Can anyone give me advice on how to do this? Can I create an array of POIs? I've posted my relevant code below but don't really know where to go from here.
private void setAugmentedRealityPoint() {
    homePoi = new AugmentedPOI(
        "Home",
        "Latitude, longitude",
        28.306802, -81.601358
    );
}
This is how it's currently set, and I then go on to use it in other areas as shown below:
public double calculateAngle() {
double dX = homePoi.getPoiLatitude() - myLatitude;
double dY = homePoi.getPoiLongitude() - myLongitude;
}
and here:
private boolean isWithinDistance(double myLatitude, double myLongitude) {
    Location my1 = new Location("One");
    my1.setLatitude(myLatitude);
    my1.setLongitude(myLongitude);
    Location target = new Location("Two");
    target.setLatitude(homePoi.getPoiLatitude());
    target.setLongitude(homePoi.getPoiLongitude());
    double range = my1.distanceTo(target);
    double zone = 20;
    return range < zone;
}
Any help would be appreciated.
Using a List would be a smart idea. You could add all entries into it in code, or you could pull them in from a JSON file. When you're rendering them, you could check if they are in range.
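For example, a minimal sketch, assuming the AugmentedPOI class from the question and an isWithinDistance variant that takes the POI as a parameter (the second POI's coordinates are made-up example values):
// Sketch: hold every POI in a list instead of a single homePoi field.
List<AugmentedPOI> pois = new ArrayList<>();
pois.add(new AugmentedPOI("Home", "Latitude, longitude", 28.306802, -81.601358));
pois.add(new AugmentedPOI("Work", "Latitude, longitude", 28.538336, -81.379234));

// When rendering, check each POI against the user's position.
for (AugmentedPOI poi : pois) {
    if (isWithinDistance(poi, myLatitude, myLongitude)) {
        // draw this POI on the camera overlay
    }
}
isWithinDistance would then read the latitude/longitude of the POI passed in instead of always using the homePoi field.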
If you have a lot of these POIs, you should divide them into smaller and smaller regions, and only load what you need. For example, structure them like this:
- CountryA
+ County 1
* POI
* POI
- CountryB
+ County 1
* POI
* POI
+ County 2
* POI
Get the country and county of the user, and only load what you really need. I assume this is a multiplayer game, so I'll share some of my code.
On the server side, I have 3 objects: Country, County and POI.
First I discover all countries on the disk, and make an object for it. Inside my country object I have a list for all counties, and inside my County object I have a list of POIs. When a player joins, they send a packet with their Country and County, and I can select the appropriate POIs for them. Storing them in smaller regions is essential, or your server will have a hard time if you go through all of the POIs for every player.
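In outline, that server-side structure is just nested lists; the class shapes below are only a sketch of what the linked code does, with illustrative field names:
// Sketch of the nested region structure described above.
class POI { String name; double lat, lon; }
class County { String name; List<POI> pois = new ArrayList<>(); }
class Country { String name; List<County> counties = new ArrayList<>(); }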
Here is my method for discovering data: Server.java#L311-L385
Code for selecting POIs for a player: Server.java#L139-L181
And how you can render it: PlayScreen.java#L209-L268
You need to port it to your own app, and I'm probably horrible at explaining, but I hope you got something out of it.

Which data structures to use when storing multiple entities with multiple query criteria?

There is a storage unit which has a capacity of N items. Initially this unit is empty.
The space is arranged in a linear manner, i.e. one beside the other in a line.
Each storage space has a number, increasing till N.
When someone drops off their package, it is assigned the first available space. Packages can also be picked up; in that case the space becomes vacant.
Example: if the total capacity is 4 and spaces 1 and 2 are full, the third person to come in will be assigned space 3. If 1, 2 and 3 were full and the 2nd space becomes vacant, the next person to come will be assigned space 2.
The packages they drop have 2 unique properties, assigned for immediate identification: first, they are color-coded based on their content, and second, they are assigned a unique identification number (UIN).
What we want is to query the system:
When the input is color, show all the UIN associated with this color.
When the input is color, show all the numbers where these packages are placed(storage space number).
Show where an item with a given UIN is placed, i.e. storage space number.
I would like to know which data structures to use in this case so that the system works as efficiently as possible.
I am not told which of these operations is most frequent, which means I will have to optimise for all the cases.
Please note that even though the queries do not ask for the storage space number directly, when an item is removed from the store it is removed by its storage space number.
You have mentioned three queries that you want to make. Let's handle them one by one.
I cannot think of a single data structure that can help you with all three queries at the same time. So I'm going to give an answer that uses three data structures, and you will have to maintain the state of all three to keep the application running properly. Consider that as the cost of getting respectably fast performance from your application for the desired functionality.
When the input is color, show all the UIN associated with this color.
Use a HashMap that maps Color to a Set of UIN. Whenever an item:
is added - see if the color is present in the HashMap. If yes, add this UIN to the set; otherwise create a new entry with a new set and then add the UIN.
is removed - find the set for this color and remove this UIN from the set. If the set is now empty, you may remove the entry altogether.
When the input is color, show all the numbers where these packages are placed.
Maintain a HashMap that maps UIN to the number where an incoming package is placed. From the HashMap that we created in the previous case, you can get the list of all UINs associated with the given Color. Then using this HashMap you can get the number for each UIN which is present in the set for that Color.
So now, when a package is to be added, you will have to add the entry to previous HashMap in the specific Color bucket and to this HashMap as well. On removing, you will have to .Remove() the entry from here.
Finally,
Show where an item with a given UIN is placed.
If you have done the previous, you already have the HashMap mapping UINs to numbers. This problem is only a sub-problem of the previous one.
The third DS, as I mentioned at the top, will be a min-heap of ints. The heap will be initialized with the first N integers at the start. Then, as packages come in, the heap will be polled; the number returned represents the storage space where the package is to be put. If the storage unit is full, the heap will be empty. Whenever a package is removed, its number is added back to the heap. Since it is a min-heap, the minimum number will bubble up to the top, satisfying your case that when 4 and 2 are empty, the next space to be filled will be 2.
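A compact sketch of how these three structures could be wired together; the class and field names here are illustrative, not prescribed by the answer:
import java.util.*;

// Sketch of the three structures described above.
class PackageStore {
    private final Map<String, Set<Long>> uinsByColor = new HashMap<>();     // query 1: color -> UINs
    private final Map<Long, Integer> slotByUin = new HashMap<>();           // queries 2 & 3: UIN -> slot
    private final PriorityQueue<Integer> freeSlots = new PriorityQueue<>(); // min-heap of vacant slots

    PackageStore(int capacity) {
        for (int i = 1; i <= capacity; i++) freeSlots.add(i);               // all N slots start vacant
    }

    int add(long uin, String color) {
        int slot = freeSlots.poll();                                        // smallest free slot number
        slotByUin.put(uin, slot);
        uinsByColor.computeIfAbsent(color, c -> new HashSet<>()).add(uin);
        return slot;
    }

    void remove(long uin, String color) {
        Integer slot = slotByUin.remove(uin);
        Set<Long> uins = uinsByColor.get(color);
        if (uins != null) uins.remove(uin);
        if (slot != null) freeSlots.add(slot);                              // the slot becomes vacant again
    }
}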
Let's do a Big O analysis of this solution for completion.
Time for initialization: this setup takes O(N), because we have to initialize a heap of N. The other two HashMaps start empty and therefore incur no time cost.
Time for adding a package: this includes the time to get a number and then make the appropriate entries in the HashMaps. Getting a number from the heap takes O(log N) at most; adding entries to the HashMaps is O(1). Hence a worst-case overall time of O(log N).
Time for removing a package: also O(log N) at worst, because removing from the HashMaps is only O(1), while adding the freed number back to the min-heap is upper-bounded by O(log N).
This smells of homework or really bad management.
Either way, I have decided to do a version of this where you care most about query speed but don't care about memory or a little extra overhead to inserts and deletes. That's not to say that I think that I'm going to be burning memory like crazy or taking forever to insert and delete, just that I'm focusing most on queries.
Tl;DR - to solve your problem, I use a PriorityQueue, an Array, a HashMap, and an ArrayListMultimap (from guava, a common external library), each one to solve a different problem.
The following section is working code that walks through a few simple inserts, queries, and deletes. This next bit isn't actually Java, since I chopped out most of the imports, class declaration, etc. Also, it references another class called 'Packg'. That's just a simple data structure which you should be able to figure out just from the calls made to it.
Explanation is below the code
import com.google.common.collect.ArrayListMultimap;
private PriorityQueue<Integer> openSlots;
private Packg[] currentPackages;
Map<Long, Packg> currentPackageMap;
private ArrayListMultimap<String, Packg> currentColorMap;
private Object $outsideCall;
public CrazyDataStructure(int howManyPackagesPossible) {
$outsideCall = new Object();
this.currentPackages = new Packg[howManyPackagesPossible];
openSlots = new PriorityQueue<>();
IntStream.range(0, howManyPackagesPossible).forEach(i -> openSlots.add(i));//populate the open slots priority queue
currentPackageMap = new HashMap<>();
currentColorMap = ArrayListMultimap.create();
}
/*
* args[0] = integer, maximum # of packages
*/
public static void main(String[] args)
{
int howManyPackagesPossible = Integer.parseInt(args[0]);
CrazyDataStructure cds = new CrazyDataStructure(howManyPackagesPossible);
cds.addPackage(new Packg(12345, "blue"));
cds.addPackage(new Packg(12346, "yellow"));
cds.addPackage(new Packg(12347, "orange"));
cds.addPackage(new Packg(12348, "blue"));
System.out.println(cds.getSlotsForColor("blue"));//should be a list of {0,3}
System.out.println(cds.getSlotForUIN(12346));//should be 1 (0-indexed, remember)
System.out.println(cds.getSlotsForColor("orange"));//should be a list of {2}
System.out.println(cds.removePackage(2));//should be the orange one
cds.addPackage(new Packg(12349, "green"));
System.out.println(cds.getSlotForUIN(12349));//should be 2, since that's open
}
public int addPackage(Packg packg)
{
synchronized($outsideCall)
{
int result = openSlots.poll();
packg.setSlot(result);
currentPackages[result] = packg;
currentPackageMap.put(packg.getUIN(), packg);
currentColorMap.put(packg.getColor(), packg);
return result;
}
}
public Packg removePackage(int slot)
{
synchronized($outsideCall)
{
if(currentPackages[slot] == null)
return null;
else
{
Packg packg = currentPackages[slot];
currentColorMap.remove(packg.getColor(), packg);
currentPackageMap.remove(packg.getUIN());
currentPackages[slot] = null;
openSlots.add(slot);//return slot to priority queue
return packg;
}
}
}
public List<Packg> getUINsForColor(String color)
{
synchronized($outsideCall)
{
return currentColorMap.get(color);
}
}
public List<Integer> getSlotsForColor(String color)
{
synchronized($outsideCall)
{
return currentColorMap.get(color).stream().map(packg -> packg.getSlot()).collect(Collectors.toList());
}
}
public int getSlotForUIN(long uin)
{
synchronized($outsideCall)
{
if(currentPackageMap.containsKey(uin))
return currentPackageMap.get(uin).getSlot();
else
return -1;
}
}
I use 4 different data structures in my class.
PriorityQueue I use the priority queue to keep track of all the open slots. Inserts and removals (poll) are O(log n), so that shouldn't be too bad. Memory-wise, it's not particularly efficient, but it's also linear, so that won't be too bad.
Array I use a regular Array to track by slot #. This is linear for memory, and constant for insert and delete. If you needed more flexibility in the number of slots you could have, you might have to switch this out for an ArrayList or something, but then you'd have to find a better way to keep track of 'empty' slots.
HashMap ah, the HashMap, the golden child of BigO complexity. In return for some memory overhead and an annoying capital letter 'M', it's an awesome data structure. Insertions are reasonable, and queries are constant. I use it to map between the UIDs and the slot for a Packg.
ArrayListMultimap the only data structure I use that's not plain Java. This one comes from Guava (Google, basically), and it's just a nice little shortcut to writing your own Map of Lists. Also, it plays nicely with nulls, and that's a bonus to me. This one is probably the least efficient of all the data structures, but it's also the one that handles the hardest task, so I can't blame it. It allows us to grab the list of Packg's by color in constant time relative to the number of slots and in linear time relative to the number of Packg objects it returns.
When you have this many data structures, it makes inserts and deletes a little cumbersome, but those methods should still be pretty straight-forward. If some parts of the code don't make sense, I'll be happy to explain more (by adding comments in the code), but I think it should be mostly fine as-is.
Query 3: Use a hash map; the key is the UIN and the value is an object (storage space number, color) (plus any other package information). Cost is O(1) to query, insert or delete. Space is O(k), where k is the current number of UINs.
Queries 1 and 2: Use a hash map plus multiple linked lists.
Hash map: the key is the color and the value is a pointer (or reference, in Java) to the linked list of corresponding UINs for that color.
Each linked list contains UINs.
For query 1: ask the hash map, then return the corresponding linked list. Cost is O(k1), where k1 is the number of UINs for the queried color. Space is O(m+k1), where m is the number of unique colors.
For query 2: do query 1, then apply query 3 to each UIN. Cost is O(k1), where k1 is the number of UINs for the queried color. Space is O(m+k1), where m is the number of unique colors.
To insert: given color, number and UIN, insert an object (number, color) into the hash map of query 3; then hash(color) to reach the corresponding linked list and insert the UIN.
To delete: given a UIN, ask query 3 for the color, then use query 1's structure to delete the UIN from the linked list. Then delete the UIN from the hash map of query 3.
Bonus: managing the storage space itself is the same situation as memory management in an OS.
This is very simple to do with a segment tree.
Store the slot number at each position and query the minimum; it will match the first vacant place. When you capture a place, just overwrite its value (the example below uses 0, while the implementation uses int.MaxValue so that the min query always returns the first vacant slot).
Package information can be stored in a separate array.
Initially it has the following values:
1 2 3 4
After capturing one it looks like this:
0 2 3 4
After capturing one more:
0 0 3 4
After capturing one more:
0 0 0 4
After freeing slot 2:
0 2 0 4
After capturing one more:
0 0 0 4
and so on.
If you have a segment tree that fetches the minimum on a range, each operation can be done in O(log N).
Here is my implementation in C#; it is easy to translate to C++ or Java.
public class SegmentTree
{
private int Mid;
private int[] t;
public SegmentTree(int capacity)
{
this.Mid = 1;
while (Mid <= capacity) Mid *= 2;
this.t = new int[Mid + Mid];
for (int i = Mid; i < this.t.Length; i++) this.t[i] = int.MaxValue;
for (int i = 1; i <= capacity; i++) this.t[Mid + i] = i;
for (int i = Mid - 1; i > 0; i--) t[i] = Math.Min(t[i + i], t[i + i + 1]);
}
public int Capture()
{
int answer = this.t[1];
if (answer == int.MaxValue)
{
throw new Exception("Empty space not found.");
}
this.Update(answer, int.MaxValue);
return answer;
}
public void Erase(int index)
{
this.Update(index, index);
}
private void Update(int i, int value)
{
t[i + Mid] = value;
for (i = (i + Mid) >> 1; i >= 1; i = (i >> 1))
t[i] = Math.Min(t[i + i], t[i + i + 1]);
}
}
Here example of usages:
int n = 4;
var st = new SegmentTree(n);
Console.WriteLine(st.Capture());
Console.WriteLine(st.Capture());
Console.WriteLine(st.Capture());
st.Erase(2);
Console.WriteLine(st.Capture());
Console.WriteLine(st.Capture());
For getting the storage space number I used a min-heap approach (PriorityQueue). Both removal and insertion work in O(log n) time.
I used 2 BiMaps, self-created data structures, for storing the mapping between UIN, color and storage space number. These BiMaps internally use a HashMap and an array of size N.
In the first BiMap (BiMap1), a HashMap<Color, Set<StorageSpace>> stores the mapping of color to the set of storage spaces, and a String array String[] colorSpace stores the color at the storage space index.
In the second BiMap (BiMap2), a HashMap<UIN, StorageSpace> stores the mapping between UIN and storage space, and a String array String[] uinSpace stores the UIN at the storage space index.
Querying is straightforward with this approach:
When the input is color, show all the UIN associated with this color.
Get the set of storage spaces from BiMap1; for these spaces, use the array in BiMap2 to get the corresponding UINs.
When the input is color, show all the numbers where these packages are placed(storage space number). Use BiMap1's HashMap to get the list.
Show where an item with a given UIN is placed, i.e. storage space number. Use BiMap2 to get the values from the HashMap.
Now when we are given a storage space to remove, both BiMaps have to be updated. In BiMap1, get the color from the array, get the corresponding set, and remove the space number from this set. In BiMap2, get the UIN from the array, remove it there and also remove it from the HashMap.
For both BiMaps the removal and insert operations are O(1), and the min-heap works in O(log n); hence the total time complexity is O(log N).
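As an illustration, a minimal sketch of the second BiMap (UIN to storage space); the class and method names are made up, not taken from the answer, and spaces are 0-indexed here:
import java.util.HashMap;
import java.util.Map;

// Sketch of a self-made BiMap: HashMap for UIN -> space, array for space -> UIN.
class UinSpaceBiMap {
    private final Map<String, Integer> spaceByUin = new HashMap<>();
    private final String[] uinBySpace;   // index = storage space number

    UinSpaceBiMap(int capacity) {
        this.uinBySpace = new String[capacity];
    }

    void put(String uin, int space) {
        spaceByUin.put(uin, space);
        uinBySpace[space] = uin;
    }

    Integer spaceOf(String uin) {        // query 3: UIN -> storage space number
        return spaceByUin.get(uin);
    }

    void removeBySpace(int space) {      // removal is driven by the storage space number
        String uin = uinBySpace[space];
        if (uin != null) {
            spaceByUin.remove(uin);
            uinBySpace[space] = null;
        }
    }
}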

Java program for lucky draw

There are 5 people and 5 things. The 5 people are:
Mg Mg, Su Su, Zaw Zaw, Kyaw Kyaw, Mya Mya
The 5 things are:
an iPhone, an mp3 player, a TV, and two "thank you"s
I want each person to get a thing randomly, like a lucky draw, every time I run this in Java. The same people shouldn't get the same things when I run it a second time. I am not sure if I can use arrays to store the names or the things.
Can you tell me how to do that? Can I get the code if possible?
I've already tried this but have no idea how to continue.
ArrayList<String> name = new ArrayList<String>();
name.add("Mg Mg");
name.add("Su Su");
name.add("Zaw Zaw");
name.add("Kyaw Kyaw");
name.add("Mya Mya");
ArrayList<String> items = new ArrayList<String>();
items.add("Iphone");
items.add("mp3");
items.add("tv ");
items.add("Iphone4");
items.add("Iphone5");
int rand = (int) (Math.random() * 5);
Every time you generate a random number, save it to a DB/file and check whether the newly generated number is already there. If it is not there, use that number for the mapping; if it is there, generate the number again.
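A minimal sketch of that idea, using an in-memory HashSet in place of the DB/file and reusing the name and items lists from the question:
// Sketch: draw a distinct random index for every person, regenerating on collisions.
Set<Integer> used = new HashSet<Integer>();
Random rng = new Random();
for (String person : name) {
    int rand;
    do {
        rand = rng.nextInt(items.size());   // candidate index 0..4
    } while (!used.add(rand));              // add() returns false if it was drawn before
    System.out.println(person + " gets " + items.get(rand));
}
An equivalent in-memory alternative is to call Collections.shuffle(items) once and pair name.get(i) with items.get(i).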
