I have been working on a custom object detection model: I have a set of images that need to be detected in another image, and in real time using a phone camera. Is it possible to create a custom model without having to use YOLO or other predefined models? Also, if I train the custom model in Python using TensorFlow/Keras, is it possible to export that model for use in Java in an Android application?
I have heard of using CustomVision from Azure, but there I have to label each image manually rather than just using the filename of the image, which becomes time-consuming with a large dataset.
Any advice will be highly appreciated.
Related
I am working on facial expression recognition using a deep learning algorithm (a CNN) to identify the user's emotions, such as happy, sad, and angry. I trained and tested it in Python using a pre-trained VGG-16 model, retraining the top 3 layers on my images; to speed up the training process I used TensorFlow. The test accuracy is 62%. I have saved the architecture and weights of my model in a train_model.h5 file.
Now I have to run it on an Android phone. For that I used TensorFlow Lite, as it is suited to Android phones, and converted my .h5 file to a .tflite file using the TensorFlow Lite converter.
This is what I did for the conversion:

# TensorFlow 1.x API; in TF 2.x this lives at tf.lite.TFLiteConverter
from tensorflow.contrib import lite

converter = lite.TFLiteConverter.from_keras_model_file("train_model.h5")
tflite_model = converter.convert()
open("model.tflite", "wb").write(tflite_model)
I successfully got the .tflite file.
Coming to the Android part, I chose Java to load the .tflite file and predict the emotion in a new image. I have gone through the "image classification" example on the TensorFlow Lite website, but I am confused about how to use it. I don't know how to read the .tflite file, use it to predict the output for a new image, and display the result in the Android app. Please point me to some good resources with explanations.
Here is a good blog post on how to use an image classification TFLite model on Android:
https://medium.com/tensorflow/using-tensorflow-lite-on-android-9bbc9cb7d69d
How you'll run inference depends largely on how the model was built and what inputs it expects. If the approach in the blog post above doesn't work, you'll have to manually compose the tensor to feed to the model. The code in this codelab does just that.
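If you do end up composing the tensor yourself, the gist is to unpack the bitmap's pixels into a normalized float buffer in the layout the model expects. A minimal sketch in plain Java, assuming (hypothetically) a [1, height, width, 3] float32 input normalized to [0, 1] — check your own model's input details:

```java
// Sketch: manually composing the input tensor for an image model.
public class TensorComposer {
    // Converts packed ARGB pixels (as returned by Bitmap.getPixels on
    // Android) into a flat float array in R, G, B channel order.
    public static float[] toInputTensor(int[] argbPixels) {
        float[] input = new float[argbPixels.length * 3];
        int i = 0;
        for (int pixel : argbPixels) {
            input[i++] = ((pixel >> 16) & 0xFF) / 255.0f; // red
            input[i++] = ((pixel >> 8) & 0xFF) / 255.0f;  // green
            input[i++] = (pixel & 0xFF) / 255.0f;         // blue
        }
        return input;
    }

    public static void main(String[] args) {
        int white = 0xFFFFFFFF;  // opaque white pixel
        int red = 0xFFFF0000;    // opaque red pixel
        float[] t = toInputTensor(new int[] {white, red});
        System.out.println(t[0] + " " + t[3] + " " + t[4]); // prints 1.0 1.0 0.0
    }
}
```

The resulting float array (wrapped in a direct ByteBuffer or FloatBuffer) is what you would hand to the TFLite Interpreter's run(input, output) call.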
Another option to consider is the face detection API in ML Kit. It does some of what you are looking for (though not all of it) by detecting the curvature of the smile.
I want to build a local application with interactive maps (i.e. regional maps that display data when you click on a region). I have to embed data like news events and other things as needed in the interactive map. I know a little Java and Python. How should I proceed with my ideas to create my project?
First, decide which platform and OS you want to target. If you are coding for Android or iOS, you could simply use the Google Maps API and add custom markers to it. The same would be sufficient for a web application, while for desktop PCs I would not suggest this approach.
I want to make an Android app integrating OpenCV with Android Studio. I have a set of 2D hardcopy card images that I want to store as templates within the app. Then, when I point the camera at any of the cards, the app should search the directory containing the templates, look for a match, and provide feedback if a match is found. If anyone can guide me on how to achieve this, it would be highly appreciated.
Also, if not OpenCV, which SDK or tool should be preferred?
The question is a general one, so the answer will be general as well, and will make assumptions about what you'd like to accomplish with your application.
Android Studio with OpenCV is probably a reasonable stack to use.
Presuming the library has more than a trivial number of images, you'll probably want to extract matching information for each image in your library in an offline process (on your development machine). For instance, you would configure your development machine with a SQLite database driver and OpenCV, and write a program that extracts feature points and saves them to a SQLite database. That database file can then be bundled into the Android APK's assets, ready on the application's first run. The application would take frames from the camera and compare them with each item in the database, looking for matches.
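The comparison step is ordinarily done with OpenCV's own DescriptorMatcher, but the idea behind it can be sketched in plain Java. Binary descriptors such as ORB's are compared by Hamming distance; the frame matches whichever template descriptor is closest, subject to a threshold. This is an illustrative sketch, not the OpenCV API:

```java
import java.util.List;

public class TemplateMatcher {
    // Hamming distance between two equal-length binary descriptors.
    static int hamming(byte[] a, byte[] b) {
        int d = 0;
        for (int i = 0; i < a.length; i++) {
            d += Integer.bitCount((a[i] ^ b[i]) & 0xFF);
        }
        return d;
    }

    // Returns the index of the closest library descriptor, or -1 if
    // nothing falls under the acceptance threshold.
    static int bestMatch(byte[] query, List<byte[]> library, int maxDistance) {
        int best = -1, bestDist = Integer.MAX_VALUE;
        for (int i = 0; i < library.size(); i++) {
            int d = hamming(query, library.get(i));
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return bestDist <= maxDistance ? best : -1;
    }

    public static void main(String[] args) {
        List<byte[]> library = List.of(
            new byte[] {0x0F, 0x00},                 // template 0
            new byte[] {(byte) 0xF0, (byte) 0xFF});  // template 1
        byte[] frameDescriptor = {0x0F, 0x01};       // close to template 0
        System.out.println(bestMatch(frameDescriptor, library, 4)); // prints 0
    }
}
```

In the real pipeline each image yields many descriptors, so a match is usually declared when enough of a frame's descriptors agree on the same template, often followed by a geometric check.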
This is my question: I want to use the API hosted at http://212.88.98.116:4050/. This is a simple JSON API exposing some live data. I want to create an Android program in Java that presents a graphical representation (bar or line graph) of the data. Please help; I need this within 5 hours from now.
You probably should use a third party library for this.
MPAndroidChart is quite popular:
http://code.tutsplus.com/tutorials/add-charts-to-your-android-app-using-mpandroidchart--cms-23335
Download the data in your code, then use this simple API to create a bar graph.
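The shape of that API's payload isn't documented here, so as a sketch, suppose each record carries a numeric "value" field (a hypothetical name; substitute the keys your endpoint actually returns). You pull the numbers out of the JSON and turn them into chart entries. On Android you would normally use a JSON parser (org.json ships with the platform) rather than a regex:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BarValues {
    // Extracts every numeric "value" field from a JSON string.
    static List<Float> extractValues(String json) {
        List<Float> values = new ArrayList<>();
        Matcher m = Pattern.compile("\"value\"\\s*:\\s*([0-9.]+)").matcher(json);
        while (m.find()) {
            values.add(Float.parseFloat(m.group(1)));
        }
        return values;
    }

    public static void main(String[] args) {
        String sample = "[{\"value\": 3.5}, {\"value\": 7}]";
        System.out.println(extractValues(sample)); // prints [3.5, 7.0]
        // With MPAndroidChart, each value becomes a BarEntry(index, value)
        // collected into a BarDataSet, which is then set on a BarChart view.
    }
}
```

Remember to do the network fetch off the main thread (Android forbids network I/O on the UI thread) and update the chart on the UI thread afterwards.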
I was wondering how anyone can make graphics such as this one https://play.google.com/store/apps/details?id=com.surpax.ledflashlight.panel in Android. By graphics I am referring to the interface. Did they use Eclipse or another tool?
Thanks in advance.
They probably used Eclipse for the programming and created the images separately. The images are saved at multiple resolutions, and the right one is used automatically based on the resolution of the device. The images are easily added to the project by referring to them in the XML.