I am currently working on a project that uses RTC (Rational Team Concert) through the Plain Java API.
I have an online repository with a main stream named Sample_ROOT. From this stream I want to create another one named Sample_Client_Application. Sample_Client_Application should have all components of Sample_ROOT, and whenever changes are committed to Sample_ROOT they have to be pushed into Sample_Client_Application.
I tried to implement the whole process described above in Eclipse with the RTC Plain Java API, using the code below.
/*
parentStream : represents Sample_ROOT
appStream : represents DTO of Sample_Client_Application
projectStreams is a map of projects with their workspaces : HashMap<String, List<IWorkspaceHandle>>
*/
public IWorkspaceConnection createAppInTeamAreas(String parentStream, StreamDTO appStream)
throws TeamRepositoryException {
monitor.println("Getting Workspace");
IWorkspaceManager workspaceManager = SCMPlatform.getWorkspaceManager(repository);
IWorkspaceConnection workspaceConnection = null;
Optional<IWorkspaceHandle> handle = projectStreams.get(appStream.getProjectName()).stream().filter(xHandle -> {
IWorkspace workspace = null;
try {
workspace = (IWorkspace) repository.itemManager().fetchCompleteItem(xHandle, IItemManager.DEFAULT,
null);
} catch (TeamRepositoryException e) {
e.printStackTrace();
}
return workspace != null && workspace.getName().equals(parentStream); // equals(), not ==, for string comparison
}).findFirst();
if (handle.isPresent()) {
IWorkspace parentWorkSpace = null;
try {
parentWorkSpace = (IWorkspace) repository.itemManager().fetchCompleteItem(handle.get(),
IItemManager.DEFAULT, null);
} catch (TeamRepositoryException e) {
e.printStackTrace();
return null;
}
IWorkspaceConnection parentConn = workspaceManager.getWorkspaceConnection(handle.get(), monitor);
List<IComponentHandle> parentComps = (List<IComponentHandle>) parentConn.getComponents();
IWorkspaceConnection childConn = null;
IItemManager itemManager = repository.itemManager();
for (IComponentHandle parentCompHandle : parentComps) {
IItemHandle itemHandle = (IItemHandle) parentCompHandle;
IComponent parentComp = (IComponent) itemManager.fetchCompleteItem(itemHandle, IItemManager.DEFAULT,
monitor);
monitor.println(String.format("Component name : %s", parentComp.getName()));
childConn = createAndAddSynchronizedComponents(parentWorkSpace, parentComp, parentConn, appStream,
childConn);
}
workspaceConnection = childConn;
}
return workspaceConnection;
}
private IWorkspaceConnection createAndAddSynchronizedComponents(IWorkspace parentStream, IComponent parentComp,
IWorkspaceConnection parentConnection, StreamDTO childStream, IWorkspaceConnection childConnection) {
IWorkspaceManager workspaceManager = SCMPlatform.getWorkspaceManager(repository);
IWorkspaceConnection workspaceConnection = null;
try {
if (childConnection == null) {
workspaceConnection = workspaceManager.createWorkspace(repository.loggedInContributor(),
childStream.getName(), childStream.getDescription(), monitor);
} else {
workspaceConnection = childConnection;
//Set flow for the newly created repository
IFlowTable flowTarget=workspaceConnection.getFlowTable().getWorkingCopy();
flowTarget.addDeliverFlow(parentStream, repository.getId(), repository.getRepositoryURI(), parentConnection.getComponents(), "Scoped");
flowTarget.addAcceptFlow(parentStream, repository.getId(), repository.getRepositoryURI(), parentConnection.getComponents(), "Scoped");
flowTarget.setCurrent(flowTarget.getDeliverFlow(parentStream));
flowTarget.setCurrent(flowTarget.getAcceptFlow(parentStream));
flowTarget.setDefault(flowTarget.getDeliverFlow(parentStream));
flowTarget.setDefault(flowTarget.getAcceptFlow(parentStream));
workspaceConnection.setFlowTable(flowTarget, monitor);
//Add the components to the workspace
List<Object> componentsToBeAdded=new ArrayList<>();
componentsToBeAdded.add(workspaceConnection.componentOpFactory().addComponent(parentComp, true));
workspaceConnection.applyComponentOperations(componentsToBeAdded, monitor);
//Accept stream changes to local repository
IChangeHistorySyncReport streamChangeHistorySyncReport=workspaceConnection.compareTo(parentConnection, WorkspaceComparisonFlags.INCLUDE_BASELINE_INFO, Collections.EMPTY_LIST, null);
workspaceConnection.accept(AcceptFlags.DEFAULT, parentConnection, streamChangeHistorySyncReport, streamChangeHistorySyncReport.incomingBaselines(parentComp), streamChangeHistorySyncReport.incomingChangeSets(parentComp), monitor);
}
} catch (TeamRepositoryException e) {
e.printStackTrace();
}
return workspaceConnection;
}
Sample_Client_Application is created successfully, but it has no components. Can anyone please help me?
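A likely culprit (my assumption; the original post doesn't confirm it): on the first loop iteration childConnection is null, so the workspace is created but the else branch that adds the component and sets up the flow table is skipped for that component. A minimal restructuring sketch under that assumption, reusing only the RTC calls already shown above (create the connection once, then add every component and set the flow table unconditionally):
// Sketch only: assumes the same fields (repository, monitor) and imports as the snippet above;
// every RTC call here is taken verbatim from the question's code.
private IWorkspaceConnection createChildStream(IWorkspace parentStream,
IWorkspaceConnection parentConnection, StreamDTO childStream,
List<IComponent> parentComps) throws TeamRepositoryException {
IWorkspaceManager workspaceManager = SCMPlatform.getWorkspaceManager(repository);
// Create the child workspace once, outside the component loop.
IWorkspaceConnection childConnection = workspaceManager.createWorkspace(
repository.loggedInContributor(), childStream.getName(),
childStream.getDescription(), monitor);
// Add every component, including the first one.
List<Object> componentsToBeAdded = new ArrayList<>();
for (IComponent parentComp : parentComps) {
componentsToBeAdded.add(childConnection.componentOpFactory().addComponent(parentComp, true));
}
childConnection.applyComponentOperations(componentsToBeAdded, monitor);
// Set up the accept/deliver flows to the parent, exactly as in the original else branch.
IFlowTable flowTarget = childConnection.getFlowTable().getWorkingCopy();
flowTarget.addDeliverFlow(parentStream, repository.getId(), repository.getRepositoryURI(), parentConnection.getComponents(), "Scoped");
flowTarget.addAcceptFlow(parentStream, repository.getId(), repository.getRepositoryURI(), parentConnection.getComponents(), "Scoped");
flowTarget.setCurrent(flowTarget.getAcceptFlow(parentStream));
flowTarget.setDefault(flowTarget.getAcceptFlow(parentStream));
childConnection.setFlowTable(flowTarget, monitor);
return childConnection;
}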
I am trying to write a custom launch configuration for running a plug-in project as an Eclipse application. I have to run the plug-in with limited dependencies. Is it possible to override methods in org.eclipse.pde.launching.EclipseApplicationLaunchConfiguration? If yes, how do I do it?
You can't easily override the methods in EclipseApplicationLaunchConfiguration. That would require writing a new launch configuration - probably by using the org.eclipse.debug.core.launchConfigurationTypes extension point to define a new launch type.
EclipseApplicationLaunchConfiguration always uses the settings from the current entry in the 'Eclipse Application' section of the 'Run Configurations'. You can always edit the run configuration to change the dependencies or create another run configuration with different dependencies.
To write a custom launch configuration:
Extend the class org.eclipse.jdt.junit.launcher.JUnitLaunchShortcut
Override the method createLaunchConfiguration
Invoking super.createLaunchConfiguration(element) will return an ILaunchConfigurationWorkingCopy
Use the copy and set your own attributes that have to be modified
Attributes can be found in IPDELauncherConstants
Eclipse by default runs all the projects found in the workspace. This behavior can be changed by taking the configuration created above and overriding its attributes with your own custom settings.
public class LaunchShortcut extends JUnitLaunchShortcut {
class PluginModelNameBuffer {
private List<String> nameList;
PluginModelNameBuffer() {
super();
this.nameList = new ArrayList<>();
}
void add(final IPluginModelBase model) {
this.nameList.add(getPluginName(model));
}
private String getPluginName(final IPluginModelBase model) {
IPluginBase base = model.getPluginBase();
String id = base.getId();
StringBuilder buffer = new StringBuilder(id);
ModelEntry entry = PluginRegistry.findEntry(id);
if ((entry != null) && (entry.getActiveModels().length > 1)) {
buffer.append('*');
buffer.append(model.getPluginBase().getVersion());
}
return buffer.toString();
}
@Override
public String toString() {
Collections.sort(this.nameList);
StringBuilder result = new StringBuilder();
for (String name : this.nameList) {
if (result.length() > 0) {
result.append(',');
}
result.append(name);
}
if (result.length() == 0) {
return null;
}
return result.toString();
}
}
@Override
protected ILaunchConfigurationWorkingCopy createLaunchConfiguration(final IJavaElement element)
throws CoreException {
ILaunchConfigurationWorkingCopy configuration = super.createLaunchConfiguration(element);
configuration.setAttribute(IJavaLaunchConfigurationConstants.ATTR_VM_ARGUMENTS, "memory");
configuration.setAttribute(IPDELauncherConstants.USE_PRODUCT, false);
configuration.setAttribute(IPDELauncherConstants.USE_DEFAULT, false);
configuration.setAttribute(IPDELauncherConstants.AUTOMATIC_ADD, false);
addDependencies(configuration);
return configuration;
}
@SuppressWarnings("restriction")
private void addDependencies(final ILaunchConfigurationWorkingCopy configuration) throws CoreException {
PluginModelNameBuffer wBuffer = new PluginModelNameBuffer();
PluginModelNameBuffer tBuffer = new PluginModelNameBuffer();
Set<IPluginModelBase> addedModels = new HashSet<>();
String projectName = configuration.getAttribute(IJavaLaunchConfigurationConstants.ATTR_PROJECT_NAME, "");
IProject project = ResourcesPlugin.getWorkspace().getRoot().getProject(projectName);
IPluginModelBase model = PluginRegistry.findModel(project);
wBuffer.add(model);
addedModels.add(model);
IPluginModelBase[] externalModels = PluginRegistry.getExternalModels();
for (IPluginModelBase externalModel : externalModels) {
String id = externalModel.getPluginBase().getId();
if (id != null) {
switch (id) {
case "org.eclipse.ui.ide.application":
case "org.eclipse.equinox.ds":
case "org.eclipse.equinox.event":
tBuffer.add(externalModel);
addedModels.add(externalModel);
break;
default:
break;
}
}
}
TreeSet<String> checkedWorkspace = new TreeSet<>();
IPluginModelBase[] workspaceModels = PluginRegistry.getWorkspaceModels();
for (IPluginModelBase workspaceModel : workspaceModels) {
checkedWorkspace.add(workspaceModel.getPluginBase().getId());
}
EclipsePluginValidationOperation eclipsePluginValidationOperation = new EclipsePluginValidationOperation(
configuration);
eclipsePluginValidationOperation.run(null);
while (eclipsePluginValidationOperation.hasErrors()) {
Set<String> additionalIds = DependencyManager.getDependencies(addedModels.toArray(), true, null);
if (additionalIds.isEmpty()) {
break;
}
additionalIds.stream().map(PluginRegistry::findEntry).filter(Objects::nonNull).map(ModelEntry::getModel)
.forEach(addedModels::add);
for (String id : additionalIds) {
IPluginModelBase plugin = findPlugin(id);
if (checkedWorkspace.contains(plugin.getPluginBase().getId())
&& (!plugin.getPluginBase().getId().endsWith("tests"))) {
wBuffer.add(plugin);
} else {
tBuffer.add(plugin);
}
}
eclipsePluginValidationOperation.run(null);
}
configuration.setAttribute(IPDELauncherConstants.SELECTED_WORKSPACE_PLUGINS, wBuffer.toString());
configuration.setAttribute(IPDELauncherConstants.SELECTED_TARGET_PLUGINS, tBuffer.toString());
}
protected IPluginModelBase findPlugin(final String id) {
ModelEntry entry = PluginRegistry.findEntry(id);
if (entry != null) {
return entry.getModel();
}
return null;
}
}
Hi, I'm trying to build a PACS server using Java. dcm4che appears to be quite popular, but I'm unable to find any good examples for it.
As a starting point I inspected dcmqrscp, and it successfully stores a DICOM image. But I cannot manage to handle a C-MOVE call. Here's my C-MOVE handler. It finds the requested DICOM file, adds a URL and other attributes, and it doesn't throw any exception, yet the client doesn't receive any files.
private final class CMoveSCPImpl extends BasicCMoveSCP {
private final String[] qrLevels;
private final QueryRetrieveLevel rootLevel;
public CMoveSCPImpl(String sopClass, String... qrLevels) {
super(sopClass);
this.qrLevels = qrLevels;
this.rootLevel = QueryRetrieveLevel.valueOf(qrLevels[0]);
}
@Override
protected RetrieveTask calculateMatches(Association as, PresentationContext pc, final Attributes rq, Attributes keys) throws DicomServiceException {
QueryRetrieveLevel level = QueryRetrieveLevel.valueOf(keys, qrLevels);
try {
level.validateRetrieveKeys(keys, rootLevel, relational(as, rq));
} catch (Exception e) {
e.printStackTrace();
}
String moveDest = rq.getString(Tag.MoveDestination);
final Connection remote = new Connection("reciverAE", as.getSocket().getInetAddress().getHostAddress(), 11113);
if (remote == null) // note: this check can never fire, since 'new' never returns null
throw new DicomServiceException(Status.MoveDestinationUnknown, "Move Destination: " + moveDest + " unknown");
List<T> matches = DcmQRSCP.this.calculateMatches(keys);
if (matches.isEmpty())
return null;
AAssociateRQ aarq;
Association storeas = null;
try {
aarq = makeAAssociateRQ(as.getLocalAET(), moveDest, matches);
storeas = openStoreAssociation(as, remote, aarq);
} catch (Exception e) {
e.printStackTrace();
}
BasicRetrieveTask<T> retrieveTask = null;
retrieveTask = new BasicRetrieveTask<T>(Dimse.C_MOVE_RQ, as, pc, rq, matches, storeas, new BasicCStoreSCU<T>());
retrieveTask.setSendPendingRSPInterval(getSendPendingCMoveInterval());
return retrieveTask;
}
private Association openStoreAssociation(Association as, Connection remote, AAssociateRQ aarq)
throws DicomServiceException {
try {
return as.getApplicationEntity().connect(as.getConnection(),
remote, aarq);
} catch (Exception e) {
throw new DicomServiceException(
Status.UnableToPerformSubOperations, e);
}
}
private AAssociateRQ makeAAssociateRQ(String callingAET,
String calledAET, List<T> matches) {
AAssociateRQ aarq = new AAssociateRQ();
aarq.setCalledAET(calledAET);
aarq.setCallingAET(callingAET);
for (InstanceLocator match : matches) {
if (aarq.addPresentationContextFor(match.cuid, match.tsuid)) {
if (!UID.ExplicitVRLittleEndian.equals(match.tsuid))
aarq.addPresentationContextFor(match.cuid,
UID.ExplicitVRLittleEndian);
if (!UID.ImplicitVRLittleEndian.equals(match.tsuid))
aarq.addPresentationContextFor(match.cuid,
UID.ImplicitVRLittleEndian);
}
}
return aarq;
}
private boolean relational(Association as, Attributes rq) {
String cuid = rq.getString(Tag.AffectedSOPClassUID);
ExtendedNegotiation extNeg = as.getAAssociateAC().getExtNegotiationFor(cuid);
return QueryOption.toOptions(extNeg).contains(
QueryOption.RELATIONAL);
}
}
I added the code below to send a DICOM file as a response:
String cuid = rq.getString(Tag.AffectedSOPClassUID);
String iuid = rq.getString(Tag.AffectedSOPInstanceUID);
String tsuid = pc.getTransferSyntax();
try {
DcmQRSCP.this.as=as;
File f = new File("D:\\dcmqrscpTestDCMDir\\1.2.840.113619.2.30.1.1762295590.1623.978668949.886\\1.2.840.113619.2.30.1.1762295590.1623.978668949.887\\1.2.840.113619.2.30.1.1762295590.1623.978668949.888");
FileInputStream in = new FileInputStream(f);
InputStreamDataWriter data = new InputStreamDataWriter(in);
// !1! as.cmove(cuid,1,keys,tsuid,"STORESCU");
as.cstore(cuid,iuid,1,data,tsuid,rspHandlerFactory.createDimseRSPHandler(f));
} catch (Exception e) {
e.printStackTrace();
}
It throws this exception:
org.dcm4che3.net.NoRoleSelectionException: No Role Selection for SOP Class 1.2.840.10008.5.1.4.1.2.2.2 - Study Root Query/Retrieve Information Model - MOVE as SCU negotiated
You should add a role to the application entity, like:
applicationEntity.addTransferCapability(
new TransferCapability(null, "*", TransferCapability.Role.SCP, "*"));
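For context, a hedged sketch of where such a transfer capability is typically registered in dcm4che3; the device name and AE title below are hypothetical placeholders, and only the addTransferCapability(...) call and the TransferCapability constructor come from the answer above:
import org.dcm4che3.net.ApplicationEntity;
import org.dcm4che3.net.Connection;
import org.dcm4che3.net.Device;
import org.dcm4che3.net.TransferCapability;
static ApplicationEntity createAcceptorAE() {
Device device = new Device("dcmqrscp"); // hypothetical device name
ApplicationEntity applicationEntity = new ApplicationEntity("DCMQRSCP"); // hypothetical AE title
Connection conn = new Connection();
device.addConnection(conn);
device.addApplicationEntity(applicationEntity);
applicationEntity.setAssociationAcceptor(true);
applicationEntity.addConnection(conn);
// accept any SOP class in any transfer syntax, acting as SCP:
applicationEntity.addTransferCapability(
new TransferCapability(null, "*", TransferCapability.Role.SCP, "*"));
return applicationEntity;
}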
I want to upload a file to an existing Google Drive folder.
I am using the answer from "how to upload an image from my android app to a specific folder on google drive" (smokybob's answer) to get the folder, but I'm not sure how to implement it:
//Search by name and type folder
String qStr = "mimeType = 'application/vnd.google-apps.folder' and title = 'myFolder'";
//Get the list of Folders
FileList fList=service.files().list().setQ(qStr).execute();
//Check that the result is exactly one folder
File folder = null;
if (fList.getItems().size() == 1){ // getItems() returns a List: use size()/get(), not .lenght/[]
folder = fList.getItems().get(0);
}
//Set the parent on the file metadata before inserting
body.setParents(Arrays.asList(new ParentReference().setId(folder.getId())));
//Create and execute the insert request as in the sample
File file = service.files().insert(body, mediaContent).execute();
I am getting cannot-resolve-symbol errors for FileList, body, and mediaContent.
I am getting cannot-resolve-method errors for .files(), getItems(), setParents(), newParentReference(), and execute().
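Those errors suggest the snippet's types and variables were never declared in your class. A hedged sketch of the Drive v2 REST client pieces the quoted snippet assumes (note this is the com.google.api.services.drive client, not the GDAA com.google.android.gms.drive API used in GetFile.java below; the title, mime type, and local path are hypothetical placeholders):
// Drive v2 Java client imports the quoted snippet relies on:
import com.google.api.client.http.FileContent;
import com.google.api.services.drive.Drive;
import com.google.api.services.drive.model.File;
import com.google.api.services.drive.model.FileList;
import com.google.api.services.drive.model.ParentReference;
// 'service' must be an authorized Drive instance; 'body' and 'mediaContent'
// have to be built before the quoted code can compile:
static void buildInsertInputs(Drive service) {
File body = new File();
body.setTitle("myFile.txt"); // hypothetical title
body.setMimeType("text/plain"); // hypothetical mime type
FileContent mediaContent = new FileContent("text/plain",
new java.io.File("/path/to/local/file")); // hypothetical local path
// ... the search/insert code quoted above continues from here ...
}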
Class: GetFile.java (based on https://github.com/googledrive/android-demos/blob/master/src/com/google/android/gms/drive/sample/demo/CreateFileActivity.java)
public class GetFile extends UploadDrive {
private static final String TAG = "CreateFileActivity";
@Override
public void onConnected(Bundle connectionHint) {
super.onConnected(connectionHint);
// create new contents resource
Drive.DriveApi.newDriveContents(getGoogleApiClient())
.setResultCallback(driveContentsCallback);
}
final private ResultCallback<DriveContentsResult> driveContentsCallback = new
ResultCallback<DriveContentsResult>() {
@Override
public void onResult(DriveContentsResult result) {
if (!result.getStatus().isSuccess()) {
showMessage("Error while trying to create new file contents");
return;
}
final DriveContents driveContents = result.getDriveContents();
// Perform I/O off the UI thread.
new Thread() {
@Override
public void run() {
// write content to DriveContents
OutputStream outputStream = driveContents.getOutputStream();
Writer writer = new OutputStreamWriter(outputStream);
try {
writer.write(MainActivity.driveText); //what is the problem?
writer.close();
} catch (IOException e) {
Log.e(TAG, e.getMessage());
}
MetadataChangeSet changeSet = new MetadataChangeSet.Builder()
.setTitle("New file")
.setMimeType("text/plain")
.setStarred(true).build();
// create a file on root folder
Drive.DriveApi.getRootFolder(getGoogleApiClient())
.createFile(getGoogleApiClient(), changeSet, driveContents)
.setResultCallback(fileCallback);
}
}.start();
}
};
final private ResultCallback<DriveFileResult> fileCallback = new
ResultCallback<DriveFileResult>() {
@Override
public void onResult(DriveFileResult result) {
if (!result.getStatus().isSuccess()) {
showMessage("Error while trying to create the file");
return;
}
showMessage("Created a file with content: " + result.getDriveFile().getDriveId());
}
};
}
When a folder is created under GDAA, it produces a DriveId.
/**************************************************************************
* create file/folder in GOODrive
* @param prnId parent's ID, (null or "root") for root
* @param titl file name
* @param mime file mime type
* @param file file (with content) to create (optional; if null, create folder)
* @return file id / null on fail
*/
static String create(String prnId, String titl, String mime, java.io.File file) {
DriveId dId = null;
if (mGAC != null && mGAC.isConnected() && titl != null) try {
DriveFolder pFldr = (prnId == null || prnId.equalsIgnoreCase("root")) ?
Drive.DriveApi.getRootFolder(mGAC):
Drive.DriveApi.getFolder(mGAC, DriveId.decodeFromString(prnId));
if (pFldr == null) return null; //----------------->>>
MetadataChangeSet meta;
if (file != null) { // create file
if (mime != null) { // file must have mime
DriveContentsResult r1 = Drive.DriveApi.newDriveContents(mGAC).await();
if (r1 == null || !r1.getStatus().isSuccess()) return null; //-------->>>
meta = new MetadataChangeSet.Builder().setTitle(titl).setMimeType(mime).build();
DriveFileResult r2 = pFldr.createFile(mGAC, meta, r1.getDriveContents()).await();
DriveFile dFil = r2 != null && r2.getStatus().isSuccess() ? r2.getDriveFile() : null;
if (dFil == null) return null; //---------->>>
r1 = dFil.open(mGAC, DriveFile.MODE_WRITE_ONLY, null).await();
if ((r1 != null) && (r1.getStatus().isSuccess())) try {
Status stts = file2Cont(r1.getDriveContents(), file).commit(mGAC, meta).await();
if ((stts != null) && stts.isSuccess()) {
MetadataResult r3 = dFil.getMetadata(mGAC).await();
if (r3 != null && r3.getStatus().isSuccess()) {
dId = r3.getMetadata().getDriveId();
}
}
} catch (Exception e) {
UT.le(e);
}
}
} else {
meta = new MetadataChangeSet.Builder().setTitle(titl).setMimeType(UT.MIME_FLDR).build();
DriveFolderResult r1 = pFldr.createFolder(mGAC, meta).await();
DriveFolder dFld = (r1 != null) && r1.getStatus().isSuccess() ? r1.getDriveFolder() : null;
if (dFld != null) {
MetadataResult r2 = dFld.getMetadata(mGAC).await();
if ((r2 != null) && r2.getStatus().isSuccess()) {
dId = r2.getMetadata().getDriveId();
}
}
}
} catch (Exception e) { UT.le(e); }
return dId == null ? null : dId.encodeToString();
}
(must be run on a non-UI thread)
This ID is used in subsequent calls as a "parent ID". If you have questions about unresolved methods, please refer to this GitHub project. BTW (as I mentioned before), it does everything you're trying to accomplish (creates folders, creates text files, reads contents back, deletes files/folders, ...)
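For illustration, a hedged usage sketch of the create(...) helper above (the folder and file names are hypothetical, and like the helper itself it must run on a non-UI thread):
// Hypothetical names; create(...) is the helper defined above.
String folderId = create("root", "myFolder", UT.MIME_FLDR, null); // folder under root
if (folderId != null) {
java.io.File localFile = new java.io.File("/path/to/local/file.txt"); // hypothetical path
String fileId = create(folderId, "myFile.txt", "text/plain", localFile); // file inside that folder
}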
Good luck!
I am trying to implement a function like the following:
public List<RevCommit> createFileHistoryFromTag(File file, String fromTag,
String toTag) {
Iterable<RevCommit> commits = null;
try {
List<Ref> refs = git.tagList().call();
ObjectId fromTagId = null;
ObjectId toTagId = null;
for (Ref refTag : refs) {
if (refTag.getName().equals(fromTag)) {
fromTagId = refTag.getObjectId();
} else if (refTag.getName().equals(toTag)) {
toTagId = refTag.getObjectId();
}
}
if (fromTagId == null) {
throw new RuntimeException("Couldn't resolve fromTag: "
+ fromTag + " Make sure you gave an existing ref");
}
if (toTagId == null) {
throw new RuntimeException("Couldn't resolve toCommit: "
+ toTag + " Make sure you gave an existing ref");
}
commits = getCommitListFromTagForFile(file.getAbsolutePath()
.substring(1), fromTagId, toTagId);
} catch (GitAPIException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
return commits == null ? Lists.<RevCommit>newArrayList() : Lists.newArrayList(commits); // avoid an NPE if an exception was caught above
}
and getCommitListFromTagForFile like the following:
private RevCommitList<RevCommit> getCommitListFromTagForFile(String path,
ObjectId start, ObjectId end) throws IOException {
RevWalk revisionWalk = buildRevisionWalk(path, start, end);
final RevCommitList<RevCommit> list = new RevCommitList<>();
list.source(revisionWalk);
list.fillTo(Integer.MAX_VALUE);
return list;
}
and finally the buildRevisionWalk function:
private RevWalk buildRevisionWalk(String path, ObjectId start, ObjectId end)
throws IOException {
Config config = new Config(git.getRepository().getConfig());
config.setString("diff", null, "renames", "copies");
config.setInt("diff", null, "renameLimit", Integer.MAX_VALUE);
DiffConfig diffConfig = config.get(DiffConfig.KEY);
final RevWalk revisionWalk = new RevWalk(git.getRepository());
revisionWalk.setTreeFilter(FollowFilter.create(path, diffConfig));
revisionWalk.markStart(revisionWalk.parseCommit(start));
if (end != null) {
revisionWalk.markUninteresting(revisionWalk.parseCommit(end));
}
return revisionWalk;
}
But it doesn't return any commits, even if a file changed between the two tags.
Basically I want the functionality of
git log v.4.32b759..v4.32b760 -- some/file/path
and to get the commit messages between these two tags.
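One likely issue (my assumption, not confirmed by the post): the walk marks the fromTag commit as the start and the toTag commit as uninteresting, which is the reverse of the range fromTag..toTag; also, Ref.getName() returns full names like refs/tags/v4.32b760, so the equals() checks only match if full ref names are passed in. A minimal sketch of the corrected range walk under those assumptions, using the same JGit types as the snippets above:
// Sketch: walk commits reachable from toTag but not fromTag (i.e. fromTag..toTag),
// restricted to one path; assumes 'git' is the JGit Git handle from the question.
private RevWalk buildRangeWalk(String path, String fromTag, String toTag) throws IOException {
Repository repo = git.getRepository();
// resolve() peels annotated tags to commits via the ^{commit} suffix
ObjectId from = repo.resolve("refs/tags/" + fromTag + "^{commit}");
ObjectId to = repo.resolve("refs/tags/" + toTag + "^{commit}");
DiffConfig diffConfig = repo.getConfig().get(DiffConfig.KEY);
RevWalk walk = new RevWalk(repo);
walk.setTreeFilter(FollowFilter.create(path, diffConfig));
walk.markStart(walk.parseCommit(to)); // newer tag is the start
walk.markUninteresting(walk.parseCommit(from)); // older tag is excluded
return walk;
}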
I can't change the mapping. Can anybody help me find the bug in my code?
I have found this standard way to change the mapping according to several tutorials. But when I try to retrieve the mapping structure, only a blank mapping appears after the manual mapping creation.
However, after inserting some data the mapping specification does appear, because ES is of course using the default one. To be more specific, see the code below.
public class ElasticTest {
private String dbname = "ElasticSearch";
private String index = "indextest";
private String type = "table";
private Client client = null;
private Node node = null;
public ElasticTest(){
this.node = nodeBuilder().local(true).node();
this.client = node.client();
if(isIndexExist(index)){
deleteIndex(this.client, index);
createIndex(index);
}
else{
createIndex(index);
}
System.out.println("mapping structure before data insertion");
getMappings();
System.out.println("----------------------------------------");
createData();
System.out.println("mapping structure after data insertion");
getMappings();
}
public void getMappings() {
ClusterState clusterState = client.admin().cluster().prepareState()
.setFilterIndices(index).execute().actionGet().getState();
IndexMetaData inMetaData = clusterState.getMetaData().index(index);
MappingMetaData metad = inMetaData.mapping(type);
if (metad != null) {
try {
String structure = metad.getSourceAsMap().toString();
System.out.println(structure);
} catch (IOException e) {
e.printStackTrace();
}
}
}
private void createIndex(String index) {
XContentBuilder typemapping = buildJsonMappings();
String mappingstring = null;
try {
mappingstring = buildJsonMappings().string();
} catch (IOException e1) {
e1.printStackTrace();
}
client.admin().indices().create(new CreateIndexRequest(index)
.mapping(type, typemapping)).actionGet();
//try put mapping after index creation
/*
* PutMappingResponse response = null; try { response =
* client.admin().indices() .preparePutMapping(index) .setType(type)
* .setSource(typemapping.string()) .execute().actionGet(); } catch
* (ElasticSearchException e) { e.printStackTrace(); } catch
* (IOException e) { e.printStackTrace(); }
*/
}
private void deleteIndex(Client client, String index) {
try {
DeleteIndexResponse delete = client.admin().indices()
.delete(new DeleteIndexRequest(index)).actionGet();
if (!delete.isAcknowledged()) {
// deletion was not acknowledged; intentionally ignored here
}
} catch (Exception e) {
// intentionally ignored
}
}
private XContentBuilder buildJsonMappings(){
XContentBuilder builder = null;
try {
builder = XContentFactory.jsonBuilder();
builder.startObject()
.startObject("properties")
.startObject("ATTR1")
.field("type", "string")
.field("store", "yes")
.field("index", "analyzed")
.endObject()
.endObject()
.endObject();
} catch (IOException e) {
e.printStackTrace();
}
return builder;
}
private boolean isIndexExist(String index) {
ActionFuture<IndicesExistsResponse> exists = client.admin().indices()
.exists(new IndicesExistsRequest(index));
IndicesExistsResponse actionGet = exists.actionGet();
return actionGet.isExists();
}
private void createData(){
System.out.println("Data creation");
IndexResponse response=null;
for (int i=0;i<10;i++){
Map<String, Object> json = new HashMap<String, Object>();
json.put("ATTR1", "new value" + i);
response = this.client.prepareIndex(index, type)
.setSource(json)
.setOperationThreaded(false)
.execute()
.actionGet();
}
String _index = response.getIndex();
String _type = response.getType();
long _version = response.getVersion();
System.out.println("Index : "+_index+" Type : "+_type+" Version : "+_version);
System.out.println("----------------------------------");
}
public static void main(String[] args)
{
new ElasticTest();
}
}
I just want to change the ATTR1 field's index property to analyzed to ensure fast queries.
What am I doing wrong? I also tried to create the mapping after index creation, but it leads to the same effect.
OK, I found the answer on my own. At the type level I had to wrap the "properties" with the type name, e.g.:
"type1" : {
"properties" : {
.....
}
}
See the following code:
private XContentBuilder getMappingsByJson(){
XContentBuilder builder = null;
try {
builder = XContentFactory.jsonBuilder().startObject().startObject(type).startObject("properties");
for(int i = 1; i<5; i++){
builder.startObject("ATTR" + i)
.field("type", "integer")
.field("store", "yes")
.field("index", "analyzed")
.endObject();
}
builder.endObject().endObject().endObject();
}
catch (IOException e) {
e.printStackTrace();
}
return builder;
}
It creates mappings for the attributes ATTR1 to ATTR4. Now it is possible to define mappings dynamically, for example for a list of different attributes. Hope it helps someone else.
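For completeness, a hedged sketch of wiring this builder into the index creation from the question; it is the same CreateIndexRequest call as above, only with the type-wrapped builder:
// Same call as in createIndex(...) above, but with the type-wrapped mappings,
// so the mapping is registered at index-creation time.
private void createIndexWithMappings(String index) {
XContentBuilder typeMapping = getMappingsByJson(); // type-wrapped mappings from above
client.admin().indices()
.create(new CreateIndexRequest(index).mapping(type, typeMapping))
.actionGet();
}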