Is it possible to sign JavaApplicationStub with install4j? - java

I'm trying to create a macOS single bundle archive for my Java program. As you may know, macOS now has new security rules, so I signed up for a Developer ID Application certificate to generate the archive.
When I launch the build, after uploading the archive, it comes back with this response from Apple:
{
  "logFormatVersion": 1,
  "jobId": "X",
  "status": "Invalid",
  "statusSummary": "Archive contains critical validation errors",
  "statusCode": 4000,
  "archiveFilename": "XXX.dmg",
  "uploadDate": "2021-02-11T15:54:19Z",
  "sha256": "",
  "ticketContents": null,
  "issues": [
    {
      "severity": "error",
      "code": null,
      "path": "XXX.dmg/XXX.app/Contents/MacOS/JavaApplicationStub",
      "message": "The signature of the binary is invalid.",
      "docUrl": null,
      "architecture": "x86_64"
    }
  ]
}
I have tried everything, but I don't know how to sign the JavaApplicationStub with install4j.
I'm sorry for any mistakes in my English, I'm still learning.
Thank you.
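(For anyone hitting the same "The signature of the binary is invalid" error: before re-uploading, the rejected signature can be inspected locally with Apple's codesign tool. A generic sketch, not install4j-specific; the bundle name is taken from the notarization log above.)

# Show how the launcher binary is currently signed
codesign -dv --verbose=4 "XXX.app/Contents/MacOS/JavaApplicationStub"

# Verify the whole bundle, including nested binaries
codesign --verify --deep --strict --verbose=2 "XXX.app"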

Related

How to import Java packages in a Java core Jupyter Notebook

I'm trying to execute Java code from a GitHub repository in Google Colab. I know that Google Colab is for Python by default, so I installed the IJava kernel in the Jupyter notebook using the following commands:
!wget https://github.com/SpencerPark/IJava/releases/download/v1.3.0/ijava-1.3.0.zip
!unzip ijava-1.3.0.zip
!python install.py --sys-prefix
After that, I changed the Jupyter notebook configuration as follows:
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "example.ipynb",
      "provenance": []
    },
    "kernelspec": {
      "name": "python3"--->"java",
      "display_name": "python3"--->"java"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NzKT3-VExE4i"
      },
      "outputs": [],
      "source": [
        ""
      ]
    }
  ]
}
After these changes I was able to run Java in that Jupyter notebook. I then put this .ipynb file inside my Java NetBeans project, in the same package as my main file, but when I try to import a package from that project I get an error that the package isn't defined. How can I import packages from my Java GitHub repo inside the .ipynb file?
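One approach that can work (a sketch, assuming the project is first compiled and packaged as a jar, and that the IJava kernel's %jars magic is available; the jar and package names below are hypothetical):

// Notebook cell: put the project's compiled jar on the kernel classpath
// myproject.jar is a placeholder for the jar built from the GitHub repo
%jars /content/myproject.jar

// Later cells can then import packages from that jar as usual
import com.example.util.TextHelper;   // hypothetical package and class

The kernel only sees classes on its classpath, so simply placing the .ipynb file next to the .java sources in the NetBeans project is not enough; the classes have to be compiled and added to the classpath first.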

VS Code starts debugging in integrated terminal instead of debug console for Java

Can somebody help me with this? I am getting the program output in the integrated terminal, not in the debug console.
Here is my launch.json file:
{
  // Use IntelliSense to learn about possible attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
  "version": "0.2.0",
  "configurations": [
    {
      "type": "java",
      "name": "Debug (Launch) - Current File",
      "request": "launch",
      "mainClass": "${file}"
    }
  ]
}
Add "console": "internalConsole" in launch.json, which specifies VS Code debug console (input stream not supported) to launch the program.
More information about debugging, please view Launch args in Java.
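For illustration, the configuration from the question would then look roughly like this (only the "console" attribute is new):

{
  "type": "java",
  "name": "Debug (Launch) - Current File",
  "request": "launch",
  "mainClass": "${file}",
  "console": "internalConsole"
}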

Different S3 object listing when uploading via Java SDK and AWS Console

I have a bucket in S3 with the following structure and contents:
javaFolderA/
└── javaFolderB/
└── javaFile.tmp
consoleFolderA/
└── consoleFolderB/
└── consoleFile.tmp
The java* folders and file were uploaded via the Java SDK:
final File file = new File("C:\\javaFolderA\\javaFolderB\\javaFile.tmp");
client.putObject("testbucket", "javaFolderA/javaFolderB/javaFile.tmp", file);
The console* folders and file were created/uploaded from the web console (clicking the "+ Create folder" button for each folder, then uploading the file with public read permissions).
In the web console, after clicking to create a new folder, the following message is shown:
When you create a folder, S3 console creates an object with the above name appended by suffix "/" and that object is displayed as a folder in the S3 console.
So, as expected, with the folders and files above, we get 3 objects created in the bucket with the following keys:
consoleFolderA/
consoleFolderA/consoleFolderB/
consoleFolderA/consoleFolderB/consoleFile.tmp
The result of the SDK upload is a single object with the key javaFolderA/javaFolderB/javaFile.tmp. This makes sense, as we are only putting a single object, not three. However, this results in inconsistencies when listing the contents of a bucket: even though there is only one actual file in each directory, the listing returns multiple objects for the console scenario.
My question is: why is this the case, and how can I achieve consistent behavior? There doesn't seem to be a way to "upload a directory" via the SDK (in quotes because I know there aren't actually folders/directories).
From the CLI I can verify the number of objects and their keys:
C:\Users\avojak>aws s3api list-objects --bucket testbucket
{
  "Contents": [
    {
      "LastModified": "2018-01-02T22:43:55.000Z",
      "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
      "StorageClass": "STANDARD",
      "Key": "consoleFolderA/",
      "Owner": {
        "DisplayName": "foo.bar",
        "ID": "2c401638471162eda7a3b48e41dfb9261d9022b56ce6b00c0ecf544b3e99ca93"
      },
      "Size": 0
    },
    {
      "LastModified": "2018-01-02T22:44:02.000Z",
      "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
      "StorageClass": "STANDARD",
      "Key": "consoleFolderA/consoleFolderB/",
      "Owner": {
        "DisplayName": "foo.bar",
        "ID": "2c401638471162eda7a3b48e41dfb9261d9022b56ce6b00c0ecf544b3e99ca93"
      },
      "Size": 0
    },
    {
      "LastModified": "2018-01-02T22:44:16.000Z",
      "ETag": "\"968fe74fc49094990b0b5c42fc94de19\"",
      "StorageClass": "STANDARD",
      "Key": "consoleFolderA/consoleFolderB/consoleFile.tmp",
      "Owner": {
        "DisplayName": "foo.bar",
        "ID": "2c401638471162eda7a3b48e41dfb9261d9022b56ce6b00c0ecf544b3e99ca93"
      },
      "Size": 69014
    },
    {
      "LastModified": "2018-01-02T22:53:13.000Z",
      "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
      "StorageClass": "STANDARD",
      "Key": "javaFolderA/javaFolderB/javaFile.tmp",
      "Owner": {
        "DisplayName": "foo.bar",
        "ID": "2c401638471162eda7a3b48e41dfb9261d9022b56ce6b00c0ecf544b3e99ca93"
      },
      "Size": 0
    }
  ]
}
If you prefer the console behavior, then you need to emulate it. That means your SDK client needs to create the intermediate 'folders' when necessary. You can do this by creating zero-sized objects whose keys end in a forward slash (if that is your 'folder' separator).
The AWS console behaves this way, allowing you to create 'folders', because many AWS console users are more comfortable with the notion of folders and files than they are with objects (and keys).
It's rare, in my opinion, to need to do this, however. Your SDK clients should be implemented to handle both the presence and absence of these 'folders'. More info here.
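As an illustration, a minimal sketch using the v1 AWS SDK for Java (matching the putObject call in the question); createFolderMarker is a hypothetical helper that emulates what the console's "Create folder" button does:

import java.io.ByteArrayInputStream;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class S3FolderMarkers {

    // Creates a zero-byte object whose key ends in "/" so that listings
    // show the same 'folder' entries the console creates.
    static void createFolderMarker(final AmazonS3 client, final String bucket, final String folderKey) {
        final ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(0); // empty body, just like the console's folder objects
        client.putObject(bucket, folderKey, new ByteArrayInputStream(new byte[0]), metadata);
    }
}

Calling createFolderMarker for "javaFolderA/" and "javaFolderA/javaFolderB/" before uploading javaFile.tmp would make the SDK-created keys list the same way as the console-created ones.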

Apache Ignite on DC/OS Marathon (or any other Java app)

I've been trying to configure Apache Ignite on DC/OS (1.8.7) Marathon using the official docs at http://apacheignite.gridgain.org/docs/mesos-deployment but, short of some hacks, I haven't been able to get it to work by following the docs. One of the core reasons appears to be that the cmd
"cmd": "java -jar ignite-mesos-1.8.0.jar"
will throw the error "sh: java: command not found". This would indicate that java is not on the PATH, but on the Marathon hosts I've verified that java is in fact accessible on the PATH, at least for my regular user.
I suspect that java somehow needs to be added to the PATH of the Mesos container that is trying to run the cmd, but I've been unable to find any documentation on how to set the PATH or default environment variables in the containers that get created (ignite-mesos spawns tasks that need JAVA_HOME set as well, which is also missing in the tasks). For reference, my marathon.json file is below:
{
  "id": "/ignition",
  "cmd": "java -jar ignite-mesos-1.8.0.jar",
  "args": null,
  "user": null,
  "env": {
    "IGNITE_MEMORY_PER_NODE": "2048",
    "IGNITE_NODE_COUNT": "3",
    "IGNITE_VERSION": "1.8.0",
    "MESOS_MASTER_URL": "zk://master.mesos:2181/mesos",
    "IGNITE_RUN_CPU_PER_NODE": "0.1"
  },
  "instances": 0,
  "cpus": 0.25,
  "mem": 2048,
  "disk": 0,
  "gpus": 0,
  "executor": null,
  "constraints": null,
  "fetch": [
    {
      "uri": "http://SERVER_HERE/ignite-mesos-1.8.0.jar"
    }
  ],
  "storeUrls": null,
  "backoffSeconds": 1,
  "backoffFactor": 1.15,
  "maxLaunchDelaySeconds": 3600,
  "container": null,
  "healthChecks": null,
  "readinessChecks": null,
  "dependencies": null,
  "upgradeStrategy": {
    "minimumHealthCapacity": 1,
    "maximumOverCapacity": 1
  },
  "labels": {
    "HAPROXY_GROUP": "external"
  },
  "acceptedResourceRoles": null,
  "ipAddress": null,
  "residency": null,
  "secrets": null,
  "taskKillGracePeriodSeconds": null,
  "portDefinitions": [
    {
      "protocol": "tcp",
      "port": 10108
    }
  ],
  "requirePorts": false
}
Ignite seems to expect a JDK 1.7/1.8 installation on each agent node, and the JAVA_HOME environment variable set accordingly.
Unfortunately, the Mesos framework doesn't seem to be well-maintained, as it still uses Mesos 0.22 libraries.
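If a JDK is already installed at a known location on every agent, one possible workaround (a sketch only, not from the official docs; /usr/lib/jvm/java-8-openjdk is a hypothetical path, adjust it to wherever the JDK actually lives on the agents) is to set JAVA_HOME in the app definition and call java by its absolute path, since Marathon runs cmd through a shell and expands the environment variables it sets:

"cmd": "$JAVA_HOME/bin/java -jar ignite-mesos-1.8.0.jar",
"env": {
  "JAVA_HOME": "/usr/lib/jvm/java-8-openjdk",
  "IGNITE_MEMORY_PER_NODE": "2048",
  "IGNITE_NODE_COUNT": "3",
  "IGNITE_VERSION": "1.8.0",
  "MESOS_MASTER_URL": "zk://master.mesos:2181/mesos",
  "IGNITE_RUN_CPU_PER_NODE": "0.1"
}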

How to run "ant update" in Hybris with only "Update Running System"?

I'm trying to run ant update from the command line after building my Hybris project, but it runs Update Running System, Clear hMC Configuration from Database, Create essential data, and Localize types.
Is there a way to run the ant update command from the command line so that it only selects "Update Running System"?
What parameter can I pass to ant update to only run "Update Running System" and nothing else?
You can use the -DconfigFile=<your file> parameter:
Example:
ant updatesystem -Dtenant=<my tenant> -DconfigFile=path/to/my/config.json
And here is an example of the config.json
{
  "init": "Go",
  "initmethod": "update",
  "clearhmc": "true",
  "essential": "true",
  "localizetypes": "true",
  "solrfacetsearch_sample": "true",
  "hmc_sample": "true",
  "solrfacetsearchhmc_sample": "true",
  "customerreview_sample": "true",
  "voucher_sample": "true",
  "promotions_sample": "true",
  "basecommerce_sample": "true",
  "cms2_sample": "true",
  "cms2lib_sample": "true",
  "ticketsystem_sample": "true",
  "payment_sample": "true",
  "btg_sample": "true",
  "platformhmc_sample": "true",
  "commerceservices_sample": "true",
  "commercewebservicescommons_sample": "true",
  "acceleratorservices_sample": "true",
  "acceleratorcms_sample": "true",
  "yacceleratorfulfilmentprocess_sample": "true",
  "yacceleratorcore_sample": "true",
  ....
  "electronicsstore_importCoreData": [
    "yes"
  ],
  "electronicsstore_importSampleData": [
    "yes"
  ],
  "electronicsstore_activateSolrCronJobs": [
    "yes"
  ],
  "yacceleratortest_createTestData": [
    "yes"
  ],
  "yacceleratorcockpits_importCustomReports": [
    "yes"
  ]
}
As you can see, it's not so easy to write this file by hand. As suggested in Initializing and Updating SAP Hybris Commerce, go to the HAC once, do your configuration in the web page, and click on Dump configuration. It will give you the generated JSON file.
Try with ant updatesystem.
To see the list of possible commands (targets) you can run ant -p. There you can find more about each command.
ant updatesystem [-Dtenant=tenantID -DdryRun=true|false
-DtypeSystemOnly=true|false -DconfigFile=PATH_TO_CONFIG_FILE]
You can also use this command to update the system without importing any impexes.
ant updatesystem -DtypeSystemOnly=true
