Google Cloud Client Library for Java: insertInstance() results in an UnavailableException - java

I am trying to create an instance using the Cloud API Client Library for Java. I have worked my way through the different Builder classes, using the "Equivalent REST" button in the console to see what items I might be missing, but now the call to insertInstance() returns an UnavailableException. Checking in the console, I see that nothing was created. Can anyone help me find what I did wrong? All the URLs seem to be correct: when they are not, I get a helpful exception with detailed errors.
The SETUP_NO_BOT constant is the text of my setup script, which creates a user, copies files from a bucket, that sort of thing. That should not be the problem, however, because then I would expect a running VM with errors in the startup logs.
All the examples I could find either used the older com.google.api.services packages or the com.google.cloud.compute.deprecated packages, and neither uses a comparable class structure.
If I use the same code with a mistake in the URLs (now produced by getUrl() calls) I get a nice error return. This version consistently returns a 503.
private Instance createInstance() {
logger.debug("createInstance(): Creating boot disk based on \"{}\".", getBaseImage());
AttachedDiskInitializeParams diskParams;
if (ifSet(getBaseImageProject())) {
diskParams = AttachedDiskInitializeParams.newBuilder()
.setSourceImage(getUrl(getBaseImageProject(), "images", getBaseImage()))
.setDiskSizeGb("10")
.setDiskType(getUrl(context, "diskTypes", "pd-standard"))
.build();
}
else {
diskParams = AttachedDiskInitializeParams.newBuilder()
.setSourceImage(getUrl("images", getBaseImage()))
.setDiskSizeGb("10")
.setDiskType(getUrl(context, "diskTypes", "pd-standard"))
.build();
}
AttachedDisk disk = AttachedDisk.newBuilder()
.setType("PERSISTENT")
.setBoot(true)
.setMode("READ_WRITE")
.setAutoDelete(true)
.setDeviceName(getName())
.setInitializeParams(diskParams)
.build();
logger.debug("createInstance(): Creating network interface to network \"{}\".", getNetwork());
AccessConfig accessConfig = AccessConfig.newBuilder()
.setName("External NAT")
.setType("ONE_TO_ONE_NAT")
.setNetworkTier("PREMIUM")
.build();
NetworkInterface netIntf = NetworkInterface.newBuilder()
.setNetwork(getUrl(context, "networks", getNetwork()))
.setSubnetwork(getRegionalUrl(context, "subnetworks", "default"))
.addAccessConfigs(accessConfig)
.build();
logger.debug("createInstane(): Creating startup script metadata.");
Items metadataItems = Items.newBuilder()
.setKey("startup-script")
.setValue(SETUP_NO_BOT)
.build();
Metadata metadata = Metadata.newBuilder()
.addItems(metadataItems)
.build();
logger.debug("createInstance(): Create the instance...");
Instance inst = Instance.newBuilder()
.setName(getName())
.setZone(getUrl(context, "zones", context.getZone()))
.setMachineType(getZonalUrl(context, "machineTypes", getMachineType()))
.addDisks(disk)
.setCanIpForward(false)
.addNetworkInterfaces(netIntf)
.setMetadata(metadata)
.build();
return gceUtil.createInstance(inst) ? gceUtil.getInstance(getName()) : null;
}
gceUtil.createInstance is very simple:
/**
* Create an {@link Instance}.
*/
public boolean createInstance(Instance instance) {
logger.debug("createInstance(): project = \"{}\", zone = \"{}\".", getProjectName(), getZoneName());
Operation response = instClient.insertInstance(ProjectZoneName.of(getProjectName(), getZoneName()), instance);
return isHttp2xx(response);
}
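The isHttp2xx() helper used above is not shown in the question. A minimal sketch of what it is assumed to do, based on the HTTP error fields the older com.google.cloud.compute.v1.Operation model exposes (the exact getter and its nullability may differ between client versions):
// Hypothetical sketch of the isHttp2xx() helper referenced above (not shown in the post).
// A failed operation carries an HTTP error status code; no code at all is treated as success.
private boolean isHttp2xx(Operation operation) {
    Integer code = operation.getHttpErrorStatusCode();
    return code == null || (code >= 200 && code < 300);
}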

Related

Use WebClient with custom HttpMessageReader to synchronously read responses

Problem
I have defined a CustomHttpMessageReader (which implements HttpMessageReader<CustomClass>), which is able to read a multipart response from a server and converts the received parts into an object of a specific class. The CustomHttpMessageReader uses internally the DefaultPartHttpMessageReader to actually read/parse the multipart responses.
The CustomHttpMessageReader accumulates the parts read by the DefaultReader and converts them into the desired class CustomClass.
I've created a CustomHttpMessageConverter that does the same thing for a RestTemplate, but I struggle to do the same for a WebClient.
I always get the following Exception:
block()/blockFirst()/blockLast() are blocking, which is not supported in thread reactor-http-nio-2
java.lang.IllegalStateException: block()/blockFirst()/blockLast() are blocking, which is not supported in thread reactor-http-nio-2
at reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:83)
at reactor.core.publisher.Flux.blockFirst(Flux.java:2600)
at com.company.project.deserializer.multipart.CustomHttpMessageReader.readMultipartData(CustomHttpMessageReader.java:116)
at com.company.project.deserializer.multipart.CustomHttpMessageReader.readMono(CustomHttpMessageReader.java:101)
at org.springframework.web.reactive.function.BodyExtractors.lambda$readToMono$14(BodyExtractors.java:211)
at java.base/java.util.Optional.orElseGet(Optional.java:369)
...
Mind you, I'm not interested in running WebClient asynchronously. I'm only future-proofing my application because RestTemplate is apparently only in maintenance mode and the folks at Pivotal/Spring suggest using WebClient instead.
What I Tried
As I understand it, there are threads that are not allowed to be blocked, namely the reactor-http-nio one in the exception. I tried removing Netty from my dependencies so that I can rely solely on Tomcat. That, however, doesn't seem to help, as I get another exception telling me that no suitable ClientHttpConnector exists (thrown by the WebClient.Builder):
No suitable default ClientHttpConnector found
java.lang.IllegalStateException: No suitable default ClientHttpConnector found
at org.springframework.web.reactive.function.client.DefaultWebClientBuilder.initConnector(DefaultWebClientBuilder.java:297)
at org.springframework.web.reactive.function.client.DefaultWebClientBuilder.build(DefaultWebClientBuilder.java:266)
at com.company.project.RestClientUsingWebClient.getWebclient(RestClientUsingWebClient.java:160)
I've tried running my code in a unit test as well as starting a whole Spring context. The result is unfortunately the same.
Setup
To provide a bit more detail, the following are snippets from the classes mentioned earlier. The classes are not shown in full, to make it easier to see what is going on. All necessary methods are implemented (e.g. canRead() in the Reader).
CustomHttpMessageReader
I also included the usage of CustomPart (in addition to CustomClass) in the class just to show that the content of the Part is also read, i.e. blocked.
public class CustomHttpMessageReader implements HttpMessageReader<CustomClass> {
private final DefaultPartHttpMessageReader defaultPartHttpMessageReader = new DefaultPartHttpMessageReader();
@Override
public Flux<CustomClass> read(final ResolvableType elementType, final ReactiveHttpInputMessage message,
final Map<String, Object> hints) {
return Flux.merge(readMono(elementType, message, hints));
}
@Override
public Mono<CustomClass> readMono(final ResolvableType elementType, final ReactiveHttpInputMessage message,
final Map<String, Object> hints) {
final List<CustomPart> customParts = readMultipartData(message);
return convertToCustomClass(customParts);
}
private List<CustomPart> readMultipartData(final ReactiveHttpInputMessage message) {
final ResolvableType resolvableType = ResolvableType.forClass(byte[].class);
return Optional.ofNullable(
defaultPartHttpMessageReader.read(resolvableType, message, Map.of())
.buffer()
.blockFirst()) // <- EXCEPTION IS THROWN HERE!
.orElse(new ArrayList<>())
.stream()
.map(part -> {
final byte[] content = Optional.ofNullable(part.content().blockFirst()) //<- HERE IS ANOTHER BLOCK
.map(DataBuffer::asByteBuffer)
.map(ByteBuffer::array)
.orElse(new byte[]{});
// Here we cherry pick some header fields
return new CustomPart(content, someHeaderFields);
}).collect(Collectors.toList());
}
}
Usage of WebClient
class RestClientUsingWebClient {
/**
* The "Main" Method for our purposes
*/
public Optional<CustomClass> getResource(final String baseUrl, final String id) {
final WebClient webclient = getWebclient(baseUrl);
//curl -X GET "http://BASE_URL/id" -H "accept: multipart/form-data"
return webclient.get()
.uri(uriBuilder -> uriBuilder.path(id).build()).retrieve()
.toEntity(CustomClass.class)
.onErrorResume(NotFound.class, e -> Mono.empty())
.blockOptional() // <- HERE IS ANOTHER BLOCK
.map(ResponseEntity::getBody);
}
//This exists also as a Bean definition
private WebClient getWebclient(final String baseUrl) {
final ExchangeStrategies exchangeStrategies = ExchangeStrategies.builder()
.codecs(codecs -> {
codecs.defaultCodecs().maxInMemorySize(16 * 1024 * 1024);
codecs.customCodecs().register(new CustomHttpMessageReader()); // <- Our custom reader
})
.build();
return WebClient.builder()
.baseUrl(baseUrl)
.exchangeStrategies(exchangeStrategies)
.build();
}
}
Usage of build.gradle
For the sake of completeness, here is what I think is the relevant part of my build.gradle:
plugins {
id 'org.springframework.boot' version '2.7.2'
id 'io.spring.dependency-management' version '1.0.13.RELEASE'
...
}
dependencies {
implementation 'org.springframework.boot:spring-boot-starter-actuator'
implementation 'org.springframework.boot:spring-boot-starter-web' // <- This
implementation 'org.springframework.boot:spring-boot-starter-webflux'
// What I tried:
// implementation ('org.springframework.boot:spring-boot-starter-webflux'){
// exclude group: 'org.springframework.boot', module: 'spring-boot-starter-reactor-netty'
//}
...
}
If we look at the stack trace that you provided, we see these 3 lines:
at reactor.core.publisher.Flux.blockFirst(Flux.java:2600)
at com.company.project.deserializer.multipart.CustomHttpMessageReader.readMultipartData(CustomHttpMessageReader.java:116)
at com.company.project.deserializer.multipart.CustomHttpMessageReader.readMono(CustomHttpMessageReader.java:101)
They should be read from bottom to top. So what do they tell us?
The bottom line tells us that the function readMono on line 101 in the class CustomHttpMessageReader.java was called first.
That function then called the function readMultipartData on line 116 in the same class.
Then the function blockFirst was called on line 2600 in the class Flux.
That's your blocking call.
So we can tell that there is a blocking call in the function readMultipartData.
So why can't we block in that function? Well, if we look at the API of the interface that function is overriding, HttpMessageReader, we can see that the function returns a Mono<T>, which means that the function is an async function.
And if it is async and we block, we might get very, very bad performance.
This interface is used within the Spring WebClient, which is a fully async client.
You can use it in a non-async application, which means you can block outside of the WebClient, but internally it needs to operate completely async if you want it to be as efficient as possible.
So the bottom line is that you should not block in any function that returns a Mono or a Flux.
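To avoid the block, the reading itself can stay inside the reactive pipeline. A rough, non-authoritative sketch of a non-blocking readMono(), under the same assumptions as the reader above (CustomPart, CustomClass, someHeaderFields and convertToCustomClass are the poster's own types and fields, and convertToCustomClass is assumed to return a Mono<CustomClass>; DataBufferUtils comes from org.springframework.core.io.buffer):
@Override
public Mono<CustomClass> readMono(final ResolvableType elementType, final ReactiveHttpInputMessage message,
        final Map<String, Object> hints) {
    final ResolvableType partType = ResolvableType.forClass(byte[].class);
    return defaultPartHttpMessageReader.read(partType, message, Map.of())
            // Join each part's DataBuffer stream into one buffer and copy the bytes out,
            // instead of calling part.content().blockFirst().
            .flatMap(part -> DataBufferUtils.join(part.content())
                    .map(dataBuffer -> {
                        final byte[] content = new byte[dataBuffer.readableByteCount()];
                        dataBuffer.read(content);
                        DataBufferUtils.release(dataBuffer);
                        // Here we cherry pick some header fields, as in the original reader.
                        return new CustomPart(content, someHeaderFields);
                    }))
            .collectList()
            // The accumulated parts are converted inside the pipeline, so no blockFirst() is needed.
            .flatMap(this::convertToCustomClass);
}
With this shape, the caller (e.g. the WebClient in getResource()) can still call blockOptional() at the edge of the application, while the reader itself stays non-blocking.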

Jira REST API with proxy (Java)

I would like to use the Jira REST Client API for Java in an application that needs to go through a proxy to access the desired Jira instance. Unfortunately I didn't find a way to set it when using the given factory from that library:
JiraRestClientFactory factory = new AsynchronousJiraRestClientFactory();
String authentication = Base64.getEncoder().encodeToString("username:password".getBytes());
return factory.createWithAuthenticationHandler(URI.create(JIRA_URL), new BasicAuthenticationHandler(authentication));
How can we use the Jira API and set a proxy?
The only solution I found on the internet was to set it with system properties (see solution 1 below). Unfortunately, that did not fit my requirements: in the company I work for there are multiple proxies, and depending on the service to call, a different proxy configuration has to be used. In that case, I cannot set the system properties without breaking all calls to the other services that need another proxy.
Nevertheless, I was able to find a way to set it by re-implementing some classes (see solution 2).
Important limitation: the proxy server must not ask for credentials.
Context
Maybe as context before, I created a class containing proxy configuration:
@Data
@AllArgsConstructor
public class ProxyConfiguration {
public static final Pattern PROXY_PATTERN = Pattern.compile("(https?):\\/\\/(.*):(\\d+)");
private String scheme;
private String host;
private Integer port;
public static ProxyConfiguration fromPath(String path) {
Matcher matcher = PROXY_PATTERN.matcher(path);
if (matcher.find()) {
return new ProxyConfiguration(matcher.group(1), matcher.group(2), toInt(matcher.group(3)));
}
return null;
}
public String getPath() {
return scheme + "://" + host + ":" + port;
}
}
Set system properties for proxy
Call the following method with your proxy configuration at the start of the application or before using the Jira REST API:
public static void configureProxy(ProxyConfiguration proxy) {
if (proxy != null) {
System.getProperties().setProperty("http.proxyHost", proxy.getHost());
System.getProperties().setProperty("http.proxyPort", proxy.getPort().toString());
System.getProperties().setProperty("https.proxyHost", proxy.getHost());
System.getProperties().setProperty("https.proxyPort", proxy.getPort().toString());
}
}
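For completeness, a small usage sketch tying the two pieces above together (the proxy URL is a placeholder, not from the original post):
// Parse the proxy URL and set the JVM-wide proxy properties before building the Jira client.
ProxyConfiguration proxy = ProxyConfiguration.fromPath("http://proxy.internal.example:3128");
configureProxy(proxy);
JiraRestClientFactory factory = new AsynchronousJiraRestClientFactory();
String authentication = Base64.getEncoder().encodeToString("username:password".getBytes());
JiraRestClient client = factory.createWithAuthenticationHandler(URI.create(JIRA_URL), new BasicAuthenticationHandler(authentication));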
Re-implement AsynchronousHttpClientFactory
Unfortunately, as this class has many private inner classes and methods, you will have to do an ugly copy paste and change the following code to give the wanted proxy configuration:
public DisposableHttpClient createClient(URI serverUri, ProxyConfiguration proxy, AuthenticationHandler authenticationHandler) {
HttpClientOptions options = new HttpClientOptions();
if (proxy != null) {
options.setProxyOptions(new ProxyOptions.ProxyOptionsBuilder()
.withProxy(HTTP, new Host(proxy.getHost(), proxy.getPort()))
.withProxy(HTTPS, new Host(proxy.getHost(), proxy.getPort()))
.build());
}
DefaultHttpClientFactory<?> defaultHttpClientFactory = ...
}
You can then use it (in the following example, my re-implementation of AsynchronousHttpClientFactory is called AtlassianHttpClientFactory):
URI url = URI.create(JIRA_URL);
String authentication = Base64.getEncoder().encodeToString("username:password".getBytes());
DisposableHttpClient client = new AtlassianHttpClientFactory().createClient(url, proxy, new BasicAuthenticationHandler(authentication));
return new AsynchronousJiraRestClient(url, client);
Note that after all those problems, I also decided to write a Jira client library supporting authentication, proxy, multiple HTTP clients and working asynchronously with CompletableFuture.

How to resolve cleartext not permitted in AOSP

I know cleartext traffic has been disabled by default by Android. May I know where exactly I can enable it in AOSP, instead of adding network config files to all packages?
Where can I permit it by adding the line below?
cleartextTrafficPermitted="true"
external/okhttp/android/main/java/com/squareup/okhttp/Handler
public static OkUrlFactory createHttpOkUrlFactory(Proxy proxy) {
OkHttpClient client = new OkHttpClient();
// Explicitly set the timeouts to infinity.
client.setConnectTimeout(0, TimeUnit.MILLISECONDS);
client.setReadTimeout(0, TimeUnit.MILLISECONDS);
client.setWriteTimeout(0, TimeUnit.MILLISECONDS);
// Set the default (same protocol) redirect behavior. The default can be overridden for
// each instance using HttpURLConnection.setInstanceFollowRedirects().
client.setFollowRedirects(HttpURLConnection.getFollowRedirects());
// Do not permit http -> https and https -> http redirects.
client.setFollowSslRedirects(false);
// Permit cleartext traffic only (this is a handler for HTTP, not for HTTPS).
client.setConnectionSpecs(CLEARTEXT_ONLY);
// When we do not set the Proxy explicitly OkHttp picks up a ProxySelector using
// ProxySelector.getDefault().
if (proxy != null) {
client.setProxy(proxy);
}
// OkHttp requires that we explicitly set the response cache.
OkUrlFactory okUrlFactory = new OkUrlFactory(client);
// Use the installed NetworkSecurityPolicy to determine which requests are permitted over
// http.
OkUrlFactories.setUrlFilter(okUrlFactory, CLEARTEXT_FILTER);
ResponseCache responseCache = ResponseCache.getDefault();
if (responseCache != null) {
AndroidInternal.setResponseCache(okUrlFactory, responseCache);
}
return okUrlFactory;
}
private static final class CleartextURLFilter implements URLFilter {
@Override
public void checkURLPermitted(URL url) throws IOException {
String host = url.getHost();
if (!NetworkSecurityPolicy.getInstance().isCleartextTrafficPermitted(host)) {
throw new IOException("Cleartext HTTP traffic to " + host + " not permitted");
}
}
}
In any app, if I use http, I get the error: Cleartext HTTP traffic to 124.60.5.6 not permitted
So instead of changing this in the apps, is it possible to change it in AOSP?
It seems like calling
builder.setCleartextTrafficPermitted(true);
at line 189 is sufficient, since you are using older applications which probably don't have any network config and only use the default one.
source: https://android.googlesource.com/platform/frameworks/base.git/+/refs/heads/master/core/java/android/security/net/config/NetworkSecurityConfig.java#189
Old Answer
I hope you have done your homework on the implications on bypassing a security feature. That being said, the class responsible for the exception is in framework with package android.security.net.config and class responsible is NetworkSecurityConfig.
As of writing this answer, the static Builder class has a property boolean mCleartextTrafficPermittedSet, which is set to false by default. You might have to default it to true, which makes the method getEffectiveCleartextTrafficPermitted() in the NetworkSecurityConfig class return mCleartextTrafficPermitted, which in turn returns DEFAULT_CLEARTEXT_TRAFFIC_PERMITTED, which is set to true by default.
The flow would be
getEffectiveCleartextTrafficPermitted() → mCleartextTrafficPermitted → DEFAULT_CLEARTEXT_TRAFFIC_PERMITTED → true by default.
If this is all confusing, call setCleartextTrafficPermitted(true) on the builder whenever the builder is created.
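In code terms, that would look roughly like this wherever the framework builds the default config (the Builder and setCleartextTrafficPermitted() come from the AOSP class linked below; the exact call sites differ between branches, so treat this as a sketch):
// Sketch: force cleartext on when the default builder is created,
// e.g. in NetworkSecurityConfig.getDefaultBuilder() or in ManifestConfigSource.
NetworkSecurityConfig.Builder builder = new NetworkSecurityConfig.Builder()
        .setCleartextTrafficPermitted(true);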
The source for the class is available here: https://android.googlesource.com/platform/frameworks/base.git/+/refs/heads/master/core/java/android/security/net/config/NetworkSecurityConfig.java
Note: I have not tried this and merely gone through the source and inferred the above. You are welcome to try and correct me if something is wrong.
Edit with an update from @Shadow:
In NetworkSecurityConfig, change the boolean variable from true to false.
//public static final boolean DEFAULT_CLEARTEXT_TRAFFIC_PERMITTED = true;
public static final boolean DEFAULT_CLEARTEXT_TRAFFIC_PERMITTED = false;
Also, in ManifestConfigSource, comment out the lines below,
/*boolean usesCleartextTraffic =
(mApplicationInfo.flags & ApplicationInfo.FLAG_USES_CLEARTEXT_TRAFFIC) != 0
&& mApplicationInfo.targetSandboxVersion < 2;*/
and directly set usesCleartextTraffic to true:
boolean usesCleartextTraffic = true;
You need to go to AndroidManifest.xml and add
<application
android:usesCleartextTraffic="true"
android:networkSecurityConfig="@xml/network_security_config"
....
</application>
I strongly advise that you create the network_security_config to only allow your domain and subdomain. Here is a quick tutorial

Where is the correct place to initialize an embedded database in a DropWizard application?

I am building a DropWizard based application that will have an embedded Derby database.
Where in the Dropwizard framework would be the appropriate place to test whether the database exists and, if not, create it?
Right now I am configuring the database in the DataSourceFactory in the .yml file that is provided by the dropwizard-db module and that is not available until the run() method is called.
I am using Guice as well in this application, so solutions involving Guice will be accepted as well.
Is there an earlier more appropriate place to test for and create the database?
As asked, I am going to provide my solution. For backstory, I am using guicey (https://github.com/xvik/dropwizard-guicey), which in my humble opinion is a fantastic framework. I use that for integrating with Guice; however, I expect most implementations will be similar and can be adapted. In addition to this, I also use Liquibase for database checking and consistency.
Firstly, during initialisation, I am creating a bundle that does my verification for me. This bundle is a guicey concept. It will automatically be run during guice initialisation. This bundle looks like this:
/**
* Verifying all changelog files separately before application startup.
*
* Will log roll forward and roll back SQL if needed
*
* @author artur
*
*/
public class DBChangelogVerifier extends ComparableGuiceyBundle {
private static final String ID = "BUNDLEID";
private static final Logger log = Logger.getLogger(DBChangelogVerifier.class);
private List<LiquibaseConfiguration> configs = new ArrayList<>();
public void addConfig(LiquibaseConfiguration configuration) {
this.configs.add(configuration);
}
/**
* Attempts to verify all changelog definitions with the provided datasource
* #param ds
*/
public void verify(DataSource ds) {
boolean throwException = false;
Contexts contexts = new Contexts("");
for(LiquibaseConfiguration c : configs) {
try(Connection con = ds.getConnection()) {
Database db = DatabaseFactory.getInstance().findCorrectDatabaseImplementation(new JdbcConnection(con));
db.setDatabaseChangeLogLockTableName(c.changeLogLockTableName());
db.setDatabaseChangeLogTableName(c.changeLogTableName());
Liquibase liquibase = new ShureviewNonCreationLiquibase(c.liquibaseResource(), new ClassLoaderResourceAccessor(), db);
liquibase.getLog();
liquibase.validate();
List<ChangeSet> listUnrunChangeSets = liquibase.listUnrunChangeSets(contexts, new LabelExpression());
if(!listUnrunChangeSets.isEmpty()) {
StringWriter writer = new StringWriter();
liquibase.update(contexts, writer);
liquibase.futureRollbackSQL(writer);
log.warn(writer.toString());
throwException = true;
}
} catch (SQLException | LiquibaseException e) {
throw new RuntimeException("Failed to verify database.", e);
}
}
if(throwException){
throw new RuntimeException("Unrun changesets in chengelog.");
}
}
/**
* Using init to process and validate to avoid starting the application in case of errors.
*/
@Override
public void initialize(GuiceyBootstrap bootstrap) {
Configuration configuration = bootstrap.configuration();
if(configuration instanceof DatasourceConfiguration ) {
DatasourceConfiguration dsConf = (DatasourceConfiguration) configuration;
ManagedDataSource ds = dsConf.getDatasourceFactory().build(bootstrap.environment().metrics(), "MyDataSource");
verify(ds);
}
}
@Override
public String getId() {
return ID;
}
}
Note that ComparableGuiceyBundle is an interface I added so I can impose an order on the bundles and their init functions.
This bundle will automatically be initialised by guicey and the init method will be called, supplying me with a datasource. In the init (on the same thread) I am calling verify. This means that if verification fails, the startup of my application fails and it refuses to finish starting.
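ComparableGuiceyBundle itself is not shown in the answer; it is described as an interface that adds ordering, while the verifier above uses extends, so it may in fact be an abstract class. A minimal sketch of the interface variant (purely an assumption; the GuiceyBundle package path depends on the guicey version):
import ru.vyarus.dropwizard.guice.module.installer.bundle.GuiceyBundle;
// Hypothetical sketch of the ComparableGuiceyBundle used by the verifier above.
public interface ComparableGuiceyBundle extends GuiceyBundle, Comparable<ComparableGuiceyBundle> {
    /** Unique id of the bundle, e.g. the BUNDLEID constant in the verifier. */
    String getId();
    /** Lower values are initialised first; the DB verifier would return the lowest value. */
    default int getOrder() {
        return 0;
    }
    @Override
    default int compareTo(ComparableGuiceyBundle other) {
        return Integer.compare(getOrder(), other.getOrder());
    }
}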
In my startup code, I simply add this bundle to the Guicey configuration so that Guice can be aware of it:
// add all bundles to the bundles variable including the Liquibase bundle.
// registers guice with dropwizard
bootstrap.addBundle(GuiceBundle.<EngineConfigurationImpl>builder()
.enableAutoConfig("my.package")
.searchCommands(true)
.bundles(bundles.toArray( new GuiceyBundle[0]))
.modules(getConfigurationModule(), new CoreModule())
.build()
);
That is all I need to do. Guicey takes care of the rest. During application startup it will initialise all bundles passed to it. Because the bundles are comparable, the bundle verifying my database is the first one and will be executed first. Only if that bundle starts up successfully will the other ones start up as well.
For the liquibase part:
public void verify(DataSource ds) {
boolean throwException = false;
Contexts contexts = new Contexts("");
for(LiquibaseConfiguration c : configs) {
try(Connection con = ds.getConnection()) {
Database db = DatabaseFactory.getInstance().findCorrectDatabaseImplementation(new JdbcConnection(con));
db.setDatabaseChangeLogLockTableName(c.changeLogLockTableName());
db.setDatabaseChangeLogTableName(c.changeLogTableName());
Liquibase liquibase = new ShureviewNonCreationLiquibase(c.liquibaseResource(), new ClassLoaderResourceAccessor(), db);
liquibase.getLog();
liquibase.validate();
List<ChangeSet> listUnrunChangeSets = liquibase.listUnrunChangeSets(contexts, new LabelExpression());
if(!listUnrunChangeSets.isEmpty()) {
StringWriter writer = new StringWriter();
liquibase.update(contexts, writer);
liquibase.futureRollbackSQL(writer);
log.warn(writer.toString());
throwException = true;
}
} catch (SQLException | LiquibaseException e) {
throw new RuntimeException("Failed to verify database.", e);
}
}
if(throwException){
throw new RuntimeException("Unrun changesets in chengelog.");
}
}
As you can see from my setup, I can have multiple changelog configurations that can be checked. In my startup code I look them up and add them to the bundle.
Liquibase will choose the correct database for you. If no database is available it will error. If the connection isn't up, it will error.
If it finds unrun changesets, it will print out roll-forward and rollback SQL. If the md5sum isn't correct, it will print that. In any case, if the database isn't consistent with the changesets, it will refuse to start up.
Now in my case, I do not want Liquibase to create anything. It is a pure validation process. However, Liquibase does give you the option to run all changesets, create tables, etc. You can read about it in the docs; it's fairly straightforward.
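If you do want Liquibase to apply the changesets at startup rather than only validate, the loop body in verify() only needs a small change; a sketch using the plain Liquibase class instead of the answer's non-creating subclass:
// Sketch: apply pending changesets instead of only reporting them.
// Uses the same Database/connection setup as the verify() method above.
Liquibase liquibase = new Liquibase(c.liquibaseResource(), new ClassLoaderResourceAccessor(), db);
liquibase.validate();
// update() creates the DATABASECHANGELOG/DATABASECHANGELOGLOCK tables if needed
// and runs all unrun changesets for the given contexts.
liquibase.update(contexts, new LabelExpression());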
This approach pretty much integrates liquibase with a normal startup, rather than using the database commands with dropwizard to execute them manually.
I hope that helps, let me know if you have any questions.
Artur

Send client info to EJBs

I need to add certain functionality to an existing EJB project.
Specifically, the client info, such as the IP address, login credentials (who is connected) and the client application name.
My bean is stateless, so I worry there is an issue with such an approach.
My client code currently has the following:
private static MySession getmySession() throws RemoteException {
if(mySession != null) return mySession; //mySession is a private variable
try {
Properties h = new Properties();
h.put(Context.INITIAL_CONTEXT_FACTORY, contextFactory);
h.put(Context.PROVIDER_URL, serverUrl ); //t3://localhost
InitialContext ctx = new InitialContext(h);
mySessionHome home = (mySessionHome) ctx.lookup( "mySessionEJB" );
mySession = home.create();
return mySession;
} catch(NamingException ne) {
throw new RemoteException(ne.getMessage());
} catch(CreateException ce) {
throw new RemoteException(ce.getMessage());
}
}
Ideally, I would like 'mySession' to know about the client at the point it is returned.
If that is not possible,
I would like to send the client info at the time a particular method of MySession is called.
Somewhere in this code
public static List getAllMembers() throws RemoteException, CatalogException
{
getmySession();
List list = mySession.getAllMembers();
return list;
}
There are quite a few such methods, so this is less desirable, but I will take it if it solves the task.
At the end of the day, when getAllMembers() executes on the server, I want to know which client called it (there can be many different clients, including web services).
Thanks
First thing - what are you doing with the client information? If you're planning to use it for auditing, this sounds like a perfect use for Interceptors!
The EJB way to access user information is via the user's Principal, and there's no problem using this in a stateless bean. You may find that this doesn't get all the information you would like - this answer suggests getting the user IP isn't entirely supported.
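For the Interceptor idea, a minimal sketch of an auditing interceptor (EJB 3 style; the class name and logging are illustrative only, not from the question):
import java.security.Principal;
import javax.annotation.Resource;
import javax.ejb.SessionContext;
import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;
// Bind it to the bean with @Interceptors(AuditInterceptor.class) or via ejb-jar.xml.
public class AuditInterceptor {
    @Resource
    private SessionContext sessionContext;
    @AroundInvoke
    public Object audit(InvocationContext ctx) throws Exception {
        // The caller Principal is the standard EJB way to find out who is calling.
        Principal caller = sessionContext.getCallerPrincipal();
        System.out.println("Method " + ctx.getMethod().getName() + " called by " + caller.getName());
        return ctx.proceed();
    }
}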
