In my J2EE-based application, which is deployed on Oracle WebLogic 11g, requests come in through two Oracle HTTP Servers: one is meant for intranet users and the other for Internet users. I want to figure out whether a request is coming from the Internet web server or the intranet web server. Based on this, access to the application will be restricted.
Can we add a request header on the Oracle HTTP Server side that can be checked in a servlet once the request reaches the application server?
The idea is that we add a request header in both web servers, each with a different value. Once a request reaches the application server, we check the value of this header and identify which web server the request came through. Access rights are then granted accordingly to users accessing the application from the Internet or the intranet.
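For illustration, a servlet filter along these lines might look like the sketch below. The header name X-Source-Channel and the values intranet/internet are made up for this example; each Oracle HTTP Server would set the header (for instance with mod_headers' RequestHeader set directive) and should also strip any value already supplied by the client, so it cannot be spoofed.

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.*;

    // Sketch only: each web server is assumed to set X-Source-Channel
    // (hypothetical header name) to either "intranet" or "internet".
    public class SourceChannelFilter implements Filter {

        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            String channel = request.getHeader("X-Source-Channel");

            if ("intranet".equals(channel)) {
                // intranet users get full access
                chain.doFilter(req, res);
            } else if ("internet".equals(channel)) {
                // internet users are restricted, e.g. no admin URLs
                if (request.getRequestURI().contains("/admin")) {
                    response.sendError(HttpServletResponse.SC_FORBIDDEN);
                    return;
                }
                chain.doFilter(req, res);
            } else {
                // header missing: treat the request as untrusted
                response.sendError(HttpServletResponse.SC_FORBIDDEN);
            }
        }

        public void destroy() { }
    }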
Please suggest any other solution that would meet this requirement.
You could separate the traffic using network channels (http://docs.oracle.com/cd/E23943_01/web.1111/e13701/network.htm).
For instance, register a new HTTP channel for the WLS managed server and point one HTTP server at this new port.
Then you could implement a WebLogic connection filter to achieve the expected behavior (http://weblogic-wonders.com/weblogic/2011/03/03/weblogic-connection-filters/).
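If you only need to branch inside the application rather than block connections at the network layer, a simpler variant of the same idea is to look at the local port a request arrived on once each HTTP server points at its own channel. A rough sketch, assuming hypothetical channel ports 7011 (intranet) and 7012 (Internet):

    import java.io.IOException;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletResponse;

    // Sketch only: 7011/7012 are placeholder ports for the two network channels.
    public class ChannelPortFilter implements Filter {

        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            int port = req.getLocalPort(); // port of the channel the request arrived on

            if (port == 7011) {
                req.setAttribute("source.channel", "intranet");
                chain.doFilter(req, res);
            } else if (port == 7012) {
                req.setAttribute("source.channel", "internet");
                chain.doFilter(req, res);
            } else {
                ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
            }
        }

        public void destroy() { }
    }

This is not the connection-filter approach from the link above (which rejects connections before they reach the web container), but it may be enough if the goal is only to vary access rights per channel.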
I'm trying to configure the WSO2 API Manager (version 4.0.0).
When I try to create a REST API and point it to the endpoints, I get a connection error message for the given endpoints. I have hosted the API Manager and the backend services on the same server (the backend services run on Tomcat on the same server, on port 8080).
The API Manager log shows the following message:
ERROR {org.wso2.carbon.apimgt.rest.api.publisher.v1.impl.ApisApiServiceImpl} - Error occurred while sending the HEAD request to the given endpoint url: org.apache.commons.httpclient.ConnectTimeoutException: The host did not accept the connection within timeout of 4000 ms
I would really like to know what has caused the issue.
P.S.: I can access the backend services directly, without any connection issues, using a REST client.
It's difficult to answer the question without knowing the exact details of your deployment and the backend, but let me try. Here is what I think is happening. As you can see, the error is a connection timeout: The host did not accept the connection within timeout of 4000 ms.
Let me explain what happens when you click the Check Endpoint Status button. The browser does not send a request directly to the backend to validate it. The backend URL is passed to the APIM server, and the server performs the validation by sending an HTTP HEAD request to the backend service.
So there can be two causes. The first is that your backend doesn't know how to handle a HEAD request, which prevents it from accepting the request. But given that the error indicates a network issue, I doubt the request even reached the backend.
The second is that your backend is not accessible from the place the API Manager is running. Say you are running the API Manager on server A and accessing it via a browser from server B (your local machine): even though you can reach the backend from server B, it may not be reachable from server A. When I say the backend is not accessible from the API Manager server, I mean it is not accessible with the same URL that was used in the API Manager. It doesn't really matter that it runs on the same server if you are using a DNS name other than localhost to access it. So go to the server the API Manager is running on and send a request using the same URL that was configured in the API Manager, and see whether it's accessible from there.
First, try a curl request after logging in to the server where APIM is running (not from your local machine). Due to firewall rules on that server, the hostname given in the URL may not be reachable. Also try sending a HEAD request, since that is what the validation uses. That should give you an idea of why this is happening.
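If curl is not handy, a small Java snippet run on the APIM host reproduces roughly the same check (the endpoint URL is a placeholder; the 4000 ms timeout mirrors the one in the error message):

    import java.net.HttpURLConnection;
    import java.net.URL;

    // Minimal sketch: send a HEAD request with the same timeout APIM reports.
    public class HeadCheck {
        public static void main(String[] args) throws Exception {
            // Replace with the exact endpoint URL configured in API Manager.
            URL url = new URL("http://your-backend-host:8080/your-service");

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("HEAD");
            conn.setConnectTimeout(4000); // same 4000 ms as in the APIM error
            conn.setReadTimeout(4000);

            System.out.println("HTTP status: " + conn.getResponseCode());
            conn.disconnect();
        }
    }

If this also times out, the problem is network reachability from the APIM host; if it succeeds, look at how the backend handles HEAD requests.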
I want to create a web application that is divided into two parts: one is the client and the other is the server.
Client:
The client part is on a shared server.
The client is a GWT application used only to display data (it contains only UI elements and UI events).
The client application is used by a server to view and present that server's own data.
Server:
The server is a simple Java web service (Restlet).
The server resides behind a firewall.
The server contains the actual data.
There are N such servers.
The server does not contain any view; if a server wants to view data, it uses the GWT client application.
Every server uses the same GWT application to view its own data.
Note:
The client does not contain any address of the server; the server sends the request to view its data.
There is no inbound firewall exception on the server's firewall that would allow access to server data from an outside client.
I need the client and server to communicate through the firewall. Is there any architecture or design pattern for implementing this type of application?
I don't think a firewall brings any new restrictions to a GWT application compared with other types of applications (clients).
If you have the GWT client on one server which makes calls to a different server, you might have some issues due to the same-origin restriction.
This can be resolved in several ways:
- your GWT application has a server-side part which calls the other servers, and your GWT client makes normal RPC / JSON calls to that server-side part (on the same server).
- if you want to call the different server directly from your GWT client, you can use JSONP or the restygwt library (a small JSONP sketch follows).
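For the JSONP option, GWT ships a JsonpRequestBuilder that works across origins as long as the remote service can wrap its JSON response in a callback. A minimal sketch, assuming a placeholder URL:

    import com.google.gwt.core.client.JavaScriptObject;
    import com.google.gwt.jsonp.client.JsonpRequestBuilder;
    import com.google.gwt.user.client.Window;
    import com.google.gwt.user.client.rpc.AsyncCallback;

    // Sketch: fetch JSON from a different origin via JSONP.
    public class DataLoader {

        public void load() {
            JsonpRequestBuilder jsonp = new JsonpRequestBuilder();
            // Placeholder URL; the endpoint must support a JSONP callback parameter.
            jsonp.requestObject("http://other-server.example.com/api/data",
                    new AsyncCallback<JavaScriptObject>() {
                        public void onFailure(Throwable caught) {
                            Window.alert("Call failed: " + caught.getMessage());
                        }
                        public void onSuccess(JavaScriptObject result) {
                            Window.alert("Got data");
                        }
                    });
        }
    }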
We have a number of Jetty http(s) servers, all behind different firewalls. The http servers are at customer sites (not under our control). Opening ports in the firewalls at these sites is not an option. Right now, these servers only serve JSON documents in response to REST requests.
We have web clients that need to interact with a given HTTP server based on a URL parameter or header value.
This seems like a straightforward proxy server situation - except for the firewall.
The approach that I'm currently trying is this:
Have a centralized proxy server (also Jetty-based) that listens for inbound registration requests from the remote HTTP servers. The registration request will take the form of a WebSocket connection, which will be kept alive as long as the remote HTTP server is available. On registration, the proxy server will capture the WebSocket connection and map it to a resource identifier.
The web client will connect to the proxy server and include the resource identifier in the URL or a header.
The proxy server will determine the appropriate WebSocket to use, then pass the request on to the HTTP server. So the request and response will travel over the WebSocket. Once the response is received, it will be returned to the web client.
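As a rough sketch of the registration side, the proxy could accept the incoming WebSocket with Jetty's annotated API (Jetty 9 style) and map a resource identifier to the live session. The endpoint path and the resourceId query parameter are assumptions for this example:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import org.eclipse.jetty.websocket.api.Session;
    import org.eclipse.jetty.websocket.api.annotations.OnWebSocketClose;
    import org.eclipse.jetty.websocket.api.annotations.OnWebSocketConnect;
    import org.eclipse.jetty.websocket.api.annotations.WebSocket;

    // Sketch: remote HTTP servers register by opening a WebSocket such as
    // ws://proxy.example.com/register?resourceId=customer-42 (names are placeholders).
    @WebSocket
    public class RegistrationSocket {

        // resourceId -> live tunnel session, shared with the proxying code
        private static final Map<String, Session> TUNNELS = new ConcurrentHashMap<String, Session>();

        private String resourceId;

        @OnWebSocketConnect
        public void onConnect(Session session) {
            // assumes the registering server always supplies resourceId
            resourceId = session.getUpgradeRequest().getParameterMap()
                    .get("resourceId").get(0);
            TUNNELS.put(resourceId, session);
        }

        @OnWebSocketClose
        public void onClose(int statusCode, String reason) {
            TUNNELS.remove(resourceId);
        }

        public static Session tunnelFor(String resourceId) {
            return TUNNELS.get(resourceId);
        }
    }

The proxy handler would then look up tunnelFor(resourceId) for each incoming web client request and relay the request and response over that session.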
So this is all well and good in theory - what I'm trying to figure out is:
a) is there a better way to achieve this?
b) What's the best way to set up Jetty to do the proxying on the HTTP Server end of the pipe?
I suppose that I could use Jetty's HttpClient, but what I really want to do is just pull the HTTP bytes from the websocket and pipe them directly into the Jetty connector. It doesn't seem to make sense to parse everything out. I suppose that I could open a regular socket connection on localhost, grab the bytes from the websocket, and do it that way - but it seems silly to route through the OS like that (I'm already operating inside the HTTP Server's Jetty environment).
It sure seems like this is the sort of problem that may have already been solved... maybe by using a custom Jetty Connection that works on WebSockets instead of TCP/IP sockets?
Update: as I've been playing with this, it seems like another tricky problem is how to handle request/response behavior (and ideally support muxing over the websocket channel). One potential resource that I've found is the WAMP sub-protocol for websockets: http://wamp.ws/
In case anyone else is looking for an answer to this one: RESTEasy has a mocking framework that can be used to invoke REST functionality without running a full servlet container: http://docs.jboss.org/resteasy/docs/2.0.0.GA/userguide/html_single/index.html#RESTEasy_Server-side_Mock_Framework
This, combined with WAMP, appears to do what I'm looking for.
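For reference, the dispatch-without-a-container part looks roughly like this with RESTEasy's mock classes; the resource class and path here are made up for the example:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;

    import org.jboss.resteasy.core.Dispatcher;
    import org.jboss.resteasy.mock.MockDispatcherFactory;
    import org.jboss.resteasy.mock.MockHttpRequest;
    import org.jboss.resteasy.mock.MockHttpResponse;

    // Sketch: invoke a JAX-RS resource in-process, e.g. with a request rebuilt
    // from bytes pulled off the WebSocket.
    public class InProcessDispatch {

        // Hypothetical resource standing in for the real REST services.
        @Path("/documents")
        public static class DocumentResource {
            @GET
            @Path("/42")
            @Produces("application/json")
            public String getDoc() {
                return "{\"id\": 42}";
            }
        }

        public static void main(String[] args) throws Exception {
            Dispatcher dispatcher = MockDispatcherFactory.createDispatcher();
            dispatcher.getRegistry().addSingletonResource(new DocumentResource());

            MockHttpRequest request = MockHttpRequest.get("/documents/42");
            MockHttpResponse response = new MockHttpResponse();

            dispatcher.invoke(request, response);

            System.out.println(response.getStatus());
            System.out.println(response.getContentAsString());
        }
    }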
I know the scope of this question is very large and it's not really appropriate to ask it here, but I don't know where else to go.
I have a web application (client) and a web application (server). Both are running on Tomcat on two different ports.
Now I want the client to send and receive data to/from the server using HTTPS/SSL, or in better terms, over a secured connection.
I need some guidance/clarity on this. Some questions that I have are:
Should I change some settings in Tomcat so that my server runs on HTTPS?
Should I make changes to the client as well?
How do I establish the connection via HTTPS?
How do I know that data is transferred over HTTPS?
You should set the content type of the JSP page to "contentType/https".
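Independently of the content type, a common approach is to enable an HTTPS connector in the server's Tomcat (so it listens on an https:// port) and then call that URL from the client webapp. As a rough sketch of the calling side, with a placeholder host, port and path, and assuming the server's certificate is trusted by the client JVM (for example via its truststore):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    import javax.net.ssl.HttpsURLConnection;

    // Sketch: client webapp calling the server webapp over HTTPS.
    public class SecureCall {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; 8443 is Tomcat's conventional HTTPS port.
            URL url = new URL("https://localhost:8443/server-app/api/data");

            HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
            conn.setRequestMethod("GET");

            // The negotiated cipher suite is one way to confirm TLS is actually in use.
            System.out.println("Status: " + conn.getResponseCode());
            System.out.println("Cipher suite: " + conn.getCipherSuite());

            BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
            conn.disconnect();
        }
    }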
I am developing a Flex application with the Flex 4.1 SDK and a Java backend (running on GlassFish 3.1 over HTTP). For security reasons I decided to move my authentication process to HTTPS until a session ID is obtained. I therefore changed the filter settings to use SSL for the login and logout pages (just these two pages, for performance reasons: the data sent to the client is large and I do not want to slow the system down). GlassFish forwards these pages to port 8181 (the HTTPS port). Everything is fine on the Java side. However, Flash treats port 8181 as a different domain, and problems arise: due to Flash's same-origin policy it cannot load the secured content. Normally a crossdomain.xml is the solution, but I am accessing content of the same domain through a different port. What would be the solution?
Probably not the best solution, but you could create a subdomain that maps to port 8181 and put a crossdomain.xml there that allows access from the root domain.