I have been tasked with writing an application that lets users place calls to Cisco Unified Callmanager 8.6. The contact list will not be provided by the UCM. It will be provided elsewhere.
I find both the documentation and the examples provided by Cisco to be lacking, and working examples from third parties are equally hard to come by.
My hope is that someone else has done something similar to this before me.
The application gets the numbers to call from a database and lets the user click on the contact he or she wants to call. The destination number should then be sent to the phone. Basically, instead of having to dial a number, the user clicks a contact, the application sends the destination to the phone or UCM, and the user takes over from there.
Looking at Cisco's makecall.java and trying it out, it seems simple to actually place a call using this API.
I have started out by using the example found at http://blog.nominet.org.uk/tech/2008/01/25/experiments-with-jtapi-part-1-making-a-call/ but I believe this piece of code to be insufficient to place a call. I may however be wrong.
Could anyone point me in the right direction here? I believe my requirements are simple and should be easy to implement. If more information is needed, I will be happy to provide it.
This was months ago, but it still might help you somewhat. I was able to create a test scenario:
protected CiscoJtapiPeer peer;
protected CiscoProvider provider;
// ...
peer = (CiscoJtapiPeer) JtapiPeerFactory.getJtapiPeer(null);
provider = (CiscoProvider) peer.getProvider(cucmURL);
/* cucmURL has the format:
"192.168.0.20;login=myuser;passwd=mypasswd"
where the username is an Application User in Cisco Unified Communications
Manager. On my system, it has the following permissions; I don't know whether all
of them are required:
Standard AXL Users
Standard Audit Users
Standard CCM End Users
Standard CCM Phone Administration
Standard CCM Phone and Users Administration
Standard CCM Read Only
Standard CCM Super Users
Standard CTI Allow Call Monitoring
Standard CTI Allow Call Park Monitoring
Standard CTI Allow Control of All Devices
Standard CTI Allow Control of Phone supporting Connected Xfer and...
Standard CTI Enabled
Standard CTI Secure Connection
Standard RealtimeAndTraceCollection
Standard TabSyncUser
You then add an observer to the provider in order to know when the provider
object is ready for further interaction. You'll receive a "ProvInServiceEv" event in the event list.
*/
provider.addObserver(providerObserver);
/* Wait until the event has come up */
// Create a sample call:
CiscoTerminal term = provider.createTerminal("your_sep_id_here");
Call call = provider.createCall();
call.connect(term, term.getAddresses()[0], "your_phone_number_to_call");
term is used as the "source" from which the call is started. term.getAddresses()[0] just gets the first phone number associated with that "source" phone. "your_phone_number_to_call" is then dialed.
One more note: it does not work the other way round. You cannot call provider.getAddress("phonenumber") first, because the phone numbers apparently aren't loaded by the provider class before any terminal is connected to it.
This was tested on a CUCM 8.6.2 and Java 7.
I used this code in my project; it works correctly:
// Wait until the provider is in service before doing anything else
final Condition inService = new Condition();
provider.addObserver(new ProviderObserver() {
    public void providerChangedEvent(ProvEv[] eventList) {
        if (eventList == null) {
            return;
        }
        for (int i = 0; i < eventList.length; ++i) {
            if (eventList[i] instanceof ProvInServiceEv) {
                inService.set();
            }
        }
    }
});
inService.waitTrue();

// Place the call from the source address to the destination number
Address srcAddr = provider.getAddress(src);
CallObserver co = new CallObserver() {
    public void callChangedEvent(CallEv[] eventList) {
        // react to call state changes here if needed
    }
};
srcAddr.addCallObserver(co);
Call call = provider.createCall();
call.connect(srcAddr.getTerminals()[0], srcAddr, dst);
src - the phone you are calling from
dst - the phone you are calling to
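The Condition class used above is a small synchronization helper from Cisco's JTAPI sample code rather than a standard JTAPI class. In case you don't have the samples at hand, a minimal sketch of such a helper (its name and behaviour are assumed from the usage above) could look like this:

// Minimal wait/notify helper, assumed to behave like the Condition class
// used in Cisco's JTAPI samples: waitTrue() blocks until set() is called.
public class Condition {
    private boolean value = false;

    public synchronized void set() {
        value = true;
        notifyAll();
    }

    public synchronized void waitTrue() {
        while (!value) {
            try {
                wait();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}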
I'm working with Pupil Labs, a huge open-source project for eye/pupil tracking. The entire code is written in Python. The so-called Pupil Remote is based on ZeroMQ.
If I run the Filter Messages script, everything is fine. For my purposes I need to "translate" Filter Messages into Java, because I created an Android app that should take over the role of the Python client.
Here's what I've done so far:
import android.annotation.SuppressLint;
import org.zeromq.ZMQ;
import java.nio.charset.Charset;
import static java.lang.Thread.sleep;
public class ZeroMQClient {

    @SuppressLint("NewApi")
    public static void requestGazeData() {
        ZMQ.Context context = ZMQ.context(1);
        ZMQ.Socket subscriber = context.socket(ZMQ.SUB);

        System.out.println("Connecting to server...");
        subscriber.connect("tcp://xxx.x.x.x:50020");
        System.out.println("Connected");

        String gaze = "gaze";
        subscriber.subscribe(gaze.getBytes(Charset.forName("UTF-8")));

        try {
            while (true) {
                String msg = subscriber.recvStr();
                System.out.println(msg);
            }
        } finally {
            // close the socket and context on the way out instead of inside the loop
            subscriber.close();
            context.term();
        }
    }
}
Now, as you can probably tell from the fact that I'm asking, nothing happens; I don't receive any data from the Pupil Labs server. I oriented myself on this post, but unfortunately it didn't work out for me. The IP address and port are also the same as on the server. It works neither locally nor remotely.
I'd be happy about any answer, since I'm stuck at this.
My implementation turned out to be correct; the actual issue was the firewall, which simply blocked the connection. By posting my solution I hope to help future visitors of this question.
The final solution, after debugging the root-cause issue, is below.
First things first: you have to set a subscription policy.
ZeroMQ expects each SUB side to first explicitly say what it wants to receive from the PUB side ( yes, what it subscribes to ).
Like your mailbox will never get newspapers in without first subscribing to any. :o)
So set up an empty string "" subscription on the subscriber and you are done:
// String filterPermitANY = ""; // WAS AN EXAMPLE TO TEST
// subscriber.subscribe( filterPermitANY.getBytes() );// IF PUB.send()-s ANY
String gaze = "gaze"; // WAS ON TOPIC
subscriber.subscribe( gaze.getBytes() ); //
Voilà.
Having zero warranty about which Python version is running on the opposite side, some tweaking of the string-representation matching of topics may be needed...
( It is also recommended to set LINGER to 1, which prevents hanging terminations, and this is also the best moment to turn the process into using a non-blocking .poll() + .recv( ..., ZMQ_DONTWAIT ) in a soft-realtime maintained event loop; a sketch follows below. )
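A minimal sketch of those two suggestions with the Java ZeroMQ binding used above (assuming the common JeroMQ/jzmq API: setLinger(), a Poller and recvStr() with ZMQ.DONTWAIT; the 100 ms timeout is just a placeholder):

subscriber.setLinger(1); // don't hang on close if messages are still queued

// non-blocking receive loop driven by a poller instead of a blocking recv()
ZMQ.Poller poller = context.poller(1);
poller.register(subscriber, ZMQ.Poller.POLLIN);

while (!Thread.currentThread().isInterrupted()) {
    poller.poll(100); // wait at most 100 ms for incoming data
    if (poller.pollin(0)) {
        String msg = subscriber.recvStr(ZMQ.DONTWAIT);
        if (msg != null) {
            System.out.println(msg);
        }
    }
    // other soft-realtime work can run here between polls
}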
[ 1 ] We have confirmed that the Android/ZeroMQ side is working fine
when the PUB side is mocked by a plain Python PUB infinite sender and the Android SUB side is subscribed with String filterPermitANY = "";
This makes the earlier claim "It's an issue from the Android side" void, if not misleading.
[ 2 ] Next comes the question: why does it still not work?
And the answer is: because the code designed above does not follow the published principles of how to connect to and use the Pupil Labs API.
A careful reader will notice that the SUB side ( be it an Android, Python or any other implementation of such a peer ) does not connect to the Pupil Labs API on port :50020, but on another port, which first has to be asked for via a separate dialogue held over a REQ/REP formal communication archetype ( lines 13/14/15+19 ).
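In practice that means the SUB side first asks Pupil Remote (the REQ/REP endpoint on :50020) which port to subscribe on, and only then connects the subscriber there. A rough sketch in Java, reusing the imports and the placeholder IP address from the question, and assuming the documented Pupil Remote "SUB_PORT" request:

// Ask Pupil Remote on :50020 (a REQ/REP endpoint) which port the actual
// PUB side publishes on, then connect and subscribe the SUB socket there.
ZMQ.Context context = ZMQ.context(1);

ZMQ.Socket requester = context.socket(ZMQ.REQ);
requester.connect("tcp://xxx.x.x.x:50020");
requester.send("SUB_PORT".getBytes(Charset.forName("UTF-8")), 0);
String subPort = requester.recvStr();   // e.g. "50021"
requester.close();

ZMQ.Socket subscriber = context.socket(ZMQ.SUB);
subscriber.connect("tcp://xxx.x.x.x:" + subPort);
subscriber.subscribe("gaze".getBytes(Charset.forName("UTF-8")));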
Epilogue
Knocking on a wrong door will never make the intended interview happen.
One first has to ask which door to knock on next, so as to get the Pupil Labs API into the game.
I would like to forward emails from my Lotus Notes inbox to my gmail account.
Lotus Notes rules and agents are disabled on our server, so I developed external application for that.
I am using document.send method and mail successfully arrives to my gmail box.
The only problem is that the email is often also duplicated in my Lotus Notes inbox.
I just found that the reason for this is the "CC" and "BCC" fields, which I don't clean up.
However, I am looking for a way to forward the email as it is - that is, keep the original CC, BCC and TO fields - exactly the same way a forwarding agent would do it.
I am using "IBM Notes 9" on Windows 7 64 bit.
I've prepared a code sample that demonstrates what I am doing.
package com.example;

import lotus.domino.*;

public class TestMailForwarder {

    public static void main(String[] args) throws NotesException {
        NotesThread.sinitThread();
        try {
            Session notesSession = NotesFactory.createSession(
                    (String) null, (String) null, Consts.NOTES_PASSWORD);
            DbDirectory dir = notesSession.getDbDirectory(Consts.NOTES_SERVER);
            Database mailDb = dir.openDatabaseByReplicaID(Consts.MAILDB_REPLICA_ID);
            forwardAllEmails(mailDb);
        } finally {
            NotesThread.stermThread();
        }
    }

    private static void forwardAllEmails(Database mailDb) throws NotesException {
        View inbox = mailDb.getView("$Inbox");
        //noinspection LoopStatementThatDoesntLoop
        for (Document document = inbox.getFirstDocument();
                null != document;
                document = inbox.getNextDocument(document)) {
            document.send(Consts.GMAIL_ADDRESS);
            break;
        }
    }
}
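For completeness: the duplicate goes away if the copy fields are removed before sending, but that is exactly what I want to avoid, since it throws away the original header information. A sketch of that workaround (using the standard Notes mail item names):

// Workaround sketch: strip the copy fields before sending so the message is
// not redelivered to the local inbox. This loses the original CC/BCC data.
private static void forwardWithoutCopies(Document document) throws NotesException {
    document.removeItem("CopyTo");
    document.removeItem("BlindCopyTo");
    document.send(Consts.GMAIL_ADDRESS);
}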
Instead of trying to send the messages to your Gmail account, why not upload them using Gmail's IMAP interface? You would need to get each message as MIME content - which it probably already is for externally received emails - and then push it to Gmail.
I don't have a ready code sample, just one for the opposite direction (pulling Gmail into Notes), but you should be able to use that as a starting point.
A code sample for the MIME conversion is in an IBM Technote.
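To illustrate the IMAP upload part, a rough sketch with JavaMail could look like the following (the host name, credentials, and the assumption that the message is already available as a MimeMessage are placeholders of mine, not from the Technote):

import java.util.Properties;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Store;
import javax.mail.internet.MimeMessage;

public class GmailImapUploader {
    // Appends an already-built MIME message to the Gmail inbox over IMAP.
    public static void upload(MimeMessage mimeMessage) throws Exception {
        Properties props = new Properties();
        props.put("mail.store.protocol", "imaps");
        Session session = Session.getInstance(props);

        Store store = session.getStore("imaps");
        store.connect("imap.gmail.com", "you@gmail.com", "app-password");
        try {
            Folder inbox = store.getFolder("INBOX");
            inbox.open(Folder.READ_WRITE);
            inbox.appendMessages(new Message[] { mimeMessage });
            inbox.close(false);
        } finally {
            store.close();
        }
    }
}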
Hope that helps
You can't do a transparent forward with code running at the client level. Pure SMTP systems do it by preserving the RFC-822 header content while altering the RFC-821 RCPT TO data. Domino does not give client-level code independent control over these; it just uses the SendTo, CopyTo, and BlindCopyTo items. (There are some tricks that mail management and archiving vendors play in order to do things like this, but they require special changes to the Domino server's router configuration, and software on the other end as well.)
Another way of accomplishing this (in response to the question you asked in your comment) would be to have your Java code make a direct connection to the Gmail SMTP servers. I'm not sure how easy that is. A comment on this question states that the JavaMail API allows you to control the RCPT TO separately from the RFC-822 headers; I've not looked into the specifics other than noting that there's an SMTPTransport class, which is where I'd look for anything related to the RFC-821 protocol. The bigger issue is that you will have to take control of converting messages into MIME format. With Notes mail, you may have a mix of Notes rich text and MIME. There's a convertToMIME method in Notes 8.5.1 and above, but it only converts the message body; you'll have to deal with any header content separately. (I'm not really up to speed on Notes 9, but AFAIK, even though the client can create an .EML file when you drag a message to the desktop, there's no API there to do that for you.)
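To illustrate the RCPT TO point: JavaMail's Transport.sendMessage() takes the envelope recipients as a separate argument, so the original To/Cc headers can stay untouched while the message is actually delivered only to your Gmail address. A sketch under those assumptions (host, credentials and addresses are placeholders, and the MimeMessage is assumed to already exist):

import java.util.Properties;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class EnvelopeForwarder {
    // Sends the message to the Gmail address only (RFC-821 envelope),
    // while the RFC-822 To/Cc/Bcc headers stay exactly as they were.
    public static void forward(MimeMessage original) throws Exception {
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.gmail.com");
        props.put("mail.smtp.port", "587");
        props.put("mail.smtp.starttls.enable", "true");
        props.put("mail.smtp.auth", "true");
        Session session = Session.getInstance(props);

        Transport transport = session.getTransport("smtp");
        transport.connect("smtp.gmail.com", "you@gmail.com", "app-password");
        try {
            transport.sendMessage(original,
                    new InternetAddress[] { new InternetAddress("you@gmail.com") });
        } finally {
            transport.close();
        }
    }
}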
Finally, I've found a ready solution: AWESYNC.MAIL.
It is commercial software, but it does exactly what I need.
Here is what I am trying to do:
Add a special button for attaching files to the Notes "New message" window. If files were attached using this button, they should be uploaded to the server when the email is sent, and a link to them added to the email.
My question: is it possible (and how) to capture the "send mail" event in a plugin for Lotus Notes?
I don't know how an Eclipse plugin would do this. Furthermore, since Notes can be used off-line -- when it would be impossible to upload files to a server -- it would be better to have code running on the Domino server intercept the mail messages and perform the upload.
Most products that hook mail operations on the server use the Lotus Notes C API's Extension Manager functions to hook the EM_BEFORE notification for the EM_NSFNOTEUPDATE event, check whether the NSFNoteUpdate operation occurred within the server's mail.box files, and then check whether the message requires special processing (i.e., in your case, by looking for a special NotesItem that your button code has inserted into the message). The usual coding method for this is to immediately change the status of the message to put it on hold, preventing the Domino router from attempting to send the message while your code is still working on it. Many products actually have two components - the EM hook DLL and a separate server task that receives a signal from the hook DLL, processes the message, and then releases it from on-hold status. This approach keeps your code from tying up router threads while processing large files.
(Note: Newer versions of the Domino server have the ability to use OSGI plugins written in Java instead of using the Notes C API for operations like this. I've not looked into the details of how this might work for operations that process mail messages. )
I sort of figured it out. There is a very nice extension point provided in 8.5 - "com.ibm.notes.mailsend.MailSendAttachmentsDialog" - that exists specifically for custom handling of attachments. You can see it in plugin.xml, in IBM\Lotus\Notes\framework\shared\eclipse\plugins\com.ibm.notes.mailsend_8.5.*.jar.
The only problem is that it handles just the attachments and does not have access to anything else. So if somebody has figured out how to get the subject line and the message text from there, please reply.
Update: got it.
NotesUIElement elem = (new NotesUIWorkspace()).getCurrentElement();
if (elem instanceof NotesUIDocument) {
    NotesUIDocument doc = ((NotesUIDocument) elem);
    String to = doc.getField("EnterSendTo").getText();
    String cc = doc.getField("EnterCopyTo").getText();
    String bcc = doc.getField("EnterBlindCopyTo").getText();
    String subject = doc.getField("Subject").getText();
    String body = doc.getField("Body").getText();
    ....
}
I'm currently developing some web services in Java (& JPA with MySQL connection) that are being triggered by an SAP System.
To simplify my problem I'm referring the two crucial entities as BlogEntry and Comment. A BlogEntry can have multiple Comments. A Comment always belongs to exactly one BlogEntry.
So I have three Services (which I can't and don't want to redefine, since they're defined by the WSDL I exported from SAP and used parallel to communicate with other Systems): CreateBlogEntry, CreateComment, CreateCommentForUpcomingBlogEntry
They are being properly triggered and there's absolutely no problem with CreateBlogEntry or CreateComment when they're called separately.
But: the service CreateCommentForUpcomingBlogEntry sends the Comment and a "foreign key" to identify the "upcoming" BlogEntry. Internally, it also calls CreateBlogEntry to create the actual BlogEntry. Due to their asynchronous nature, these two service calls race against each other.
So I have two options:
create a dummy BlogEntry and connect the Comment to it & update the BlogEntry, once CreateBlogEntry "arrives"
wait for CreateBlogEntry and connect the Comment afterwards to the new BlogEntry
Currently I'm trying the former, but once both services have fully executed, I end up with two BlogEntries. One of them only has the ID delivered by CreateCommentForUpcomingBlogEntry, but it is properly connected to the Comment (or rather, the other way round). The other BlogEntry has all the other information (such as postDate or body), but the Comment isn't connected to it.
Here's the code snippet of the service implementation CreateCommentForUpcomingBlogEntry:
@EJB
private BlogEntryFacade blogEntryFacade;
@EJB
private CommentFacade commentFacade;
...
List<BlogEntry> blogEntries = blogEntryFacade.findById(request.getComment().getBlogEntryId().getValue());
BlogEntry persistBlogEntry;
if (blogEntries.isEmpty()) {
    persistBlogEntry = new BlogEntry();
    persistBlogEntry.setId(request.getComment().getBlogEntryId().getValue());
    blogEntryFacade.create(persistBlogEntry);
} else {
    persistBlogEntry = blogEntries.get(0);
}
Comment persistComment = new Comment();
persistComment.setId(request.getComment().getID().getValue());
persistComment.setBody(request.getComment().getBody().getValue());
/*
set other properties
*/
persistComment.setBlogEntry(persistBlogEntry);
commentFacade.create(persistComment);
...
And here's the code snippet of the implementation CreateBlogEntry:
@EJB
private BlogEntryFacade blogEntryFacade;
...
List<BlogEntry> blogEntries = blogEntryFacade.findById(request.getBlogEntry().getId().getValue());
BlogEntry persistBlogEntry;
Boolean update = false;
if (blogEntries.isEmpty()) {
    persistBlogEntry = new BlogEntry();
} else {
    persistBlogEntry = blogEntries.get(0);
    update = true;
}
persistBlogEntry.setId(request.getBlogEntry().getId().getValue());
persistBlogEntry.setBody(request.getBlogEntry().getBody().getValue());
/*
set other properties
*/
if (update) {
    blogEntryFacade.edit(persistBlogEntry);
} else {
    blogEntryFacade.create(persistBlogEntry);
}
...
This amounts to quite a bit of fiddling, and it still fails to make things happen as intended.
Sadly, I haven't found a way to synchronize these simultaneous service calls. I could let CreateCommentForUpcomingBlogEntry sleep for a few seconds, but I don't think that's the proper way to do it.
Can I force each instance of my facades and their respective EntityManagers to reload their datasets? Can I put my requests in some sort of queue that is being emptied based on certain conditions?
So: what's the best practice to make it wait for the BlogEntry to exist?
Thanks in advance,
David
Info:
GlassFish Server 3.1.2
EclipseLink, version: Eclipse Persistence Services - 2.3.2.v20111125-r10461
If you are sure you will always get a CreateBlogEntry call eventually, queue the CreateCommentForUpcomingBlogEntry calls, then dequeue and process them once you receive the corresponding CreateBlogEntry call.
Since you are on an application server, you can probably use JMS queues that auto-flush to storage, or the DB cache engine (Ehcache?), in case you receive a lot of calls or want to provide a recovery mechanism across restarts.
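A minimal sketch of the queueing idea with plain JMS (the resource names jms/PendingCommentFactory and jms/PendingCommentQueue and the payload type are hypothetical; a real solution would also need a consumer, e.g. a message-driven bean or a check inside CreateBlogEntry, that drains the queue once the BlogEntry exists):

import java.io.Serializable;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Queue;
import javax.jms.Session;

@Stateless
public class PendingCommentQueue {

    @Resource(mappedName = "jms/PendingCommentFactory")
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "jms/PendingCommentQueue")
    private Queue pendingComments;

    // CreateCommentForUpcomingBlogEntry parks the comment here when the
    // BlogEntry does not exist yet, instead of persisting a dummy BlogEntry.
    public void enqueue(Serializable commentData) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(pendingComments);
            ObjectMessage message = session.createObjectMessage(commentData);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}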
I'm working on the OAuth 1 Sparklr and Tonr sample apps and I'm trying to create a two-legged call. Hypothetically, the only thing you're supposed to do is change the Consumer Details Service from (I'm omitting the iGoogle consumer info to simplify):
<oauth:consumer-details-service id="consumerDetails">
<oauth:consumer name="Tonr.com" key="tonr-consumer-key" secret="SHHHHH!!!!!!!!!!"
resourceName="Your Photos" resourceDescription="Your photos that you have uploaded to sparklr.com."/>
</oauth:consumer-details-service>
to:
<oauth:consumer-details-service id="consumerDetails">
<oauth:consumer name="Tonr.com" key="tonr-consumer-key" secret="SHHHHH!!!!!!!!!!"
resourceName="Your Photos" resourceDescription="Your photos that you have uploaded to sparklr.com."
requiredToObtainAuthenticatedToken="false" authorities="ROLE_CONSUMER"/>
</oauth:consumer-details-service>
That is, adding requiredToObtainAuthenticatedToken and authorities, which causes the consumer to be trusted and therefore the whole validation process to be skipped.
However, I still get the login and confirmation screen from the Sparklr app. The current state of the official documentation is pretty precarious, considering that the project is being absorbed by Spring, so it is filled with broken links and ambiguous instructions. As far as I understand, no changes are required on the client code, so I'm basically running out of ideas. I have even found people claiming that Spring OAuth clients don't support 2-legged access (which I find hard to believe).
The only way I have found to do it was by creating my own ConsumerSupport:
private OAuthConsumerSupport createConsumerSupport() {
    CoreOAuthConsumerSupport consumerSupport = new CoreOAuthConsumerSupport();
    consumerSupport.setStreamHandlerFactory(new DefaultOAuthURLStreamHandlerFactory());
    consumerSupport.setProtectedResourceDetailsService(new ProtectedResourceDetailsService() {
        public ProtectedResourceDetails loadProtectedResourceDetailsById(
                String id) throws IllegalArgumentException {
            SignatureSecret secret = new SharedConsumerSecret(
                    CONSUMER_SECRET);
            BaseProtectedResourceDetails result = new BaseProtectedResourceDetails();
            result.setConsumerKey(CONSUMER_KEY);
            result.setSharedSecret(secret);
            result.setSignatureMethod(SIGNATURE_METHOD);
            result.setUse10a(true);
            result.setRequestTokenURL(SERVER_URL_OAUTH_REQUEST);
            result.setAccessTokenURL(SERVER_URL_OAUTH_ACCESS);
            return result;
        }
    });
    return consumerSupport;
}
and then reading the protected resource:
consumerSupport.readProtectedResource(url, accessToken, "GET");
Has someone actually managed to make this work without boiler-plate code?