I'm trying to find a way to use JTAPI to get missed and completed calls from the phone. I know that I could write this code myself and capture them in a CallObserver, but I specifically want the data to come from the PBX/phone.
Is this possible?
Cisco JTAPI does not provide access to historical call records, nor is there a programmatic way to query the phone device directly. For 'real-time' call history, you would need to implement full-time call observation and record the call metadata into your own database.
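If you do end up implementing that observation approach, a minimal sketch using the standard javax.telephony (JTAPI) interfaces might look like the following. The provider string and directory number are placeholders, and real code would wait for the provider to go in service and handle reconnects:

import javax.telephony.*;
import javax.telephony.events.*;

// Minimal sketch: observe an address and react to call events so the
// metadata can be written to your own call-history store.
public class CallHistoryObserver implements CallObserver {

    public void callChangedEvent(CallEv[] events) {
        for (CallEv ev : events) {
            if (ev instanceof ConnDisconnectedEv) {
                // A connection dropped; persist the call details (caller,
                // called party, timestamps, answered or not) to your database.
                System.out.println("Connection dropped on call " + ev.getCall());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        JtapiPeer peer = JtapiPeerFactory.getJtapiPeer(null);
        // Placeholder Cisco-style provider string: "<cucm>;login=<user>;passwd=<password>"
        Provider provider = peer.getProvider("cucm-address;login=jtapiuser;passwd=secret");
        // In practice, wait for ProvInServiceEv before using the provider.
        Address address = provider.getAddress("1001"); // directory number to observe (placeholder)
        address.addCallObserver(new CallHistoryObserver());
        Thread.sleep(Long.MAX_VALUE); // keep the process alive so events keep arriving
    }
}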
Historical call records are available via CUCM's 'Call Detail Records' function: https://developer.cisco.com/site/sxml/discover/overview/cdr/
These CDRs are sent from supporting phones to CUCM at the end of every call, and are collected/stored on the CUCM Publisher every 1 minute (by default) as CSV formatted flat files.
There are two main mechanisms for accessing CDRs:
FTP/SSH-FTP delivery: up to three destinations can be configured in the CUCM Serviceability admin pages, and CDR files will be delivered to them at the configured interval.
CDRonDemand SOAP API: available CDR filenames for a time period (up to one hour) can be listed, and individual files requested for FTP/SSH-FTP delivery to a specified location (i.e. the application host). The service/WSDL is available on the CUCM Publisher at: https://<cucm-publisher>:8443/realtimeservice2/services/CDRonDemandService?wsdl
Example of get_file_list request:
<!--CDRonDemand API - get_file_list - Request (datetime format is in UTC time)-->
<soapenv:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:soap="http://schemas.cisco.com/ast/soap/">
   <soapenv:Header/>
   <soapenv:Body>
      <soap:get_file_list>
         <soap:in0>201409121600</soap:in0>
         <soap:in1>201409121700</soap:in1>
         <soap:in2>true</soap:in2>
      </soap:get_file_list>
   </soapenv:Body>
</soapenv:Envelope>
Example of get_file request:
<!--CDRonDemand API - get_file - Request-->
<soapenv:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:urn="urn:CDRonDemand">
   <soapenv:Header/>
   <soapenv:Body>
      <urn:get_file soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
         <in0>sftp-server.server.com</in0>
         <in1>user</in1>
         <in2>password</in2>
         <in3>/tmp</in3>
         <in4>cdr_StandAloneCluster_01_201409121628_189</in4>
         <in5>true</in5>
      </urn:get_file>
   </soapenv:Body>
</soapenv:Envelope>
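As a rough illustration (not official Cisco sample code), the get_file_list envelope above can be POSTed to the service over HTTPS with basic authentication. The hostname and credentials are placeholders, and the CUCM certificate is assumed to already be trusted by the JVM:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Scanner;

// Rough sketch: POST the get_file_list SOAP envelope shown above to the
// CDRonDemand service. Host, credentials and the envelope string are placeholders.
public class CdrOnDemandClient {
    public static void main(String[] args) throws Exception {
        String envelope = "..."; // the get_file_list XML from above
        URL url = new URL("https://cucm-publisher:8443/realtimeservice2/services/CDRonDemandService");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        // Some deployments expect a SOAPAction header; check the WSDL for the exact value.
        conn.setRequestProperty("SOAPAction", "");
        String auth = Base64.getEncoder()
                .encodeToString("appuser:password".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(envelope.getBytes(StandardCharsets.UTF_8));
        }

        System.out.println("HTTP " + conn.getResponseCode());
        try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8")) {
            while (s.hasNextLine()) {
                System.out.println(s.nextLine()); // SOAP response with the matching CDR file names
            }
        }
    }
}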
More details on application access to CDRs can be found here: https://developer.cisco.com/site/sxml/
I have a container (a music show) with 19 tracks in it and 1 item for the recommendations section for this show, so in total there are 20 items. But if I add this show to a playlist, only the tracks are processed and the playlist contains 19 tracks. The Sonos controller works fine with this, but the Test Suite fails when checking the total number of items after adding the show to a playlist, with the message:
FAIL The seed playlist and newly created playlist should have the same
quantity of items inside. (expected 19 != actual 20)
As a result, the Test Suite fails with 1 error. Is it OK to send a test suite report with such a failure, or will you deny a new service with such a failure?
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:ns1="http://www.sonos.com/Services/1.1">
<SOAP-ENV:Body>
<ns1:getMetadataResponse>
<ns1:getMetadataResult>
<ns1:index>0</ns1:index>
<ns1:count>20</ns1:count>
<ns1:total>20</ns1:total>
<ns1:mediaCollection>
<ns1:id>CONTAINER:RECOMMENDATIONS:594</ns1:id>
<ns1:itemType>collection</ns1:itemType>
<ns1:displayType>grid</ns1:displayType>
<ns1:title>Recommendations</ns1:title>
<ns1:canPlay>false</ns1:canPlay>
<ns1:canAddToFavorites>false</ns1:canAddToFavorites>
</ns1:mediaCollection>
<ns1:mediaMetadata>
<ns1:id>TRACK:11422:594</ns1:id>
<ns1:itemType>track</ns1:itemType>
<ns1:displayType>list</ns1:displayType>
<ns1:title>He Ain't Give You None</ns1:title>
<ns1:summary>The Radiators</ns1:summary>
<ns1:mimeType>audio/mp3</ns1:mimeType>
<ns1:trackMetadata>
<ns1:artist>The Radiators</ns1:artist>
<ns1:duration>531</ns1:duration>
<ns1:rating>0</ns1:rating>
<ns1:canPlay>true</ns1:canPlay>
<ns1:canSkip>true</ns1:canSkip>
</ns1:trackMetadata>
</ns1:mediaMetadata>
<ns1:mediaMetadata>
<ns1:id>TRACK:58012:594</ns1:id>
<ns1:itemType>track</ns1:itemType>
<ns1:displayType>list</ns1:displayType>
<ns1:title>Alimony</ns1:title>
<ns1:summary>The Radiators</ns1:summary>
<ns1:mimeType>audio/mp3</ns1:mimeType>
<ns1:trackMetadata>
<ns1:artist>The Radiators</ns1:artist>
<ns1:duration>632</ns1:duration>
<ns1:rating>0</ns1:rating>
<ns1:canPlay>true</ns1:canPlay>
<ns1:canSkip>true</ns1:canSkip>
</ns1:trackMetadata>
</ns1:mediaMetadata>
[MORE ITEMS HERE]
</ns1:getMetadataResult>
</ns1:getMetadataResponse>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
I am taking some liberties in answering your question since it seems to be directed at Sonos, so apologies in advance. It is very unlikely that your service will be rejected for a bug that happens to be inside the Sonos Test Suite.
So my two cents, go for it and submit your service.
I believe that the issue here is that a playlist is defined as a container filled specifically with track elements (http://musicpartners.sonos.com/node/286), and your container also includes a collection. This is why the test fails to generate the correct count.
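To make that concrete, here is a purely illustrative sketch (none of this is SMAPI code; the item-type strings just mirror the response above): only entries whose itemType is track end up in the new playlist, so the seed container count and the playlist count diverge as soon as a collection is mixed in:

import java.util.Arrays;
import java.util.List;

// Illustrative only: item types taken from the getMetadata response above
// (1 "collection" for Recommendations plus the "track" entries).
public class PlaylistCountExample {
    public static void main(String[] args) {
        List<String> itemTypes = Arrays.asList(
                "collection",       // CONTAINER:RECOMMENDATIONS:594
                "track", "track");  // ... and so on for the remaining tracks

        long seedContainerCount = itemTypes.size();                               // 20 in your case
        long playlistCount = itemTypes.stream().filter("track"::equals).count();  // 19 in your case

        System.out.println("seed container items = " + seedContainerCount);
        System.out.println("new playlist items   = " + playlistCount);
    }
}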
I am an Apigee newbie.
I am trying to understand the Spike Arrest policy.
I am looking at this documentation:
http://apigee.com/docs/api-services/content/shield-apis-using-spikearrest
http://apigee.com/docs/api-services/content/policy-attachment-and-enforcement
The one thing I cannot work out for certain is whether, when the Spike Arrest policy is applied to an API proxy, the rate limit is applied per key/client developer application, or shared between all keys/client developer applications.
For example if we have the following config:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<SpikeArrest async="false" continueOnError="false" enabled="true" name="spikearrest-1">
   <DisplayName>SpikeArrest-1</DisplayName>
   <FaultRules/>
   <Properties/>
   <Identifier ref="request.header.some-header-name"/>
   <MessageWeight ref="request.header.weight"/>
   <Rate>50ps</Rate>
</SpikeArrest>
And Client Dev Apps:
1. DevApp1
2. DevApp2
Is the 50ps rate limit shared between DevApp1 and DevApp2, or do DevApp1 and DevApp2 each get a 50ps rate limit?
Thanks,
You can use any of the predefined variables:
http://apigee.com/docs/api-services/api/variables-reference
The variable that is probably the most commonly used for Spike Arrest is client.ip.
Edge will make all elements of a request message available. If your clients are adding a client_id (aka API key) to a request as a query parameter, for example api.call.com?client_id=u34r8ur, then you would set the variable in your Spike Arrest Identifier to be:
<Identifier ref="request.queryparam.client_id"/>
Or if it is in an HTTP header:
<Identifier ref="request.header.client_id"/>
Hope that helps!
It's per app, as identified by your Identifier.
I'm playing around with the Highrise API, and they understood the meaning of REST; it's pretty cool and at some points gracefully forgiving, but
does anybody have any idea why the XML I PUT is not accepted?
Here is some relevant logging:
[2014-02-23 00:00:04] app.INFO: Updating:Person:Highrise-API = people/11834527375.xml [] []
[2014-02-23 00:00:04] app.INFO: request body is :
<?xml version="1.0" encoding="UTF-8"?>
<person>
<first-name><![CDATA[Johnny]]></first-name>
<last-name><![CDATA[B. Good]]></last-name>
<visible-to><![CDATA[Everyone]]></visible-to>
<subject_datas type="array">
<subject_data>
<subject_field_id type="integer"><![CDATA[43212]]></subject_field_id>
<value><![CDATA[dsa328394OOKD323H]]></value>
</subject_data>
<subject_data>
<subject_field_id type="integer"><![CDATA[470259]]></subject_field_id>
<value><![CDATA[provider://w184071823/fmdks/2032]]></value>
</subject_data>
<subject_data>
<subject_field_id type="integer"><![CDATA[469130]]></subject_field_id>
<value><![CDATA[CORE]]></value>
</subject_data>
<subject_data>
<subject_field_id type="integer"><![CDATA[469132]]></subject_field_id>
<value><![CDATA[Way too cool]]></value>
</subject_data>
</subject_datas>
<contact-data>
<phone-numbers>
<phone-number type="array">
<number><![CDATA[081 6418273]]></number>
<location><![CDATA[Work]]></location>
</phone-number>
</phone-numbers>
<addresses type="array">
<address>
<city><![CDATA[New York City]]></city>
<country><![CDATA[US]]></country>
<state><![CDATA[New York]]></state>
<street><![CDATA[Siplingerstreet 11]]></street>
<zip><![CDATA[87527]]></zip>
<location><![CDATA[Work]]></location>
</address>
</addresses>
</contact-data>
</person>
[] []
[2014-02-23 00:00:04] app.INFO: request set [] []
[2014-02-23 00:00:04] app.ERROR: Guzzle/3.8.1 curl/7.28.1 PHP/5.4.10 - [2014-02-22T23:00:04+00:00] "PUT /people/11834527375.xml HTTP/1.1" 422 103 [] []
[2014-02-23 00:00:04] app.INFO: Caught client-error-exception in HighriseService updatePerson(): exception 'Guzzle\Http\Exception\ClientErrorResponseException' with message 'Client error response
[status code] 422
[reason phrase] Unprocessable Entity
I don't see the error :/
I'm very sure the subject_field_ids are correct and that those custom fields are set.
POSTing (i.e. saving) that XML works; I saw from the response that the fields were set.
The only thing I can guess is that I'm trying to PUT a version where nothing has changed.
Is that the problem?
My code only checks whether that person exists at all, and if so updates it instead of creating a new one.
You should get back some XML in the body of the response. It should look like this:
<?xml version="1.0" encoding="UTF-8"?>
<errors>
<error>Phone number '555-555-5555' has already been taken</error>
</errors>
If you include the id for the existing phone number in your PUT request, then we know that you want to update the existing entry, rather than adding a new one: https://github.com/basecamp/highrise-api/blob/master/sections/people.md#update-person
Contact data and Subject data that include an id will be updated, data that doesn’t will be assumed to be new and created from scratch. To remove a piece of data, prefix its id with a minus sign (e.g. -1).
My objective is to list a user's transactions (both sales and purchases).
I am using GetOrders and specifying a time range, and the call executes successfully but returns 0 transactions, whereas the user I am querying for has multiple purchases on their account.
Let me get a bit more specific. Here is the code that I am using:
<GetOrdersRequest xmlns="urn:ebay:apis:eBLBaseComponents">
   <RequesterCredentials>
      <eBayAuthToken>......</eBayAuthToken>
   </RequesterCredentials>
   <CreateTimeFrom>2009-04-05T05:02:03</CreateTimeFrom>
   <CreateTimeTo>2011-12-23T00:02:44</CreateTimeTo>
</GetOrdersRequest>
And even using the API test tool (hence the problem is not language-specific), it delivers 0 results:
<GetOrdersResponse xmlns="urn:ebay:apis:eBLBaseComponents">
   <Timestamp>2011-12-23T00:05:32.753Z</Timestamp>
   <Ack>Success</Ack>
   <Version>753</Version>
   <Build>E753_CORE_BUNDLED_14214525_R1</Build>
   <PaginationResult>
      <TotalNumberOfPages>0</TotalNumberOfPages>
      <TotalNumberOfEntries>0</TotalNumberOfEntries>
   </PaginationResult>
   <HasMoreOrders>false</HasMoreOrders>
   <OrderArray />
   <OrdersPerPage>100</OrdersPerPage>
   <PageNumber>1</PageNumber>
   <ReturnedOrderCountActual>0</ReturnedOrderCountActual>
</GetOrdersResponse>
The user I am querying for has 2 recent purchases dated at:
12/08/11
11/18/11
What am I missing here? I am supplying the time range and the call executes properly, yet it finds 0 results. I'd very much appreciate your help.
Try including the OrderRole (i.e. Buyer or Seller) and an OrderStatus of either Active or Completed. Something like the following will return completed orders for items purchased by the caller.
<GetOrdersRequest xmlns="urn:ebay:apis:eBLBaseComponents">
   <DetailLevel>ReturnAll</DetailLevel>
   <MessageID>cff8bc1c-0475-4d64-a8a5-02757aafd937</MessageID>
   <Version>747</Version>
   <CreateTimeFrom>2012-01-07T14:05:24.6353866Z</CreateTimeFrom>
   <CreateTimeTo>2012-02-07T14:05:24.6353866Z</CreateTimeTo>
   <OrderRole>Buyer</OrderRole>
   <OrderStatus>Completed</OrderStatus>
</GetOrdersRequest>
For more details, have a look here.
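If it helps, here is a rough (non-official) sketch of posting that GetOrdersRequest directly to the Trading API endpoint over HTTP; the keyset IDs, token placeholder and compatibility level are assumptions you would replace with your own values:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

// Rough sketch of posting the GetOrders XML above to the eBay Trading API.
// Keyset values, the auth token inside the XML and the version are placeholders.
public class GetOrdersExample {
    public static void main(String[] args) throws Exception {
        String requestXml = "..."; // the GetOrdersRequest XML from above

        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://api.ebay.com/ws/api.dll").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml");
        conn.setRequestProperty("X-EBAY-API-CALL-NAME", "GetOrders");
        conn.setRequestProperty("X-EBAY-API-COMPATIBILITY-LEVEL", "747");
        conn.setRequestProperty("X-EBAY-API-SITEID", "0");
        // Dev/App/Cert IDs from your keyset (placeholders):
        conn.setRequestProperty("X-EBAY-API-DEV-NAME", "your-dev-id");
        conn.setRequestProperty("X-EBAY-API-APP-NAME", "your-app-id");
        conn.setRequestProperty("X-EBAY-API-CERT-NAME", "your-cert-id");

        try (OutputStream out = conn.getOutputStream()) {
            out.write(requestXml.getBytes(StandardCharsets.UTF_8));
        }
        try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8")) {
            while (s.hasNextLine()) {
                System.out.println(s.nextLine()); // GetOrdersResponse XML
            }
        }
    }
}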
The maximum date range that may be specified is 30 days.
I am currently using Glassfish v2.1 and I have set up a queue to send and receive messages from, with Session beans and MDBs respectively. However, I have noticed that I can send only a maximum of 1000 messages to the queue. Is there any reason why I cannot send more than 1000 messages to the queue? I do have a "developer" profile set up for the Glassfish domain. Could that be the reason? Or is there some resource configuration setting that I need to modify?
I have set up the sun-resources.xml configuration properties as follows:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE resources PUBLIC "-//Sun Microsystems, Inc.//DTD Application Server 9.0 Resource Definitions //EN" "http://www.sun.com/software/appserver/dtds/sun-resources_1_3.dtd">
<resources>
<admin-object-resource
enabled="true"
jndi-name="jms/UpdateQueue"
object-type="user"
res-adapter="jmsra"
res-type="javax.jms.Queue">
<description/>
<property name="Name" value="UpdatePhysicalQueue"/>
</admin-object-resource>
<connector-resource
enabled="true" jndi-name="jms/UpdateQueueFactory"
object-type="user"
pool-name="jms/UpdateQueueFactoryPool">
<description/>
</connector-resource>
<connector-connection-pool
associate-with-thread="false"
connection-creation-retry-attempts="0"
connection-creation-retry-interval-in-seconds="10"
connection-definition-name="javax.jms.QueueConnectionFactory"
connection-leak-reclaim="false"
connection-leak-timeout-in-seconds="0"
fail-all-connections="false"
idle-timeout-in-seconds="300"
is-connection-validation-required="false"
lazy-connection-association="false"
lazy-connection-enlistment="false"
match-connections="true"
max-connection-usage-count="0"
max-pool-size="32"
max-wait-time-in-millis="60000"
name="jms/UpdateFactoryPool"
pool-resize-quantity="2"
resource-adapter-name="jmsra"
steady-pool-size="8"
validate-atmost-once-period-in-seconds="0"/>
</resources>
Hmm .. further investigation revealed the following in the imq logs:
[17/Nov/2009:10:27:57 CST] ERROR sendMessage: Sending message failed. Connection ID: 427038234214377984:
com.sun.messaging.jmq.jmsserver.util.BrokerException: transaction failed: [B4303]: The maximum number of messages [1,000] that the producer can process in a single transaction (TID=427038234364096768) has been exceeded. Please either limit the # of messages per transaction or increase the imq.transaction.producer.maxNumMsgs property.
So what would I do if I needed to send more than 5000 messages at a time?
What I am trying to do is read all the records in a table and update a particular field of each record based on the corresponding value of that record in a legacy table to which I have only read-only access. This table has more than 10k records in it. As of now, I am sequentially going through each record in a for loop, getting the corresponding record from the legacy table, comparing the field values, updating the record if necessary and adding corresponding new records in other tables.
However, I was hoping to improve performance by processing all the records asynchronously. To do that, I was thinking of sending each record's info as a separate message, hence requiring so many messages.
To configure OpenMQ and set arbitrary broker properties, have a look at this blog post.
But actually, I wouldn't advise increasing the imq.transaction.producer.maxNumMsgs property, at least not above the value recommended in the documentation:
The maximum number of messages that a producer can process in a single transaction. It is recommended that the value be less than 5000 to prevent the exhausting of resources.
If you need to send more messages, consider doing it in several transactions.
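A minimal sketch of the "several transactions" approach, assuming a standalone client and the JNDI names from the sun-resources.xml above, batching at the default 1000-message limit:

import javax.jms.*;
import javax.naming.InitialContext;

// Sketch: send a large number of messages by committing the transacted
// session in batches, so no single transaction exceeds the broker's
// per-transaction limit (imq.transaction.producer.maxNumMsgs, 1000 by default).
// JNDI names match the sun-resources.xml above; everything else is illustrative.
public class BatchedSender {
    private static final int BATCH_SIZE = 1000;

    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("jms/UpdateQueueFactory");
        Queue queue = (Queue) ctx.lookup("jms/UpdateQueue");

        QueueConnection connection = cf.createQueueConnection();
        try {
            // true = transacted; the acknowledge mode is ignored for transacted sessions
            QueueSession session = connection.createQueueSession(true, Session.SESSION_TRANSACTED);
            QueueSender sender = session.createSender(queue);

            int totalMessages = 10000; // e.g. one message per legacy record
            for (int i = 1; i <= totalMessages; i++) {
                sender.send(session.createTextMessage("record-" + i));
                if (i % BATCH_SIZE == 0) {
                    session.commit(); // end this transaction and start a new one
                }
            }
            session.commit(); // commit any remainder
        } finally {
            connection.close();
        }
    }
}

Note that inside a Session Bean with container-managed transactions you would batch differently (the container owns the commit), but the idea is the same: keep each transaction under the broker's per-transaction message limit.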