What is the difference between JTAPI and CCXML? Why do we need JTAPI if CCXML is already there?

These are two different mechanisms used in different scenarios.
JTAPI is a Java interface designed for Computer Telephony Integration (CTI). The PABX vendor still needs to provide an implementation of this interface as it applies to its environment. With JTAPI, the Java application would normally connect to the CTI component of the PABX.
In addition to normal telephony functions like making and transferring calls, JTAPI can also be used to listen for events in the PABX related to certain calls. For example, recording systems need to listen for incoming calls to specific groups of telephones, record the audio streams, stop recording when the call is dropped, and follow the call as it is transferred.
CCXML is more likely to be used together with VoiceXML on IVR or self-service systems, for example to allow the self-service platform to make outbound calls.
Some PABX vendors might use JTAPI to implement outbound dialer systems, some would use CCXML. This really depends on the internal architecture of the PABX system, as the components implementing JTAPI and CCXML respectively are intended to fulfill two different requirements, and not all environments/customers will have access to one or the other.
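To make the event-listening scenario concrete, here is a minimal sketch against the standard javax.telephony API. The provider string and terminal name are vendor-specific placeholders, and a real deployment would need the PABX vendor's JTAPI implementation on the classpath.

    import javax.telephony.CallObserver;
    import javax.telephony.JtapiPeer;
    import javax.telephony.JtapiPeerFactory;
    import javax.telephony.Provider;
    import javax.telephony.Terminal;
    import javax.telephony.events.CallEv;
    import javax.telephony.events.ConnDisconnectedEv;
    import javax.telephony.events.TermConnRingingEv;

    public class CallMonitor {
        public static void main(String[] args) throws Exception {
            // null selects the default (vendor-supplied) peer implementation.
            JtapiPeer peer = JtapiPeerFactory.getJtapiPeer(null);

            // The provider string format is defined by the PABX vendor.
            Provider provider = peer.getProvider("VendorProvider;login=user;passwd=secret");

            // Watch one terminal, e.g. an agent phone in a recording group.
            Terminal terminal = provider.getTerminal("4711");
            terminal.addCallObserver(new CallObserver() {
                @Override
                public void callChangedEvent(CallEv[] events) {
                    for (CallEv ev : events) {
                        if (ev instanceof TermConnRingingEv) {
                            // Incoming call: a recording system would start capturing here.
                        } else if (ev instanceof ConnDisconnectedEv) {
                            // Call dropped: stop recording here.
                        }
                    }
                }
            });
            // A real application would keep the provider open while listening.
        }
    }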

Apache Ignite Client

I wanted to understand the pros/cons of using a client node within a cluster vs. an external thin client. Of course the thin client will be less chatty than a client node and hence involve fewer network interactions. Changes in the cluster topology (nodes being added/removed) would not affect the thin client, while they directly affect the client node.
All this makes me wonder: will a thin client always be the better option, or are there other cases where having a client node makes much more sense?
If Apache/GridGain has any documentation/links around this, that would do too.
TIA
I think there won't be any thick client in future major releases; it will be superseded by the thin one because of its single protocol and lightweight design.
At the moment, a thick client still has some feature advantages:
faster and better discovery and communication (topology changes)
peer class loading
near caching
advanced compute capabilities
events listening
full data structures support/distributed locking
etc
The feature parity list is constantly shrinking, but it's also worth mentioning that some features might be available for a particular platform only; for example, in the .NET thin client but not in the Java one.
You have mentioned the cons already: being a cluster-wide citizen implies the same obligations a server node has, i.e. a good network channel and participation in all global events.
That means in some cases a thick client might not deploy and work as expected. Usually it's about NAT, private networks, firewalls, and so on.
In general, I'd say if your task can be implemented with a thin client, use it. If a required feature/API is not yet available, consider using a thick one. For example, if you need something like a health check for your application running every minute, you definitely want a thin client for that task, so as not to trigger PME (partition map exchange).
Thick clients are aware of all nodes and of the data distribution, and are more efficient in most cases; use them if your deployment allows for it. Plus, thick clients support all of the GridGain APIs.
Thin clients are lightweight (similar to a JDBC driver), connect to the cluster via a binary protocol with a well-defined message format, support a limited set of APIs, and allow for support of multiple languages: Java, .NET, C++, Python, Node.JS, and PHP are supported out of the box.
See the docs on thin/thick client differences.
Also take a look at capabilities of thin clients.
This section explains how to choose a client.
For example, a thick client serves as a reducer for queries; you thereby avoid an extra hop (from server to thin client) and lessen the cluster load when executing a query on a partitioned cache.
A thick client can also directly participate in compute jobs (usually as a reducer), whereas a thin client just submits a job to the cluster.
A thick client could also receive event notifications.
Thick clients could reconnect more reliably (because they know the current cluster state) if the cluster topology changes.
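As a rough illustration of the two modes discussed above (Ignite 2.x Java API; the address, port, and cache name are placeholders, and a reachable cluster is assumed):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.client.ClientCache;
    import org.apache.ignite.client.IgniteClient;
    import org.apache.ignite.configuration.ClientConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class ClientModes {
        public static void main(String[] args) {
            // Thick client: joins the cluster topology as a non-data-holding
            // node, takes part in discovery, and can run compute jobs and
            // listen for events.
            IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);
            try (Ignite thick = Ignition.start(cfg)) {
                IgniteCache<Integer, String> cache = thick.getOrCreateCache("myCache");
                cache.put(1, "value");
            }

            // Thin client: a lightweight socket connection to the binary
            // protocol port (10800 by default); it never joins the topology.
            ClientConfiguration thinCfg = new ClientConfiguration()
                .setAddresses("127.0.0.1:10800");
            try (IgniteClient thin = Ignition.startClient(thinCfg)) {
                ClientCache<Integer, String> cache = thin.getOrCreateCache("myCache");
                System.out.println(cache.get(1));
            }
        }
    }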

Micro-service architecture in .NET Core: pattern or library for services to call each other

I am implementing a micro-service architecture for the first time.
Some of my services (.NET Core Web APIs) need to communicate with each other through HTTP requests. For that purpose, I am injecting a wrapper around HttpClient.
But I suspect that I am reinventing the wheel. Among micro-service practitioners, is there a pattern or even a third-party library to solve this problem?
In a micro-service architecture, the most important thing is a clear separation of concerns and application boundaries. Imagine a simple setup with Product and Price micro-services.
An important concept is that each service is the master of its data and owns its own database. In this example:
a client of the 'Product' service will make an HTTP call to the Product API.
the product API will make a call to the Price API to get prices for the products
the product API therefore depends on the Price API to create a response
These are the synchronous parts of the process, generally achieved through HTTP calls across boundaries. You'll also have asynchronous parts in your solution; in this example:
the Price API publishes an event to a bus whenever a price is changed
the product API publishes an event whenever a product is created
There may be one or more subscribers to these events, which will respond and probably call an API to retrieve the changed data; a rough sketch of the publishing side follows below.
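For illustration only, the IEventBus abstraction and PriceChanged event below are invented placeholders rather than any particular library; a real system would plug in a concrete message-broker client:

    using System.Threading.Tasks;

    // Placeholder contract for whatever bus technology is chosen.
    public interface IEventBus
    {
        Task PublishAsync<TEvent>(TEvent @event);
    }

    // The event contract published by the Price API.
    public record PriceChanged(string ProductId, decimal NewPrice);

    public class PriceService
    {
        private readonly IEventBus _bus;

        public PriceService(IEventBus bus) => _bus = bus;

        public async Task UpdatePriceAsync(string productId, decimal newPrice)
        {
            // ... persist the change in the Price service's own database ...
            await _bus.PublishAsync(new PriceChanged(productId, newPrice));
        }
    }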
The critical parts of this are clearly defining your API and message contracts, understanding if things will be async or sync, having the right level of telemetry across the entire architecture to track and understand distributed system behaviour, and keeping everything as independently buildable/testable/deployable components.
First and foremost, if you're not using containers, start now, along with orchestration (both natively supported in Visual Studio, assuming you have Docker etc. actually installed). Among the many benefits, you can reference your services via hostname, without having to worry about ports and different locations for different environments.
As far as actual communication goes, there's not really a magic solution here. HttpClient is what you use, of course, and generally, yes, you want to have a wrapper around that to abstract away the low-level HTTP communication stuff, so the rest of your code can simply call simple methods on that wrapper.
If you aren't using IHttpClientFactory, start. If you already have a wrapper class, you're halfway there, and with that, not only do you get efficient management of HttpMessageHandlers so you don't exhaust your server's connection pool, but you can also use the Polly integration to handle transient HTTP errors and even do retry policies, circuit breakers, etc. for your microservice connections.
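To make that concrete, here is a minimal, hedged sketch of a typed client registered through IHttpClientFactory with a Polly retry policy (requires the Microsoft.Extensions.Http.Polly package; the IPriceClient name, base address, and retry parameters are illustrative placeholders):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Extensions.DependencyInjection;

    // The typed wrapper: the rest of the code depends on this small
    // interface instead of on raw HttpClient calls.
    public interface IPriceClient
    {
        Task<string> GetPricesAsync(string productId);
    }

    public class PriceClient : IPriceClient
    {
        private readonly HttpClient _http;

        public PriceClient(HttpClient http) => _http = http;

        public Task<string> GetPricesAsync(string productId) =>
            _http.GetStringAsync($"/prices/{productId}");
    }

    // In ConfigureServices: the factory manages HttpMessageHandler
    // lifetimes (avoiding socket exhaustion), and Polly retries transient
    // failures (5xx, 408, HttpRequestException) with exponential backoff.
    services.AddHttpClient<IPriceClient, PriceClient>(client =>
            client.BaseAddress = new Uri("http://price-api"))
        .AddTransientHttpErrorPolicy(policy =>
            policy.WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))));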
Finally, there is the Refit library, which can make things a tad more straightforward. I find it to have more use with huge third-party APIs like Facebook, Google, etc., though. Since microservices should by design be simple, you're probably not saving much code over just having your own wrapper class. Regardless, the way it works is that you define an interface that represents the API, and then Refit uses that to actually make appropriate requests. It's kind of like a wrapper class for free, but you still need to create the interface.

What is the use of multiple control endpoints (non-EP0)?

I learned on the OSDev wiki that Endpoint 0 is the default control pipe, allowing for bi-directional control transfers. This is used for device configuration, e.g. to retrieve device descriptors. The USB 2.0 spec explains this more thoroughly in section 5.5, Control Transfers.
There is also a limited number of endpoints available (2 for low-speed, 15 for full- and high-speed devices). Somewhere in the USB 2.0 spec, I have read that there must be at least one control pipe. This implies that there may be multiple control endpoints, but what is the use of that? Do you know any particular USB device or class that has an EP configured as a control pipe?
Later, I found this in the spec, section 10.1.2 Control Mechanisms:
A particular USB device may allow the use of additional message pipes
to transfer device-specific control information. These pipes use the
same communications protocol as the default pipe, but the information
transferred is specific to the USB device and is not standardized by
the USB Specification.
If I understand it correctly, this means that non-EP0 endpoints cannot be used to configure the device (say, with a standard request such as GET_DESCRIPTOR). But the setup/data/status stages still seem to be available ("[..] use the same communications protocol [..]"). Is this correct? Or is the use of standard/class requests forbidden for non-EP0?
Background: while working on an emulated USB device in QEMU, the need for a USB monitor for debugging purposes appeared. During inspection of the QEMU core USB code, I noticed that it only processed control commands for EP0. Other endpoints would be treated as data. There are some virtual devices (host-libusb) that always reject control transfers for those other endpoints. Hence the question whether this is the correct behavior or not (and if valid, whether there exist devices that really implement this).
As far as I can tell, there is no use for a non-EP0 control endpoint. I have developed several products that use custom control transfers on endpoint 0 as the main way to send device-specific requests and I have not encountered any fundamental problems with doing that.
If you did make a non-EP0 control endpoint I think your understanding is correct; you wouldn't be able to use it for standard requests but you would be able to use it for custom requests and the transaction sequences would be the same as on EP0.
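For illustration, a device-specific request over EP0 with libusb-1.0 might look like the sketch below; the VID/PID and bRequest values are made up. It's worth noting that libusb's synchronous control-transfer API doesn't even take an endpoint argument; it always targets the default pipe, which lines up with the host-libusb behavior described in the question.

    /* Build with: gcc demo.c $(pkg-config --cflags --libs libusb-1.0) */
    #include <stdio.h>
    #include <libusb.h>

    int main(void)
    {
        libusb_context *ctx = NULL;
        if (libusb_init(&ctx) != 0)
            return 1;

        /* Hypothetical vendor/product IDs. */
        libusb_device_handle *dev = libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
        if (!dev) {
            libusb_exit(ctx);
            return 1;
        }

        unsigned char buf[8];
        /* bmRequestType = IN | Vendor | Device: a device-specific request,
         * not a standard one, yet it still uses the normal setup/data/status
         * stages of a control transfer on EP0. bRequest 0x42 is made up. */
        int n = libusb_control_transfer(dev,
                LIBUSB_ENDPOINT_IN | LIBUSB_REQUEST_TYPE_VENDOR | LIBUSB_RECIPIENT_DEVICE,
                0x42,   /* bRequest (device-specific) */
                0x0000, /* wValue */
                0x0000, /* wIndex */
                buf, sizeof buf,
                1000 /* timeout, ms */);

        if (n >= 0)
            printf("read %d bytes from vendor request\n", n);

        libusb_close(dev);
        libusb_exit(ctx);
        return 0;
    }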

Does DDS have a Broker?

I've been trying to read up on the DDS standard, and OpenSplice in particular, and I'm left wondering about the architecture.
Does DDS require that a broker be running, or any particular daemon to manage message exchange and coordination between different parties?
If I just launch a single process publishing data for a topic, and launch another process subscribing for the same topic, is this sufficient? Is there any reason one might need another process running?
In the alternative, does it use UDP multicasting to have some sort of automated discovery between publishers and subscribers?
In general, I'm trying to contrast this to traditional queue architectures such as MQ Series or EMS.
I'd really appreciate it if anybody could help shed some light on this.
Thanks,
Faheem
DDS doesn't have a central broker; it uses a multicast-based discovery protocol. OpenSplice has a model with a service on each node, but that is an implementation detail; if you check RTI DDS, for example, they don't have that.
The DDS specification is designed so that implementations are not required to have any central daemons, but of course that is an implementation choice.
Implementations like RTI DDS, MilSOFT DDS and CoreDX DDS have decentralized architectures, which are peer-to-peer and do not need any daemons. (Discovery is done with multicast on LANs.) This design has many advantages, like fault tolerance, low latency and good scalability. It also makes the middleware really easy to use, since there's no need to administer daemons. You just run the publishers and subscribers and the rest is handled automatically by DDS.
OpenSplice DDS used to require daemon services running on each node, but they added a new feature in v6 so that you don't need the daemons anymore. (They still support the daemon option.)
OpenDDS is also peer-to-peer, but it needs a central daemon running for discovery as far as I know.
I think it's indeed good to differentiate between a 'centralized broker' architecture (where that broker could be/become a single point of failure) and a service/daemon on each machine that manages the traffic flows based on DDS QoS policies such as importance (DDS: transport-priority) and urgency (DDS: latency-budget).
It's interesting to notice that most people think it's absolutely necessary to have a (real-time) process scheduler on a machine to manage the CPU as a critical/shared resource (based on timeslicing, priority classes, etc.), yet when it comes to DDS, which is all about distributing information (rather than processing application code), people often find it 'strange' that a 'network scheduler' would come in handy (to say the least) to manage the network interface as a shared resource and schedule traffic (based on QoS-policy-driven 'packing' and the utilization of multiple traffic-shaped priority lanes).
And this is exactly what OpenSplice does when utilizing its (optional) federated-architecture mode, where multiple applications running on a single machine share data using a shared-memory segment, and where there is a networking service (daemon) for each physical network interface that schedules the in- and outbound traffic based on the actual QoS policies w.r.t. urgency and importance. The fact that such a service has 'access' to all nodal information also facilitates combining samples of different topics from different applications into (potentially large) UDP frames, maybe even exploiting some of the available latency budget for this 'packing', and thus allows a proper balance between efficiency (throughput) and determinism (latency/jitter). End-to-end determinism is further facilitated by scheduling the traffic over pre-configured traffic-shaped 'priority lanes' with 'private' Rx/Tx threads and DiffServ settings.
So having a network-scheduling daemon per node certainly has some advantages, also as it decouples the network from faulty applications that could be either 'over-productive' (blowing up the system) or 'under-reactive' (causing system-wide retransmissions). That aspect is often forgotten when arguing that a 'network-scheduling daemon' could be viewed as a 'single point of failure'; the other view could be that without any arbitration, any 'standalone' application that talks directly to the wire could be viewed as a potential system threat when it starts misbehaving, as described above, for ANY reason.
Anyhow, it's always a controversial discussion; that's why OpenSplice DDS (as of v6) supports both deployment modes: federated and non-federated (also called 'standalone' or 'single process').
Hope this is somewhat helpful.

Verify WCF interface is the same between client and server applications

We've got a Windows service that is connected to various client applications via a duplex WCF channel. The client and server applications are installed on different machines, in different locations, potentially at widely different times, and by different people. In addition, the client can be pointed at a different machine running the same Windows service at startup.
Going forward, we know that the interface between the client and the server applications will likely evolve. The application in the field will be administered by local IT personnel, and we have no real control over what version of either of these applications will be installed when/where, or which will be connecting to the other. Since these are installed at various physical locations and by different people, there's a high likelihood that either the client or server application will be out of date compared to the other.
Since we can't control what versions of the applications in the field are trying to connect to each other, I'd like to be able to verify that the contracts between the client application and the server application are compatible.
Some things I'm looking for (may not be able to realistically get them all):
I don't think I care if the server's interface is newer or older, as long as the server's interface is a super-set of the client's
I want to use something other than an "interface version number". Any developer-kept version number will eventually be forgotten about or missed.
I'd like to use a computed interface comparison if that's possible
How can I do this? Any ideas on how to go about this would be greatly appreciated.
Seems like this is a case of designing your service for versioning. WCF has very good versioning capabilities and extension points. Here are a couple of good MSDN articles on versioning the service contract and, more specifically, the data contracts. For backward- and 'forward'-compatible versioning, look at this article on using the IExtensibleDataObject interface.
If the server's endpoint has metadata publishing enabled, you can programmatically inspect an endpoint's interface by using the MetadataResolver class. This class lets you retrieve the metadata from the server endpoint, and in your case, you would be interested in the ContractDescription which contains the list of all operations. You could then compare the list of operations to your client proxy's endpoint operations.
Of course, the comparison of the operation lists would now need to be implemented; you could simply compare the operation names and fail if one of the client's operations is not found within the server's operations. This would not necessarily cover all incompatibilities, e.g. request/response schema changes.
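For illustration, an untested sketch of that comparison might look like this; the contract type and MEX address are placeholders, and only operation names are compared:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.ServiceModel;
    using System.ServiceModel.Description;

    public static class ContractChecker
    {
        // Returns true if some endpoint resolved from the server's metadata
        // exposes every operation name the client-side contract declares.
        public static bool ServerSupportsClientContract(Type clientContract, string mexAddress)
        {
            // Pull the server's metadata from its MEX endpoint.
            ServiceEndpointCollection endpoints = MetadataResolver.Resolve(
                clientContract, new EndpointAddress(mexAddress));

            // Operation names the client proxy expects.
            var clientOps = ContractDescription.GetContract(clientContract)
                .Operations.Select(op => op.Name).ToList();

            return endpoints.Any(ep =>
            {
                var serverOps = new HashSet<string>(
                    ep.Contract.Operations.Select(op => op.Name));
                return clientOps.All(serverOps.Contains);
            });
        }
    }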
I have not tried implementing any of this by the way, so it's more of a theoretical view of your problem. If you don't want to fiddle with the framework, you could implement a custom operation that would return the list of operation names. This would be of minimal effort but is less standards-compliant.