Difference between Mule Connectors and Transports

I'm trying to evaluate the set of out-of-the-box transports provided by Mule, and compare it with the offerings from e.g. ServiceMix and OpenESB.
On Mule's homepage, I find a list of supported transports at:
http://www.mulesoft.org/documentation/display/current/Transports+Reference
However, I also find a list of Connectors at:
http://www.mulesoft.org/connectors
There seems to be at least some overlap between these lists, but some technologies are listed as transports and not as connectors; for example, there is a Quartz transport, but no Quartz connector.
So the question is: What exactly is the difference between a Mule Transport and a Mule Connector, and why is e.g. Quartz a transport and not a connector?

Transports are targeted at a way of transporting data, i.e. a protocol such as HTTP or reading/writing files. These are general-purpose mechanisms: the other party behind such a data channel can be anything, from a pure data sink to a party with whom data is exchanged, whether inside your own company or elsewhere.
Connectors are made for using specific APIs, e.g. those of salesforce.com or Facebook. Usually, choosing a connector also determines how the data will ultimately be transferred, e.g. over HTTP.
From mulesoft.org:
Connectors function like endpoints by sending and receiving data over a transport. However, while endpoints are generic for a widely-used protocol (such as JDBC, FTP, HTTP, POP3, etc.), each connector is built to optimize the connection with a specific third-party API, such as Salesforce or Twitter.
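To make the distinction concrete, here is a minimal Mule 3-style sketch (flow names, ports and the Salesforce configuration are invented for illustration): the HTTP inbound endpoint is provided by the generic HTTP transport and speaks a protocol any client can use, while sfdc:query is an operation of the Salesforce connector and only makes sense against that specific API.

    <!-- Transport: a generic HTTP endpoint, any HTTP client can call it -->
    <flow name="httpTransportExample">
        <http:inbound-endpoint host="localhost" port="8081" path="ping"
                               exchange-pattern="request-response"/>
        <set-payload value="pong"/>
    </flow>

    <!-- Connector: an operation tied to the Salesforce API -->
    <sfdc:config name="Salesforce" username="${sfdc.user}"
                 password="${sfdc.password}" securityToken="${sfdc.token}"/>
    <flow name="salesforceConnectorExample">
        <http:inbound-endpoint host="localhost" port="8082" path="accounts"
                               exchange-pattern="request-response"/>
        <sfdc:query config-ref="Salesforce"
                    query="SELECT Id, Name FROM Account LIMIT 10"/>
    </flow>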

Endpoints (inbound or outbound) in Mule make use of transports to carry messages from application to application within the Mule framework. Transports implement message channels and provide consistent connectivity to an underlying data source or message channel. Whenever there is a message source in Mule, there is a corresponding transport working in the background to establish and maintain communication. For example, the HTTP transport handles messages sent to an HTTP endpoint in Mule via the HTTP protocol.
The heart of a transport is its connector, which maintains the configuration and state for the transport. In other words, connectors contain nearly all the connectivity details that Mule needs to actually connect with another system or application.
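As a rough sketch of that relationship (the broker URL and queue name below are placeholders): the connector element holds the connection details and state for the transport, while the endpoint that references it only describes where messages enter the flow.

    <!-- the connector maintains the connection configuration and state -->
    <jms:activemq-connector name="Amq" brokerURL="tcp://localhost:61616"/>

    <flow name="orderConsumer">
        <!-- the endpoint reuses the connection maintained by the connector -->
        <jms:inbound-endpoint queue="orders" connector-ref="Amq"/>
        <logger level="INFO" message="#[payload]"/>
    </flow>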

Related

Which option is more suitable for microservices? gRPC or message brokers like RabbitMQ

I want to develop a project with a microservice structure.
I have to use PHP/Laravel and Node.js/NestJS.
What is the best communication method between my microservices? I have read about RabbitMQ and NATS messaging, and also about gRPC.
Which option is more suitable for microservices, and why?
Thanks in advance.
The technologies address different needs.
gRPC is a mechanism by which a client invokes methods on a remote (although it needn't be remote) server. The client is tightly coupled (often through load balancers) with the servers that implement the methods.
E.g. I (client) call Starbucks (service) and order (method) a coffee.
gRPC is an alternative to REST, GraphQL, and other mechanisms used to connect clients with servers through some form of API.
Message brokers (e.g. NATS, RabbitMQ) provide a higher-level abstraction in which a client sends messages to an intermediate service called a broker (this could itself be done using gRPC), and the broker may queue messages and either ship them directly to services (push) or wait for a service to check its subscription (pull).
E.g. I (client) post a classified ad on some site (broker). Multiple people (subscribers) may see my ad and offer to buy (method) the items from me. Some software robot may subscribe too and contact me, offering to transport or insure the things I'm selling. Someone else may be monitoring sales of widgets on the site in order to determine whether there's a market for opening a store to sell those widgets, and so on.
With the broker, the client may never know which servers implement the functionality (and vice versa). This is a loosely-coupled mechanism in which services may be added and removed independently of the client.
If you need a synchronous response to a 1:1 service call, use gRPC.
If you don't care which service will consume your messages (asynchronous communication, no tight coupling between services), use RabbitMQ.
If you need a distributed system that keeps an event history so it can be reused later by another service, use Kafka.
Basically, it comes down to whether you want asynchronous communication between services or not.
That is how you decide between real-time, synchronous options such as gRPC or plain RPC and asynchronous message-queueing options such as RabbitMQ, Kafka or Amazon SQS.
Here are also some good answers by other users:
https://dev.to/hypedvibe_7/what-is-the-purpose-of-using-grpc-and-rabbitmq-in-microservices-c4i#comment-1d43
https://stackoverflow.com/a/63420930/9403963

ServiceStack Messaging API: Can it make a broadcast?

As I have previously mentioned, I am using the ServiceStack Messaging API (IMessageQueueClient.Publish) as well as the lower-level IRedisClient.PublishMessage.
I use the Messaging API when I need a specific message/request to be processed by only one instance of a module/service, so even though I might have several modules running that all listen for MyRequest, only one service receives the message and processes it.
I use IRedisClient.PublishMessage when I do a broadcast, i.e. a pub/sub situation, sending a request that should be received by everyone listening on that specific Redis channel.
However, I am now in a situation where it would be useful to use the Messaging API but do a broadcast, so that all instances listening for a specific message type get the message, not just one.
(The reason for this is to streamline our usage of Redis and how we subscribe to events/requests, but I will not get into details about this now. A little more background on this is here.)
Is there a "broadcast way" for the Messaging API?
No, the purpose of ServiceStack Messaging is simply to invoke ServiceStack Services via MQ. Any other MQ features are outside the purpose & scope of ServiceStack MQ; you'd need to develop against the MQ provider's APIs directly to access their broadcast features.
Server Events is a ServiceStack feature that supports broadcasting messages to subscribers of user-defined channels, but it's a completely different implementation that serves a different use case: sending "server push" real-time events over HTTP or gRPC. It doesn't use MQ brokers, and pub/sub messages aren't persistent (i.e. only the subscribers connected at the time a message is sent will receive it).

Mule: The difference between the Web Service Consumer and SOAP Connect

Why do we have the SOAP Connect option when creating a connector when we already have the Web Service Consumer connector? We can configure a WSDL with the Web Service Consumer and access a web service. What is the difference between the two options?
"Why do we have SOAP connect option" because MuleSoft want to provide a method for ISV to provide connectors to new and existing endpoints without Mulesoft themselves having to create them. Mulesoft Anypoint Platform success is built on the premise of connecting to anything and therefore SOAP Connect helps this.
Secondly connecting to a using WSDL location for consuming a soap web service involves a developer to know the service pretty well and therefore allowing error and interpretation errors but if you internally create a connector you can reduce implementation time and errors.
Thirdly on WSDL there are often many methods not applicable or and an enterprise does not want to consume and therefore a connector can filter these methods.
Connectors = Re-Use
Web Service Consumer connector = manual process
The Web Service Consumer is an existing connector that you can configure to point at a WSDL location in order to consume a SOAP web service. SOAP Connect is a DevKit wizard that creates an Anypoint Connector for a specific service, and that connector can expose multiple WSDLs of the service.
With the Web Service Consumer we have to call each API separately, in separate flows. With SOAP Connect, you can package multiple WSDL files and API versions into a single connector, making the process of creating, maintaining and using a connector for SOAP APIs much faster and easier.
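For reference, a hand-written Web Service Consumer setup in Mule 3 looks roughly like this (the WSDL location, service, port and operation names are invented): each operation has to be wired into its own flow, which is exactly the manual work a generated SOAP Connect connector packages up for you.

    <ws:consumer-config name="OrderService"
                        wsdlLocation="http://example.com/orders?wsdl"
                        service="OrderService" port="OrderPort"
                        serviceAddress="http://example.com/orders"/>

    <flow name="getOrderStatusFlow">
        <http:inbound-endpoint host="localhost" port="8081" path="status"
                               exchange-pattern="request-response"/>
        <!-- one operation per call; other operations need their own flows -->
        <ws:consumer config-ref="OrderService" operation="GetOrderStatus"/>
    </flow>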

Difference Between Endpoint and Connector in Mule

What is the difference between an Endpoint and a Connector in Mule? For example, there is an HTTP endpoint and there is an HTTP connector. Are these terms used interchangeably?
Connectors:
- provide an abstraction layer over data transport mechanisms
- are Mule-specific
- provide connections to external resources (protocols, databases, third-party APIs)
- are either operation-based or endpoint-based
Endpoints:
- are flow-level elements
- represent a specific usage of a connector
- are created when you drag a connector from the palette onto the canvas
- any connector can function as an endpoint
Endpoints are a type of connector. Connectors in Mule are either endpoint-based or operation-based. Endpoint-based connectors follow either a one-way or request-response exchange pattern and are often (but not always) named and based around a standard data communication protocol, such as FTP, JMS, and SMTP. Operation-based connectors follow an information exchange pattern based on the operation that you select and are often (but not always) named and based around one or more specific third-party APIs.
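As an illustration of the endpoint-based style and its exchange patterns (paths and ports are placeholders), compare a one-way file endpoint, which only picks messages up, with a request-response HTTP endpoint, which returns a reply to the caller:

    <!-- endpoint-based, one-way: files are picked up, nothing is returned -->
    <flow name="filePickup">
        <file:inbound-endpoint path="/var/incoming" pollingFrequency="1000"/>
        <logger level="INFO"
                message="#[message.inboundProperties['originalFilename']]"/>
    </flow>

    <!-- endpoint-based, request-response: the caller waits for a reply -->
    <flow name="httpEcho">
        <http:inbound-endpoint host="localhost" port="8081" path="echo"
                               exchange-pattern="request-response"/>
        <echo-component/>
    </flow>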
You shouldn't be confused about this; it's covered in the Mule documentation:
https://docs.mulesoft.com/mule-fundamentals/v/3.7/mule-connectors
https://docs.mulesoft.com/mule-user-guide/v/3.7/endpoint-configuration-reference

Mule API - deploy to a Mule Runtime

I am experimenting with Mule API management these days. What I have come to understand is that we can deploy our API to one of these:
- A Mule Runtime
- An API Gateway
In the documentation, it is said that we should go with option 1 when we want to separate out the implementation of the API from the orchestration. What does that mean?
Can anyone please explain in detail?
Policy management from API Platform and analytics generation can be achieved only by using a correctly configured API Gateway, which is a superset of Mule EE (current version is API Gateway 2.1.0 which contains Mule EE 3.7.2).
Depending on your architecture you may have different solutions.
For example:
- Proxy running on the API Gateway, implementation API running somewhere else (e.g. Mule EE/CE, Tomcat, a COBOL server, etc.)
- Proxy and implementation API running on the same API Gateway
- Implementation API managed directly from the API Platform without using the autogenerated proxies
HTH :-)
Not exactly sure what they mean there, because on this page: https://developer.mulesoft.com/docs/display/current/API+Gateway they also mention this:
Note that the API Gateway, because it acts as an orchestration layer for services and APIs implemented elsewhere, is technology-agnostic. You can proxy non-Mule services or APIs of any kind, as long as they expose HTTP/HTTPS, VM, Jetty, or APIkit Router endpoints. You can also proxy APIs that you design and build with API Designer and APIkit to the API Gateway to separate the orchestration from the implementation of those APIs.
So both methods technically allow you to separate the API from the orchestration, since your API Gateway application could simply proxy another Mule application elsewhere that performs the orchestration. But my understanding of the two options is:
The API Gateway is a limited offering that lets you use a subset of Mule's connectors, transports and modules, such as APIkit and HTTP. It allows you to expose an API and then use HTTP to connect to whatever backend systems you want, acting as a proxy and performing the orchestration in the API layer.
The Mule runtime option gives you much more flexibility: it allows you to compose as many applications as you want using the full range of connectors, and to separate the different aspects of your applications into as many layers as you want, as separately deployable entities that you can deploy to on-premise standalone instances, CloudHub, and so on.
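A minimal proxy application of the kind described above might look like this in Mule 3.6+ HTTP connector syntax (host names, ports and paths are placeholders): the gateway flow simply forwards incoming requests to the implementation API deployed elsewhere, so the orchestration can live in that separate application.

    <http:listener-config name="proxyListener" host="0.0.0.0" port="8081"/>
    <http:request-config name="backendApi" host="orders.internal" port="8091"/>

    <flow name="apiProxyFlow">
        <http:listener config-ref="proxyListener" path="/api/*"/>
        <!-- forward the incoming request to the implementation API -->
        <http:request config-ref="backendApi"
                      path="#[message.inboundProperties['http.request.path']]"
                      method="#[message.inboundProperties['http.method']]"/>
    </flow>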
@Ryan's answer is more or less on the mark; however, if you do choose the Mule ESB offering you will lose out on the API management and governance functionality that the API Gateway provides out of the box.
These include:
- Lets you enforce runtime policies and collect data for analytics
- Applies policies to APIs or endpoints around security, throttling, rate limiting, and more
- Extends PingFederate to serve as identity management and OAuth provider for your APIs
- Lets you require or restrict certain behaviors in a few simple steps
- Lets you add or remove policies at runtime with no API downtime
- Manages access to your API by issuing contract keys
- Monitors the API to confirm it is meeting all contract terms
- Ensures compliance with service level agreements (SLAs)
In my opinion, go with the API Gateway/Manager if your API will be consumed by third-party developers with whom you might not have many interactions (think public APIs); otherwise Mule ESB should be good.
You should also be able to migrate from Mule ESB to API Manager (and vice versa) fairly easily if you need to, so I do not think you will get locked into your decision.
PS: Content copied from here