I'm trying to send a message to the WCF-WSHttp adapter via a dynamic send port from an orchestration, but BizTalk always seems to revert back to the HTTP adapter.
According to the docs I've been able to find, I should just need to set the transport type from my Expression shape to get BizTalk to use the WCF-WSHttp adapter, and I am, but it still seems to revert. Below is an example of the Expression shape that sets the properties (as you can see, I've tried both Microsoft.XLANGs.BaseTypes.TransportType and BTS.OutboundTransportType):
Body(BTS.OutboundTransportType) = "WCF-WSHttp";
SendMessagePort(Microsoft.XLANGs.BaseTypes.Address) = System.String.Format("{0}/Accept{1}", "http://myserver/myservice/myservice.svc/Accept{0}", messageInfo.MessageType);
SendMessagePort(Microsoft.XLANGs.BaseTypes.TransportType) = "WCF-WSHttp";
Probably are Craig :-)
When using a dynamic send port, BizTalk uses the "scheme" part of the URL to decide which adapter to use.
When your URL starts with "http://" or "https://", BizTalk will always use the HTTP adapter.
Similarly, URLs beginning with ftp:// will use the FTP adapter.
The same works for custom adapters as well - when you install the adapter's configuration you register the moniker to use; for example, the open source Scheduled Task adapter uses schedule:// (I believe).
Using dynamic send ports with WCF is slightly more involved than with most other adapters because of the various configuration that's required, but you can find a detailed explanation here - just scroll down to the "Dynamic Send Ports" section about half way down.
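To give an idea of what that configuration looks like, here is a minimal sketch for an Expression/Message Assignment shape driving a dynamic WCF-WSHttp send port. The port name, message name, URL and SOAP action below are placeholders, and the exact set of WCF.* context properties you need depends on your binding and security configuration:

// DynSendPort and OutboundMsg are placeholder names for the dynamic logical port and the message being sent
DynSendPort(Microsoft.XLANGs.BaseTypes.Address) = "http://myserver/myservice/myservice.svc";
DynSendPort(Microsoft.XLANGs.BaseTypes.TransportType) = "WCF-WSHttp";
OutboundMsg(WCF.Action) = "http://tempuri.org/IMyService/Accept";
OutboundMsg(WCF.BindingType) = "wsHttpBinding";
OutboundMsg(WCF.SecurityMode) = "None";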
I ended up resolving my issue, but am still unsure of the reasoning for the behavior I saw.
The Expression shape mentioned in the question was located inside an Atomic Scope. Once the orchestration exited the scope containing the Expression shape, the transport type was reset back to its original value. Moving the Expression shape out of the atomic scope resolved the issue, and the TransportType was then set correctly.
Let me get straight into this.
I have set up NiFi on localhost. It's working well and everything seems to be fine.
I have made many different flows within the cluster, each starting with its own header (a HandleHttpRequest processor), as below.
[Screenshot: Cluster]
When I right-click the header, go to "View configuration" and then to "Properties", I see the following.
[Screenshot: Processor details]
You can see the "Listening Port", which is 10004, as well as a "Hostname", and then there is the "Allowed path" property.
Now, if I want to access this specific header, I have to hit 10.0.0.18:10004/spec/transform.
The issue is that I have many different headers, each with a different listening port assigned by me. NiFi is not letting me assign the same port to every flow I make; I have to assign a different port every time I create a new flow. I just want to assign port 10004 to every flow and differentiate them using the "Allowed path".
How can I make this possible? Right now I always have to assign a new port to every new flow. Is there a way to do this? I hope you understand what I am trying to achieve, and I hope to have your answers soon.
Thank you
You can have one HandleHttpRequest at the beginning of your flow listening on port 10004, and set the "Allowed Paths" property to a regular expression that matches all of the paths you want to support. HandleHttpRequest will add the path as an attribute to each flow file named "http.context.path", so you could then use a RouteOnAttribute to route each path to a different part of the flow.
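For example (the extra path names here are purely illustrative), a single HandleHttpRequest could be configured as:

HandleHttpRequest
    Listening Port : 10004
    Allowed Paths  : /(spec|orders|reports)/.*

and a RouteOnAttribute processor after it could have one dynamic property per branch, such as:

spec    : ${http.context.path:startsWith('/spec')}
orders  : ${http.context.path:startsWith('/orders')}
reports : ${http.context.path:startsWith('/reports')}

Each flow file is then routed to whichever relationship's expression matches its path.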
As Bryan Bende said, but note that in NiFi 1.14.0 the attribute is http.request.uri.
In the past I have been able to apply advice chain handlers to different outbound channel adapters. I am trying to do the same with int-aws:s3-outbound-channel-adapter, but it's not allowing that. Does this component not allow this behavior? Basically I am interested in finding out when the adapter completes the upload of a file to S3.
<int-aws:s3-outbound-channel-adapter
        id="s3-outbound" channel="files" bucket="${s3.bucket}"
        multipart-upload-threshold="5192" remote-directory="${s3.remote.dir}"
        accessKey="${accessKey}" secretKey="${secretKey}">
    <!-- THIS DOESN'T WORK - it throws an error! -->
    <int:request-handler-advice-chain>
    </int:request-handler-advice-chain>
</int-aws:s3-outbound-channel-adapter>
Right, that isn't allowed by the XSD. Feel free to raise a JIRA on the matter.
But that doesn't mean it can't be done at all.
If you are on Spring Integration 4.x already, you can move that <int-aws:s3-outbound-channel-adapter> to Java & annotation configuration using @Bean and @ServiceActivator for the AmazonS3MessageHandler.
@ServiceActivator has an adviceChain attribute to specify bean references to your advices.
... or you can do that using a generic <int:outbound-channel-adapter> and specify the AmazonS3MessageHandler as a raw <bean> for the ref of that adapter.
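For instance, here is a rough sketch of an advice that would tell you when the upload has finished; the class name and log line are placeholders, and the exact doInvoke signature can vary slightly between Spring Integration versions:

import org.springframework.integration.handler.advice.AbstractRequestHandlerAdvice;
import org.springframework.messaging.Message;

// Runs around the S3 message handler call, so the log line fires only after the upload completes.
public class S3UploadCompleteAdvice extends AbstractRequestHandlerAdvice {

    @Override
    protected Object doInvoke(ExecutionCallback callback, Object target, Message<?> message) {
        Object result = callback.execute();  // the actual upload to S3 happens inside this call
        System.out.println("S3 upload finished for message " + message.getHeaders().getId());
        return result;
    }
}

You would then declare it as a bean and reference it from the adviceChain attribute of your @ServiceActivator (or from the <int:request-handler-advice-chain> of the generic adapter).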
HTH
Is it possible to write a plugin for Glimpse's existing SQL tab?
I'm trying to log my SQL queries and the currently available extensions don't support our in-house SQL library. I have written a custom plugin which logs what I want, but it has limited functionality and it doesn't integrate with the existing SQL tab.
Currently, I'm logging to my custom plugin using a single helper method inside my DAL's base class. This function takes the SqlCommand and the duration in order to show data on my custom tab:
// simplified example:
Stopwatch sw = Stopwatch.StartNew();
sqlCommand.Connection = sqlConnection;
sqlConnection.Open();
object result = sqlCommand.ExecuteScalar();
sqlConnection.Close();
sw.Stop();
long duration = sw.ElapsedMilliseconds;
LogSqlActivity(sqlCommand, null, duration);
This works well on my 'custom' tab but unfortunately means I don't get metrics shown on Glimpse's HUD:
Is there a way I can provide Glimpse directly with the info it needs (in terms of method names, and parameters) so it displays natively on the SQL tab?
The following advice is based on the fact that you can't use DbProviderFactory and you can't use a proxied SqlCommand, etc.
The data that appears in the "out-of-the-box" SQL tab is based on messages of given types being published through our internal Message Broker (see below for information on this). Because of the above limitations in your case, to get things lighting up correctly (i.e. your data showing up in the HUD and the SQL tab), you will need to simulate the work that we do under the covers when we publish these messages. This shouldn't be that difficult and, once done, should just work moving forward.
If you have a look at the various proxies we have here you will be able to see which messages we publish in which circumstances. Here are some highlights:
DbCommand:
Log command start - here
Log command error - here
Log command end - here
DbConnection:
Log connection open - here
Log connection closed - here
DbTransaction:
Log started - here
Log committed - here
Log rollback - here
Other:
Command row count - here - Glimpse calculates this at the DbDataReader level, but you could do it elsewhere as well
Now that you have an idea of what messages we are expecting and how we generate them, as long as you pass in the right data when you publish those messages, everything should just light up - if you are interested here is the code that looks for the messages that you will be publishing.
Message Broker: If you look at the GlimpseConfiguration here you will see how to access the broker. This can be done statically if needed (as we do here). From here you can publish the messages you need.
Helpers: For generating some of the above messages, you can use the helpers inside the Support class here. I would have shifted all the code for publishing the actual messages to this class, but I didn't think there would be too many people doing what you are doing.
Update 1
Starting point: With the above approach you shouldn't need to write your own plugin. You should just be able to access the broker via GlimpseConfiguration.GetConfiguredMessageBroker() (make sure you check whether it's null, which it will be if Glimpse is turned off, etc.) and publish your messages.
I would imagine that you would put the inspection code that leverages the broker and publishes the messages wherever you have knowledge of the information that needs to be collected (i.e. inside your custom lib). Normally this would require references to Glimpse inside your lib (which you may not want), so to protect against this, from your lib you would call a proxy (which could be another VS project) that has the Glimpse dependency. Hence your ADO lib only has references to your own code.
To get your toes wet, try just publishing a couple of fake connection and command messages. Assuming the broker you get from GlimpseConfiguration.GetConfiguredMessageBroker() isn't null, these should just show up. Then you can work towards getting real data into it from your lib.
Update 2
Obsolete Broker Access
It's marked as obsolete because it's going to change in v2. You will still be able to do what you need to do, but the way of accessing the broker has changed. For what you currently need to do, this is OK.
Sometimes null
As you have found, this really depends on where you currently are in the page lifecycle. To get around this, I would probably change my original recommendation a little.
In the code where you are currently creating messages and pushing them to the message bus, try putting them into HttpContext.Current.Items instead. If you haven't used it before, this is a store that ASP.NET provides out of the box and that lasts for the lifetime of a given request. You could keep a list in there: still create the message objects as you do now, but add them to that list instead of pushing them through the broker.
Then create an HttpModule (it's really simple to do) which taps into the PostLogRequest event. Within this handler, you would pull the list out of the context, iterate through it and push the messages into the message broker (accessed the same way you have been).
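Roughly like this (a sketch only: the module name, the Items key and the namespaces for GlimpseConfiguration/IMessageBroker are assumptions and may differ by Glimpse version; the Publish calls are whatever you are already doing today):

using System;
using System.Collections.Generic;
using System.Web;
using Glimpse.Core.Extensibility;   // IMessageBroker - namespace is an assumption
using Glimpse.Core.Framework;       // GlimpseConfiguration - likewise an assumption

// Defers publishing until PostLogRequest, when the broker is reliably available.
public class DeferredGlimpsePublishModule : IHttpModule
{
    // Illustrative key - use the same one when you stash messages from your DAL.
    public const string ItemsKey = "PendingGlimpseMessages";

    public void Init(HttpApplication context)
    {
        context.PostLogRequest += (sender, e) =>
        {
            var pending = HttpContext.Current.Items[ItemsKey] as List<Action<IMessageBroker>>;
            var broker = GlimpseConfiguration.GetConfiguredMessageBroker();
            if (pending == null || broker == null)
            {
                return; // nothing collected, or Glimpse is turned off
            }
            foreach (var publish in pending)
            {
                publish(broker); // each entry was added as e.g. b => b.Publish(myCommandMessage)
            }
        };
    }

    public void Dispose() { }
}

Earlier in the request, your DAL helper would create that list (if it isn't there yet) and add entries like b => b.Publish(...) instead of publishing directly.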
I'm developing a simple HTTPS proxy (written in Python) which receives POST/GET requests/responses, applies some transformation and finally forwards the result to the recipient.
I need to handle chunked-encoded requests/responses in a "streaming" fashion, meaning that as soon as a chunk is received the proxy transforms it and forwards it to the recipient.
Before deciding to support chunked-encoded requests, I was using mitmproxy (http://mitmproxy.org/) and it worked perfectly. Unfortunately, I noticed that it waits until the entire body is received before letting me handle the response/request.
How can I implement a proxy supporting chunked-encoded requests/responses? Has anyone of you ever done something like this?
Thanks
EDIT: MORE INFO ON MY USE CASE
I need to handle POST requests and GET responses.
In the POST request I receive a JSON object and I have to encrypt some of its values.
In the GET response I receive a JSON object and I have to decrypt some of its values.
Till now, the following code has worked perfectly:
def handle_request(self, r):
    if r.method == 'POST':
        # encrypt some values of r.get_form_urlencoded() here
        pass

def handle_response(self, r):
    if r.request.method == 'GET':
        # decrypt some values of r.content here
        pass
How can I do the same thing with single chunks?
EDIT: UPDATES
After evaluating different solutions, I decided to go for Squid (proxy) + ICAP (content adaptation).
I've successfully configured Squid and the performance is just great. Unfortunately, I can't find a suitable ICAP server (in Python, if possible) for doing content adaptation (modification). I thought this one https://github.com/netom/pyicap could do the job, but it looks like it doesn't read the body of my POST requests.
Do you guys know a Python ICAP server that I can use together with Squid?
Thanks
The answer below is outdated. You can now pass --stream to mitmproxy, whose behaviour is explained in the mitmproxy documentation.
mitmproxy developer here. This is definitely a feature we want for mitmproxy as well, but it's not that trivial and probably not coming very soon. If you really want to implement that yourself, I can recommend two things:
If you have a very specific use case, you can employ libmproxy.protocol.http.HTTPRequest.from_stream for parsing the header and do the body processing yourself.
If you do not want to modify the request/response body, you may find it sufficient to modify mitmproxy itself. In a nutshell, you would need to read the request/response without content (see 1.), modify it to your needs, pass it to the server and then delegate control to libmproxy.protocol.tcp (see https://github.com/mitmproxy/mitmproxy/blob/master/libmproxy/proxy/server.py#L169)
If you have further questions, don't hesitate to ask here or on mitmproxy's IRC channel.
Re Comment #1:
You can't take too much out of mitmproxy, but at least you get to delegate the header parsing & processing.
# ...accept request, socket.makefile() etc...
req = HTTPRequest.from_stream(client_conn.rfile, include_content=False)
# manually forward to the server (req._assemble_head())
# manually receive the request body chunk by chunk and forward it to the server, see
# https://github.com/mitmproxy/netlib/blob/master/netlib/http.py#L98
resp = HTTPResponse.from_stream(server_conn.rfile, include_content=False)
# manually forward headers
# manually process body and forward
That being said, this is a fairly complex topic. Eventually, you're better off hacking that directly into libmproxy.protocol.http.HTTPHandler.
Another option, depending on your use case again: Use mitmproxy, set the conntype to tcp and forward traffic as-is and use regex replacements on the content in libmproxy.protocol.tcp . Probably the easiest way, but the most hacky one.
If you can provide some context, I may guide you further in the right direction.
Re Comment #2:
Before we get to the main part: JSON is a really bad choice for streaming/chunking as long as you don't want to encrypt the complete JSON object and treat it as a single string. You should definitely consider something like tnetstrings if you only want to encrypt parts.
Apart from that, hooking into read_chunk works, but first you need to get to the point where you can actually receive chunks over the line. Then, it's as simple as reading the single chunks, encrypting them and forwarding them.
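To give an idea of what that last step looks like on the wire, here is a minimal, library-free sketch of relaying a chunked body chunk by chunk; rfile/wfile stand for file-like socket objects and transform is a placeholder for your encryption/decryption. It ignores trailers for brevity and does not use netlib's helpers:

def relay_chunked_body(rfile, wfile, transform):
    # Read chunks from rfile, transform each one, and re-emit them chunk-encoded on wfile.
    while True:
        size_line = rfile.readline()                        # e.g. "1a2f\r\n" (may carry ";ext")
        chunk_size = int(size_line.split(b";")[0].strip(), 16)
        if chunk_size == 0:
            rfile.read(2)                                   # final CRLF (trailers ignored here)
            wfile.write(b"0\r\n\r\n")
            break
        chunk = rfile.read(chunk_size)
        rfile.read(2)                                       # CRLF terminating this chunk
        out = transform(chunk)                              # e.g. encrypt or decrypt the chunk
        wfile.write(b"%x\r\n" % len(out))
        wfile.write(out + b"\r\n")
    wfile.flush()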
So I have an orchestration that successfully does everything I need it to. Now I want to reuse the logic in the orchestration, but with a set of slightly different data sources. Rather than copying and pasting the orchestration into another one and using a decision tree to choose which orchestration to call, I was thinking of making my calls to SQL a little more dynamic.
For example, let's say I have a stored procedure called spGetUSCust. I have coded the orchestration to call the SQL Server via a send/receive port with an operation of GetCust on it. It was generated using the strongly typed method, so the response message is of type spGetUSCustResponse.
I now want to call spGetCACust on the same SQL Server. The responding data is in the exact same format (structure) as the US stored proc, but they have different names.
So my question is, can I do this by setting the action on the message that will be going to the port within the code? Since my response is strongly typed, will it cause a problem that the response will really be coming from the CA procedure and not the US one? If so, how do I solve that? I could go with generic responses, but those return XML "any" fields and I need to map these responses for additional use in the orchestration.
In a Message Assignment shape, you can set the WCF.Action property on the request message like this:
MySQLRequest(WCF.Action)="TypedProcedure/dbo/spGetUSCust";
You can use the same physical port, but you will have to remove any content in the Action Mapping section.
However, because the Request Messages are different types, you will need two Orchestration Ports, one US, one CA.
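To make that concrete, each branch's Message Assignment shape would set the action for its own request message before its Send shape; the message names below are placeholders:

// US branch
USCustRequest(WCF.Action) = "TypedProcedure/dbo/spGetUSCust";
// CA branch
CACustRequest(WCF.Action) = "TypedProcedure/dbo/spGetCACust";

Each typed request then goes out through its own logical orchestration port, as described above.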