TYK Gateway - transform query parameters from camelCase to snake_case - tyk

How can I transform query parameters from camelCase to snake_case in TYK Gateway?
For example,
https://gateway.com?firstName=John&lastName=Doe
to
https://upstream.com?first_name=John&last_name=Doe
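No answer is recorded here. TYK does support custom middleware/plugins for request transformation; the sketch below shows only the key-renaming logic itself in plain Python, a minimal sketch assuming you would wire it into such a plugin yourself (the function names are my own, not part of any TYK API):

```python
import re
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def camel_to_snake(name: str) -> str:
    """firstName -> first_name (insert _ before each interior uppercase letter)."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

def snake_case_query(url: str) -> str:
    """Rewrite every query-parameter key of a URL to snake_case."""
    parts = urlsplit(url)
    params = [(camel_to_snake(k), v) for k, v in parse_qsl(parts.query)]
    return urlunsplit(parts._replace(query=urlencode(params)))

print(snake_case_query("https://gateway.com?firstName=John&lastName=Doe"))
# https://gateway.com?first_name=John&last_name=Doe
```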

Related

Mosaic-Decisions: different types of parameters

What different types of parameters does Mosaic Decisions provide? What is the difference between input, calculated, SQL, and global variables?
Mosaic has two types of parameters:
1. System Parameters - These parameters are auto-generated by Mosaic and are passed to hooks and schedules for further use. Users cannot edit the values of these parameters. The following system parameters are available:
a. lastSuccessfulRunDate
b. lastRunDate
c. instanceId
d. currentTime
e. objectName
f. Username
2. User-defined Parameters - These parameters are defined by a user during the configuration of a flow and can be used in flows with defined or calculated values. User-defined parameters include:
a. Input
b. Calculated
c. SQL
d. Global
Difference between Input, Calculated, SQL and Global -
1. Input Parameter - Input parameters can be used in any flow node, hook, or scheduler, and can be edited and deleted as and when required.
2. Calculated Parameter - Calculated parameters are parameters whose values are computed from an expression the user defines.
3. SQL Parameter - These parameters are used when there is a need to fetch values from records using SQL queries. Mosaic supports databases such as Oracle, SQL, Snowflake, Postgres, and SQL Server.
4. Global Parameter - These parameters are just like input parameters, but unlike input parameters they can be used across the platform. They are defined in the Manager persona and can be added to a flow just like other parameters.
Note - Global parameters cannot be edited through flows. The user has to edit them through Mosaic Manager, and the updated value is reflected in all flows where the parameter is used.
For more details, please refer to the Mosaic user help document at the link below:
https://mosaic.ga.lti-mosaic.com/usermanual/parameter.html
Hope this is helpful :)

Multi-Value Prometheus Query Grafana

I'm using Grafana plus Prometheus queries to create dashboards in Grafana for Kubernetes. I take the names of the nodes (3 in this case) in a variable and then pass these values to another query to extract the IPs of the machines. The values extracted are correct. I have the multi-value option enabled.
The problem comes with the query sum(rate(container_cpu_usage_seconds_total{id="/", instance=~"$ip_test:10250"}[1m])) and more than one IP, because it only takes one of them. It works in another query, but I think that is because the other query does not have the :10250 after the variable.
My question: do you know any way to concatenate all the ip:port pairs? E.g.: X.X.X.X:pppp|X.X.X.X:pppp
Try it like this:
sum(rate(container_cpu_usage_seconds_total{id="/", instance=~"($ip_test):10250"}[1m]))
From the multi-value formatting documentation, Prometheus variables are expanded as a regex:
InfluxDB and Prometheus uses regex expressions, so the same variable
would be interpolated as (host1|host2|host3). Every value would also
be regex escaped if not, a value with a regex control character would
break the regex expression.
Therefore your variable ip_test = ['127.0.0.1', '127.0.0.2',...] is supposed to be transformed into: (127\.0\.0\.1|127\.0\.0\.2).
This means that your expression =~$ip_test:10250 should be transformed into =~"(127\.0\.0\.1|127\.0\.0\.2):10250" so you don't need the multiple expansion you are asking for.
The reason it is not working is that either the documentation is incorrect or there is a bug in Grafana (tested with version v6.7.2). From my tests, I suspect the Prometheus expansion doesn't include the enclosing () and you end up with the expression =~"127\.0\.0\.1|127\.0\.0\.2:10250" - which is not what you want.
The workaround is to use the regex notation =~"${ip_test:regex}:10250".
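The precedence problem behind this can be reproduced with any regex engine. A small Python sketch (the two IPs are placeholders standing in for the expanded $ip_test values) shows why the expansion without enclosing parentheses fails to match:

```python
import re

# Hypothetical expansion of $ip_test for two nodes (placeholder IPs):
broken = r"127\.0\.0\.1|127\.0\.0\.2:10250"    # alternation without grouping
fixed  = r"(127\.0\.0\.1|127\.0\.0\.2):10250"  # with explicit grouping

target = "127.0.0.1:10250"  # Prometheus =~ matching is fully anchored

# | binds looser than concatenation, so `broken` matches either
# "127.0.0.1" or "127.0.0.2:10250" - neither equals the target.
print(bool(re.fullmatch(broken, target)))  # False
print(bool(re.fullmatch(fixed, target)))   # True
```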

Does Mule support "-"(hyphen) in keys?

Does the Mule platform support the "-" (hyphen) in payload keys when accessing them? If it does, can anyone provide an example or sample source? I have a payload key with that special character and Mule fails to reference it directly.
You cannot reference keys with special characters directly, as those characters may be reserved characters or functions. What you should do is wrap the key in single quotes (').
In the example you mentioned, instead of referencing "sample-data" as #[payload.sample-data.test], you should reference it as #[payload.'sample-data'.test].
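As a rough analogy (Python, not Mule syntax), dotted access cannot survive a hyphen in a key, while explicitly quoting the key works; the payload here is made up:

```python
# Made-up payload, standing in for Mule's parsed message payload.
payload = {"sample-data": {"test": "ok"}}

# payload.sample-data.test would be parsed as the subtraction
# (payload.sample) - (data.test); quoting the key, as in Mule's
# #[payload.'sample-data'.test], resolves the ambiguity:
value = payload["sample-data"]["test"]
print(value)  # ok
```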

Fuzzy matching Informatica vs SQL

We are currently debating whether to implement pairwise matching functions in SQL to perform fuzzy matching on invoice reference numbers, or to go down the route of using Informatica.
Informatica is a great solution (so I've heard); however, I'm not familiar with the software.
Has anybody got any experience of its fuzzy match capabilities and the advantages it may offer over building some logic in SQL?
Thanks
The Parser transformation in Informatica can do the job. Reference data objects can be created in Informatica, which are then used to search your given string. The reference data objects are of the following types: Pattern Sets, Probabilistic Models, Reference Tables, Regex, and Token Sets.
Pattern Sets - A pattern set contains the logic to identify data patterns, e.g. separating out initials from a name.
Probabilistic Models - A probabilistic model identifies tokens by the types of information they contain and by their positions in an input string.
A probabilistic model contains the following columns:
An input column that represents the data on the input port. You populate the column with sample data from the input port. The model uses the sample data as reference data in parsing and labeling operations.
One or more label columns that identify the types of information in each input string.
You add the columns to the model, and you assign labels to the tokens in each string. Use the label columns to indicate the correct position of the tokens in the string.
When you use a probabilistic model in a Parser transformation, the Parser writes each input value to an output port based on the label that matches the value. For example, the Parser writes the string "Franklin Delano Roosevelt" to FIRSTNAME, MIDDLENAME, and LASTNAME output ports.
The Parser transformation can infer a match between the input port data values and the model data values even if the port data is not listed in the model. This means that a probabilistic model does not need to list every token in a data set to correctly label or parse the tokens in the data set.
The transformation uses probabilistic or fuzzy logic to identify tokens that match tokens in the probabilistic model. You update the fuzzy logic rules when you compile the probabilistic model.
Reference Table - This is a database table used for searching.
Here it seems that your data is unstructured and you want to extract meaningful data from it. The Informatica Data Transformation (DT) tool is good if your data follows some pattern. It is used with the UDT transformation inside Informatica PowerCenter. With DT you can create a parser to parse your data, and using a serializer you can write it to any form you want; later you can do aggregation and other transformations on that data using Informatica PowerCenter's ETL capabilities.
DT is well known for its capabilities in parsing PDFs, forms, and invoices. I hope it can solve the purpose.
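For comparison, the pairwise-matching side of the debate can be sketched with fuzzy string similarity outside either tool. A minimal sketch using Python's standard difflib, with made-up invoice references and my own (arbitrary) normalisation choice:

```python
from difflib import SequenceMatcher

def fuzzy_score(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] after stripping non-alphanumerics."""
    norm = lambda s: "".join(ch for ch in s.upper() if ch.isalnum())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

# Pairwise comparison of candidate invoice references (made-up data):
pairs = [("INV-2023-0042", "INV 2023/0042"), ("INV-2023-0042", "PO-9918")]
for a, b in pairs:
    print(a, "vs", b, "->", round(fuzzy_score(a, b), 2))
```

The same ratio could be computed in SQL with e.g. Levenshtein-style functions where the database provides them; the point is only that pairwise scoring itself is a small amount of logic, while Informatica's value is in the surrounding parsing and reference-data machinery described above.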

SPARQL - Restricting Result Resource to Certain Namespace(s)

Is there a standard way of restricting the results of a SPARQL query to belong to a specific namespace?
Short answer - no, there is no standard direct way to do this.
Long answer - however, you can do a limited form of this with the string functions and a FILTER clause. Which function you use depends on which version of SPARQL your engine supports.
SPARQL 1.1 Solution
Almost all implementations these days support SPARQL 1.1, and you can use the STRSTARTS() function like so:
FILTER(STRSTARTS(STR(?var), "http://example.org/ns#"))
This is my preferred approach and should be relatively efficient because it is simple string matching.
SPARQL 1.0 Solution
If you are stuck with an implementation that only supports SPARQL 1.0, you can still do this like so, but it uses regular expressions via the REGEX() function and so will likely be slower:
FILTER(REGEX(STR(?var), "^http://example\\.org/ns#"))
Regular Expressions and Meta-Characters
Note that for the regular expression we have to escape the meta-character . as otherwise it could match any character, e.g. http://exampleXorg/ns#foo would be considered a valid match.
As \ is the escape character for both regular expressions and SPARQL strings, it has to be double-escaped here in order for the regular expression to contain just \. and treat . as a literal character.
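The effect of the unescaped dot can be demonstrated with any regex engine; a short Python check (note that Python raw strings need only a single backslash, unlike SPARQL string literals):

```python
import re

literal_dot = r"^http://example\.org/ns#"  # dot escaped: matches only a literal .
any_char    = r"^http://example.org/ns#"   # dot unescaped: matches any character

print(bool(re.match(any_char, "http://exampleXorg/ns#foo")))     # True - false positive
print(bool(re.match(literal_dot, "http://exampleXorg/ns#foo")))  # False
print(bool(re.match(literal_dot, "http://example.org/ns#foo")))  # True
```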
Recommendation
If you can use SPARQL 1.1 then do so, because using the simpler string functions will be more performant and avoids the need to worry about escaping any meta-characters when using REGEX().