Is there a maximum number of transformations, or maximum URL length, when using Cloudinary's URL API?

I'm trying to build my Open Graph images with Cloudinary, using a few transformations to add text.
This is what I tried to do:
https://res.cloudinary.com/nho/image/fetch/e_blur:2000,c_crop,ar_1200:630,b_white/e_grayscale/w_1200/l_fetch:aHR0cHM6Ly9uaWNvbGFzLWhvaXpleS5jb20vMjAxNy8xMC9jaHJvbWUtZW1vamktMTI4cHgtbWF4LnBuZw==,h_1.0,w_1.0,fl_relative,c_limit/b_rgb:3d2e68,o_40/w_1120,c_fit,l_text:georgia_80:Chrome%20fails%20showing%20big%20emojis,g_north_west,x_42,y_43,co_black,o_50/w_1120,c_fit,l_text:georgia_80:Chrome%20fails%20showing%20big%20emojis,g_north_west,x_41,y_42,co_black,o_75/w_1120,c_fit,l_text:georgia_80:Chrome%20fails%20showing%20big%20emojis,g_north_west,x_40,y_40,co_white/l_fetch:aHR0cHM6Ly9uaWNvbGFzLWhvaXpleS5jb20vYXNzZXRzL3Bob3RvLWRlLW5pY29sYXMtaG9pemV5LmpwZw==,g_south_west,x_40,y_40,c_fill,w_60,r_max/l_text:georgia_50:nicolas-hoizey.com,g_south_west,x_112,y_40,co_black/l_text:georgia_50:nicolas-hoizey.com,g_south_west,x_111,y_41,co_white/l_fetch:aHR0cHM6Ly9uaWNvbGFzLWhvaXpleS5jb20vYXNzZXRzL2xvZ29zL3R3aXR0ZXIucG5n,g_south_east,x_220,y_40,c_fill,w_50,r_max/l_text:georgia_50:nhoizey,g_south_east,x_40,y_40,co_black/l_text:georgia_50:nhoizey,g_south_east,x_41,y_41,co_white/https://nicolas-hoizey.com/2017/10/chrome-emoji-128px-max.png
I know the syntax is correct, but I get a 400 Bad Request error.
If I remove the last transformation (/l_text:georgia_50:nhoizey,g_south_east,x_41,y_41,co_white), which adds text, it works:
https://res.cloudinary.com/nho/image/fetch/e_blur:2000,c_crop,ar_1200:630,b_white/e_grayscale/w_1200/l_fetch:aHR0cHM6Ly9uaWNvbGFzLWhvaXpleS5jb20vMjAxNy8xMC9jaHJvbWUtZW1vamktMTI4cHgtbWF4LnBuZw==,h_1.0,w_1.0,fl_relative,c_limit/b_rgb:3d2e68,o_40/w_1120,c_fit,l_text:georgia_80:Chrome%20fails%20showing%20big%20emojis,g_north_west,x_42,y_43,co_black,o_50/w_1120,c_fit,l_text:georgia_80:Chrome%20fails%20showing%20big%20emojis,g_north_west,x_41,y_42,co_black,o_75/w_1120,c_fit,l_text:georgia_80:Chrome%20fails%20showing%20big%20emojis,g_north_west,x_40,y_40,co_white/l_fetch:aHR0cHM6Ly9uaWNvbGFzLWhvaXpleS5jb20vYXNzZXRzL3Bob3RvLWRlLW5pY29sYXMtaG9pemV5LmpwZw==,g_south_west,x_40,y_40,c_fill,w_60,r_max/l_text:georgia_50:nicolas-hoizey.com,g_south_west,x_112,y_40,co_black/l_text:georgia_50:nicolas-hoizey.com,g_south_west,x_111,y_41,co_white/l_fetch:aHR0cHM6Ly9uaWNvbGFzLWhvaXpleS5jb20vYXNzZXRzL2xvZ29zL3R3aXR0ZXIucG5n,g_south_east,x_220,y_40,c_fill,w_50,r_max/l_text:georgia_50:nhoizey,g_south_east,x_40,y_40,co_black/https://nicolas-hoizey.com/2017/10/chrome-emoji-128px-max.png
Does this mean I stacked too many transformations in a single URL?
UPDATE:
I later tried reducing the length of the first text overlay (added three times to create a shadow) from Chrome%20fails%20showing%20big%20emojis to Chrome%20fails, and that worked too:
https://res.cloudinary.com/nho/image/fetch/e_blur:2000,c_crop,ar_1200:630,b_white/e_grayscale/w_1200/l_fetch:aHR0cHM6Ly9uaWNvbGFzLWhvaXpleS5jb20vMjAxNy8xMC9jaHJvbWUtZW1vamktMTI4cHgtbWF4LnBuZw==,h_1.0,w_1.0,fl_relative,c_limit/b_rgb:3d2e68,o_40/w_1120,c_fit,l_text:georgia_80:Chrome%20fails,g_north_west,x_42,y_43,co_black,o_50/w_1120,c_fit,l_text:georgia_80:Chrome%20fails,g_north_west,x_41,y_42,co_black,o_75/w_1120,c_fit,l_text:georgia_80:Chrome%20fails,g_north_west,x_40,y_40,co_white/l_fetch:aHR0cHM6Ly9uaWNvbGFzLWhvaXpleS5jb20vYXNzZXRzL3Bob3RvLWRlLW5pY29sYXMtaG9pemV5LmpwZw==,g_south_west,x_40,y_40,c_fill,w_60,r_max/l_text:georgia_50:nicolas-hoizey.com,g_south_west,x_112,y_40,co_black/l_text:georgia_50:nicolas-hoizey.com,g_south_west,x_111,y_41,co_white/l_fetch:aHR0cHM6Ly9uaWNvbGFzLWhvaXpleS5jb20vYXNzZXRzL2xvZ29zL3R3aXR0ZXIucG5n,g_south_east,x_220,y_40,c_fill,w_50,r_max/l_text:georgia_50:nhoizey,g_south_east,x_40,y_40,co_black/l_text:georgia_50:nhoizey,g_south_east,x_41,y_41,co_white/https://nicolas-hoizey.com/2017/10/chrome-emoji-128px-max.png
So I suspect the URL length might be the issue.

The issue is not the number of transformations per URL; it's that the transformation string itself is too long. Consider using named transformations to shorten it.
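
For illustration, a minimal sketch of the named-transformation approach, assuming the Cloudinary Python SDK and its Admin API method create_transformation; the name og_card and the shortened chain are hypothetical stand-ins for your full transformation:

import cloudinary
import cloudinary.api

cloudinary.config(cloud_name="nho", api_key="...", api_secret="...")

# Register the long transformation chain once under a short name
# (chain truncated here for illustration).
cloudinary.api.create_transformation(
    "og_card",
    "e_blur:2000,c_crop,ar_1200:630,b_white/e_grayscale/w_1200"
)

# Delivery URLs can then reference the short name instead of the full chain:
# https://res.cloudinary.com/nho/image/fetch/t_og_card/https://nicolas-hoizey.com/2017/10/chrome-emoji-128px-max.png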

JMeter - get value from href

I am load testing an application that has a link that looks like this:
https://example.com/myapp/table?qid=1434e99d-5b7c-4e74-b64e-c24e9564514d&rsid=5c94ddc7-e2e4-4e69-8547-49572486f4d1
I need to get the dynamic value of the rsid so I can use it later in my script.
So far I have tried using the regex extractor and I am probably doing it wrong.
I have tried things like:
name = myvar
regular expression = rsid=(.*?) # didn't work
regular expression = <a href=".*?rsid=(.*?)"> # didn't work
Template = $1$
I have one extractor set up to get the csrf value, and that one works as expected, but that is also because the csrf value is in the page source.
The above link is NOT in the page source as far as I can see, but it DOES show up when I inspect the link. I don't know if that is obfuscation or something else.
How can I extract the value of the rsid? Is the regular expression extractor the right one to use for this?
Should I be using something else?
Is it just a formula issue?
Thanks in advance.
Try something like:
rsid=([0-9A-Fa-f\-]{36})
The above regular expression matches a GUID-like structure, and your rsid value appears to be one; the parentheses add the capturing group that the $1$ template refers to.
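Outside JMeter, you can sanity-check the pattern with a quick Python snippet (the URL is the example from the question):

import re

url = ('https://example.com/myapp/table'
       '?qid=1434e99d-5b7c-4e74-b64e-c24e9564514d'
       '&rsid=5c94ddc7-e2e4-4e69-8547-49572486f4d1')

# The capturing group isolates the 36-character GUID after "rsid=".
match = re.search(r'rsid=([0-9A-Fa-f\-]{36})', url)
print(match.group(1))  # 5c94ddc7-e2e4-4e69-8547-49572486f4d1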
Also be aware of the Boundary Extractor: it's sufficient to specify the "left" and "right" boundaries, and it will extract everything in between. In general, coming up with boundaries is much easier than creating a regular expression, it's more readable, and JMeter processes Boundary Extractors much faster. More information: The Boundary Extractor vs. the Regular Expression Extractor in JMeter
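The boundary idea is easy to picture in plain Python (a hypothetical equivalent of the Boundary Extractor, reusing the url variable from the snippet above, not actual JMeter code):

# Everything between the left boundary "rsid=" and the right boundary "&",
# or to the end of the string if "&" never appears after the left boundary.
left, right = 'rsid=', '&'
start = url.index(left) + len(left)
end = url.find(right, start)
rsid = url[start:] if end == -1 else url[start:end]
print(rsid)  # 5c94ddc7-e2e4-4e69-8547-49572486f4d1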

How to filter by tag in Jaeger

When trying to filter by tag, a small popup suggests using logfmt syntax.
I have been looking logfmt up, but all I can find is the key=value format.
My questions are:
Is there a way to do something more sophisticated (starts_with, not equal, contains, etc.)?
I am trying to filter by url using http.url="http://example.com?bla=bla&foo=bar". I am pretty sure the value exists because I am copy/pasting from my trace. I am getting no results. Do I need to escape characters or do something else for this to work?
I did some research on logfmt as well. Based on the documentation of the original implementation and on the Python implementation of the parser (and its tests), I would say it doesn't support anything more sophisticated (like starts_with, not equal, contains), because the output of the parser is a simple dictionary (with no regex involved in the values).
As for the second question, using the same mentioned Python parser, I was able to double-check that your filter looks fine:
from logfmt import parse_line
parse_line('http.url="http://example.com?bla=bla&foo=bar"')
Output:
{'http.url': 'http://example.com?bla=bla&foo=bar'}
This makes me suspect an issue on the Jaeger side, but this is as far as I could go.

CreateML data analysis stopped

When I attempt to train a CreateML model, I get the following screen after inputting my training data:
(Screenshot: Create ML error message)
I am then unable to add my test data or train the model. Any ideas on what is going on here?
[EDIT] As mentioned in my comment below, this issue went away when I removed some of my training data. Any newcomers who are running into this issue are encouraged to try some of the solutions below and comment on whether it worked for them. I'm happy to accept an answer if it seems like it's working for people.
This happens when the first picture in the dataset has no label. If you place a labeled photo first in the dataset and in the Create ML JSON, you shouldn't get this issue.
Correct:
[{"annotations":[{"label":"Enemy","coordinates":{"y":156,"x":302,"width":26,"height":55}}],"imagefilename":"Enemy1.png"},{"annotations":[{"label":"Enemy","coordinates":{"y":213,"x":300,"width":69,"height":171}}],"imagefilename":"Enemy7.png"},{"annotations":
Incorrect:
[{"annotations":[],"imagefilename":"Enemy_v40.png"},{"annotations":[],"imagefilename":"Enemy_v41.png"},{"annotations":[],"imagefilename":"Enemy_v42.png"},{"annotations":
At a minimum you should check for these two situations, which triggered the same generic error for me (data analysis stopped), in the context of an object detection model:
One or more of the image names referenced in annotations.json is incorrect (e.g. typo in image name)
The first entry in annotations.json has an empty annotations array (i.e. an image that does not contain any of the objects to be detected)
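As a quick check for both situations, here is a minimal sketch, assuming the annotations file is named annotations.json, that it sits in the same folder as the images, and that it uses the imagefilename key shown in the JSON snippets above:

import json
import os

DATASET_DIR = 'dataset'  # hypothetical folder layout
with open(os.path.join(DATASET_DIR, 'annotations.json')) as f:
    entries = json.load(f)

# Situation 1: every referenced image file must actually exist.
for entry in entries:
    if not os.path.isfile(os.path.join(DATASET_DIR, entry['imagefilename'])):
        print('Missing image:', entry['imagefilename'])

# Situation 2: the first entry must not have an empty annotations array.
if not entries[0]['annotations']:
    print('First entry has no annotations; move a labeled image to the front.')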
If you are using a random split or something similar, make sure it is parsing the data correctly. You can test this easily by debugging.
I suggest you check whether your training data is consistent and all entries have all the needed values. The error is likely in the section of data you removed.
That would cause the error Nate mentioned seeing in his comment when that popup appears.
Getting the log would be the next step in any further evaluation.

Error 414 -The requested URL is too large to process

I am using the Google Chart API in my application and generating graphs using the URL http://chart.apis.google.com.
I am getting the error "The requested URL is too large to process" when I provide a large set of parameters to this URL.
What can I do in this situation?
The Google Charts API FAQ offers this advice:
Is there a limit on the URL length for the Google Chart API? What is the maximum URL length?
The maximum length of a URL is not determined by the Google Chart API, but rather by web browser and web server considerations. The longest URL that Google accepts in a chart GET request is 2048 characters in length, after URL-encoding (e.g., | becomes %7C). For POST, this limit is 16K.
If URL length is a problem, here are a few suggestions for shortening your URL:
If you are using a text encoding data format, remove leading zeros from numbers, remove trailing zeros after decimal points, and round or truncate the numbers after decimal points.
If that does not shorten the URL enough, use simple (1 character) or extended (2 character) encoding.
Sample data less frequently; i.e., reduce granularity.
Remove accoutrements and decorations, such as colors, labels, and styles, from your chart.
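For illustration, here is a minimal sketch of the simple (1-character) encoding mentioned in the list above, where 62 symbols (A-Z, a-z, 0-9) stand for data values scaled to 0-61; the function name and max-based scaling are my choices, not part of the FAQ:

# Alphabet of the legacy Google Image Charts simple encoding (chd=s:...).
SIMPLE = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789'

def simple_encode(values):
    # Scale each value to 0..61 against the series maximum, then map it
    # to a single character, one character per data point.
    top = max(values) or 1
    return 's:' + ''.join(SIMPLE[round(v * 61 / top)] for v in values)

print(simple_encode([0, 10, 50, 100]))  # s:AGe9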
I also found this thread, but there doesn't seem to be an answer/solution:
http://groups.google.com/group/google-chart-api/browse_thread/thread/b47c1588b39d98ce
In case it is a browser error: browsers have their own maximum URL length limitations (IE 6/7 has a 2,083-character limit):
What is the maximum length of a URL in different browsers?
I'm getting HTTP 414 but my URL length is not an issue (it is 1881 characters), and I have tried both GET and POST. My guess is that Google will also return this error when the chart you are requesting is too "expensive" to generate.
A method that worked well for me was to divide all values by 10 or 20 and convert the results to integers (no decimals), while keeping the original numbers on the axis. It's a little less accurate, but it reduces the number of characters used in the URL.
Code example:
$newSalesrank = $rank / 20;          // scale the value down
$rankdata .= intval($newSalesrank);  // truncate to an integer before appending it to the chart data
This solved my problem: I got no more "URL too long" errors, and the charts still look good, because they look the same; the numbers were simply scaled down.

TSearch2 - dots explosion

Following conversion
SELECT to_tsvector('english', 'Google.com');
returns this:
'google.com':1
Why doesn't the TSearch2 engine return something like this?
'google':2, 'com':1
Or how can I make the engine return the exploded string as written above?
I just need "Google.com" to be findable by "google".
Unfortunately, there is no quick and easy solution.
Denis is correct in that the parser is recognizing it as a hostname, which is why it doesn't break it up.
There are 3 other things you can do, off the top of my head.
You can disable the host parsing in the database. See the Postgres documentation for details, e.g. something like:
ALTER TEXT SEARCH CONFIGURATION your_parser_config
  DROP MAPPING FOR url, url_path;
You can write your own custom dictionary.
You can pre-parse your data before it's inserted into the database in some manner (maybe splitting all domains before going into the database; see the sketch at the end of this answer).
I had a similar issue to you last year and opted for solution (2), above.
My solution was to write a custom dictionary that splits words up on non-word characters. A custom dictionary is a lot easier and quicker to write than a new parser. You still have to write C, though :)
The dictionary I wrote would return something like 'www.facebook.com':4, 'com':3, 'facebook':2, 'www':1 for the 'www.facebook.com' domain (we had a unique-ish scenario, hence the 4 results instead of 3).
The trouble with a custom dictionary is that you will no longer get stemming (i.e., www.books.com will come out as www, books, and com). I believe there is some work (which may have been completed) to allow chaining of dictionaries, which would solve this problem.
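For option (3), pre-parsing the text before insert, a minimal Python sketch (the helper name and regex are mine, not from the original answer):

import re

def explode_hostnames(text):
    # Keep the original token and append its dot-separated parts, so that
    # to_tsvector() ends up indexing 'google.com', 'google' and 'com'.
    def expand(match):
        host = match.group(0)
        return host + ' ' + ' '.join(host.split('.'))
    return re.sub(r'\b[\w-]+(?:\.[\w-]+)+\b', expand, text)

print(explode_hostnames('Google.com'))  # Google.com Google com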
First off, in case you're not aware, tsearch2 is deprecated in favor of the built-in full-text search functionality:
http://www.postgresql.org/docs/9.0/static/textsearch.html
As for your actual question, google.com gets recognized as a host by the parser:
http://www.postgresql.org/docs/9.0/static/textsearch-parsers.html
If you don't want this to occur, you'll need to pre-process your text accordingly (or use a custom parser).