Magento possible SQL injection?

I've been seeing a few random search queries coming up in the popular search terms on a Magento site, and the site has also been up and down like a yo-yo recently.
Could anyone shed some light on some, or all, of these search terms:
1)))) and benchmark(100000000,HEX(999999)) --
1 and benchmark(100000000,HEX(999999)) #
x Content-Length: 0 HTTP/1.1 200 OK Content-Type: text/html Content-Length: 18 <html>saint</html>
saint<!--#echo var="HTTP_USER_AGENT"-->
1 waitfor delay '0:0:6' /*
x;id|
<script>alert('SAINTL2NhdGFsb2dzZWFyY2gvcmVzdWx0L2luZGV4LyBx')</script>
christmas<script>alert("XSS");</script>

These terms are used (among thousands of others) to test your site against various vulnerabilities. The presence of these markers doesn't mean your site is vulnerable; it means your site is being probed for those vulnerabilities.
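For illustration only (plain JDBC in Java rather than Magento's PHP, with a made-up table name): the benchmark()/waitfor terms are time-based probes, and they only slow the page down if the search term is concatenated into the SQL string. Binding the term as a parameter turns the whole thing into a literal value, so the probe just searches for that text.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SearchDao {
    // A term like "1 and benchmark(100000000,HEX(999999)) #" is a time-based
    // SQL injection probe: if it were spliced into the query text, benchmark()
    // would evaluate HEX() 100 million times and the slow response would reveal
    // the hole. With a bound parameter the term is treated as data, not SQL.
    public ResultSet search(Connection conn, String term) throws Exception {
        String sql = "SELECT product_id, name FROM catalog_product WHERE name LIKE ?"; // hypothetical table
        PreparedStatement ps = conn.prepareStatement(sql);
        ps.setString(1, "%" + term + "%");
        return ps.executeQuery();
    }
}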

Related

How can I configure CloudFront so it costs me a bit less?

I have a very static site, basically HTML and some Javascript on S3. I serve this through CloudFront. My usage has gone up a bit, plus one of my Javascript files is pretty large.
So what can I do to cut down the cost of serving those files? They need to have very good uptime, as the site has thousands of active users all over the world.
This is the usage for yesterday:
Looking at other questions about this, it seems like changing headers can help, but I thought I already had caching enabled. This is what curl returns if I get one of those files:
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200
< content-type: text/html
< content-length: 2246
< date: Fri, 03 Apr 2020 20:28:47 GMT
< last-modified: Fri, 03 Apr 2020 15:21:11 GMT
< x-amz-version-id: some string
< etag: "83df2032241b5be7b4c337f0857095fc"
< server: AmazonS3
< x-cache: Miss from cloudfront
< via: 1.1 somestring.cloudfront.net (CloudFront)
< x-amz-cf-pop: some string
< x-amz-cf-id: some string
This is how the cache is configured on CloudFront:
This is what S3 says when I use curl to query the file:
< HTTP/1.1 200 OK
< x-amz-id-2: some string
< x-amz-request-id: some string
< Date: Fri, 03 Apr 2020 20:27:22 GMT
< x-amz-replication-status: COMPLETED
< Last-Modified: Fri, 03 Apr 2020 15:21:11 GMT
< ETag: "83df2032241b5be7b4c337f0857095fc"
< x-amz-version-id: some string
< Accept-Ranges: bytes
< Content-Type: text/html
< Content-Length: 2246
< Server: AmazonS3
So what can I do? I don't often update the files and when I do I don't mind if it takes a day or two for the change to propagate.
Thanks.
If your goal is to reduce CloudFront costs, then it's worth reviewing how it is charged:
Regional Data Transfer Out to Internet (per GB): From $0.085 to $0.170 (depending upon location of your users)
Regional Data Transfer Out to Origin (per GB): From $0.020 to $0.160 (data going back to your application)
Request Pricing for All HTTP Methods (per 10,000): From $0.0075 to $0.0090
Compare that to Amazon S3:
GET Requests: $0.0004 per 1000
Data Transfer: $0.09 per GB (Also applies for traffic coming from Amazon EC2 instances)
Therefore, some options for you to save money are:
Choose a lower Price Class that restricts which regions send traffic "out". For example, Price Class 100 only sends traffic from USA and Europe, which has lower Data Transfer costs. This will reduce Data Transfer costs for other locations, but will give them a lower quality of service (higher latency).
Stop using CloudFront and serve content directly from S3 and EC2. This will save a bit on requests (about half the price), but Data Transfer would be a similar cost to Price Class 100.
Increase the caching duration for your objects. However, the report is showing 99.9%+ hit rates, so this won't help much.
Configure the objects to persist longer in users' browsers so fewer requests are made (see the Cache-Control sketch after this list). However, this only works for "repeat traffic" and might not help much. It depends on app usage. (I'm not familiar with this part. It might not work in conjunction with CloudFront. Hopefully other readers can comment.)
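A minimal sketch of those caching options, assuming the AWS SDK for Java v1 and made-up bucket/key names: a Cache-Control header stored on the S3 origin object is what browsers honour, and CloudFront can use it for its TTL depending on the cache behaviour settings. S3 replaces metadata wholesale on a self-copy, so anything you want to keep must be set again.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;

public class SetCacheControl {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        String bucket = "my-static-site-bucket"; // hypothetical bucket
        String key = "js/large-bundle.js";       // hypothetical object key

        // Metadata is replaced on the self-copy, so re-set the Content-Type too.
        ObjectMetadata meta = new ObjectMetadata();
        meta.setContentType("application/javascript");
        meta.setCacheControl("public, max-age=604800"); // let caches keep it for up to 7 days

        s3.copyObject(new CopyObjectRequest(bucket, key, bucket, key)
                .withNewObjectMetadata(meta));
    }
}

After re-copying, a fresh curl against the file should show a cache-control line in the response headers; copies CloudFront has already cached will only pick up the new header after an invalidation or once the existing TTL expires.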
Typically, most costs are related to the volume of traffic. If your app is popular, those Data Transfer costs will go up.
Take a look at your bills and try to determine which component is leading to most of the costs. Then, it's a trade-off between service to your customers and costs to you. Changing the Price Class might be the best option for now.

Pig script new record

I am working on the following mail data in a file (data source: Infochimps).
Message-ID: <33025919.1075857594206.JavaMail.evans@thyme>
Date: Wed, 13 Dec 2000 13:09:00 -0800 (PST)
From: john.arnold@enron.com
To: slafontaine@globalp.com
Subject: re:spreads
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-From: John Arnold
X-To: slafontaine@globalp.com @ ENRON
X-cc:
X-bcc:
X-Folder: \John_Arnold_Dec2000\Notes Folders\'sent mail
X-Origin: Arnold-J
X-FileName: Jarnold.nsf
saw a lot of the bulls sell summer against length in front to mitigate
margins/absolute position limits/var. as these guys are taking off the
front, they are also buying back summer. el paso large buyer of next winter
today taking off spreads. certainly a reason why the spreads were so strong
on the way up and such a piece now. really the only one left with any risk
premium built in is h/j now. it was trading equivalent of 180 on access,
down 40+ from this morning. certainly if we are entering a period of bearish
................]
I am loading the above data as:
A = load '/root/test/enron_mail/maildir/*/*/*' using PigStorage(':') as (f1:chararray,f2:chararray);
but for the message body I am getting separate tuples, as the message body includes new lines.
How do I consolidate the last lines into one?
I want the part below in a single tuple:
saw a lot of the bulls sell summer against length in front to mitigate
margins/absolute position limits/var. as these guys are taking off the
front, they are also buying back summer. el paso large buyer of next winter
today taking off spreads. certainly a reason why the spreads were so strong
on the way up and such a piece now. really the only one left with any risk
premium built in is h/j now. it was trading equivalent of 180 on access,
down 40+ from this morning. certainly if we are entering a period of bearish
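One possible workaround (not from the thread, just a sketch in Java, assuming the standard blank line between the headers and the body): since PigStorage is line-oriented, the body could be collapsed onto a single prefixed line before the files are loaded, so that the whole message body reaches Pig as one record. The "Body:" prefix is my own invention.

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class MailFlattener {
    // Keeps the "Header: value" lines as-is and joins everything after the
    // first blank line (the message body) into one "Body: ..." line.
    public static String flatten(Path mailFile) throws Exception {
        List<String> lines = Files.readAllLines(mailFile, StandardCharsets.ISO_8859_1);
        int blank = lines.indexOf("");
        int headerEnd = (blank < 0) ? lines.size() : blank;
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < headerEnd; i++) {
            out.append(lines.get(i)).append('\n');
        }
        if (blank >= 0) {
            out.append("Body: ");
            for (int i = blank + 1; i < lines.size(); i++) {
                out.append(lines.get(i).trim()).append(' '); // newlines removed, body stays one record
            }
            out.append('\n');
        }
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.print(flatten(Paths.get(args[0])));
    }
}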

GWT-RPC, Apache, Tomcat server data size checking

Following up on this GWT-RPC question (and its first answer) regarding field size checking, I would like to know the right way to check, pre-deserialization, the maximum size of data sent to the server -- something like "if request data size > X then abort the request". Valuing simplicity, and based on the answer to the aforementioned question, I am inclined to believe that checking the overall request size would suffice and that finer-grained checks (i.e., field-level checks) could be deferred to post-deserialization, but I am open to any best-practice suggestion.
Tech stack of interest: GWT-RPC client-server communication with an Apache/Tomcat front-end web server.
I suppose a first step would be to globally limit the size of any request (LimitRequestBody in httpd.conf, and/or others?).
Are there finer-grained checks, like something that can be set per RPC request? If so, where and how? How much security value do finer-grained checks bring over one global setting?
To frame the question more specifically with an example, let's suppose we have the two following RPC request signatures on the same servlet:
public void rpc1(A a, B b) throws MyException;
public void rpc2(C c, D d) throws MyException;
Suppose I approximately know the following max sizes:
a: 10 kB
b: 40 kB
c: 1 MB
d: 1 kB
Then I expect the following max sizes:
rpc1: 50 kB
rpc2: 1 MB
In the context of this example, my questions are:
Where/how to configure the max size of any request -- i.e., 1 MB in my above example? I believe it is LimitRequestBody in httpd.conf but not 100% sure whether it is the only parameter for this purpose.
If possible, where/how to configure max size per servlet -- i.e., max size of any rpc in my servlet is 1 MB?
If possible, where/how to configure/check max size per rpc request -- i.e., max rpc1 size is 50 kB and max rpc2 size is 1 MB?
If possible, where/how to configure/check max size per rpc request argument -- i.e., a is 10 kB, b is 40 kB, c is 1 MB, and d is 1 kB. I suspect it makes practical sense to do this post-deserialization, doesn't it?
For practical purposes based on cost/benefit, what level of pre-deserialization checking is generally recommended -- 1. global, 2. servlet, 3. rpc, 4. object-argument? Stated differently, what is, roughly, the cost/complexity on one hand and the added value on the other of each of the above pre-deserialization checks?
Thanks much in advance.
Based on what I have learned since I asked the question, my own answer and strategy until someone can show me better is:
First line of defense and check is Apache's LimitRequestBody set in httpd.conf. It is the overall max for all rpc calls across all servlets.
The second line of defense is a servlet-level pre-deserialization check, done by overriding GWT's AbstractRemoteServiceServlet.readContent. For instance, one could do it as shown further below, I suppose. This was the heart of what I was fishing for in this question.
Then one can further check each rpc call argument post-deserialization. One could conveniently use JSR 303 validation on both the server and the client side -- see the StackOverflow and GWT references regarding the client side; a small sketch follows the example below.
Example on how to override AbstractRemoteServiceServlet.readContent:
@Override
protected String readContent(HttpServletRequest request) throws ServletException, IOException
{
    final int contentLength = request.getContentLength();
    // _maxRequestSize should be large enough to be applicable to all rpc calls within this servlet.
    if (contentLength > _maxRequestSize)
        throw new IOException("Request too large");
    final String requestPayload = super.readContent(request);
    return requestPayload;
}
See this question in case the max request size is > 2 GB.
From a security perspective, this strategy seems quite reasonable to me for controlling the size of the data users send to the server.
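For the post-deserialization step, here is a minimal JSR 303 sketch (the UploadRequest type, its limit, and the validate helper are all hypothetical; GWT's validation support can evaluate the same annotations on the client before the call is made):

import java.io.Serializable;
import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.Size;

public class UploadRequest implements Serializable {
    // Caps the string length in characters -- a rough stand-in for the ~10 kB
    // budget of argument "a" in the question (an exact byte limit would need a
    // custom constraint).
    @Size(max = 10 * 1024)
    private String payload;

    public String getPayload() { return payload; }
    public void setPayload(String payload) { this.payload = payload; }

    // Server-side check inside the rpc implementation, after GWT has
    // deserialized the argument.
    public static void validate(UploadRequest request) {
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
        Set<ConstraintViolation<UploadRequest>> violations = validator.validate(request);
        if (!violations.isEmpty()) {
            throw new IllegalArgumentException("Request argument too large: " + violations);
        }
    }
}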

RavenDB Document Deleted Before Expiration

I am attempting to write a document to RavenDB with an expiration 20 minutes in the future. I am not using the .NET client, just curl. My request looks like this:
PUT /databases/FRUPublic/docs/test/123 HTTP/1.1
Host: ravendev
Connection: close
Accept-encoding: gzip, deflate
Content-Type: application/json
Raven-Entity-Name: tests
Raven-Expiration-Date: 2012-07-31T22:23:00
Content-Length: 14
{"data":"foo"}
In the studio I see my document saved with Raven-Expiration-Date set exactly 20 minutes from Last-Modified; however, within 5 minutes the document is deleted.
I see this same behavior (deleted in 5 minutes) if I increase the expiration date. If I set an expiration date in the past the document deletes immediately.
I am using build 960. Any ideas about what I'm doing wrong?
I specified the time down to ten-millionths of a second (seven fractional digits) and now documents are being deleted just as I would expect.
For example:
Raven-Expiration-Date: 2012-07-31T22:23:00.0000000
The date has to be in UTC, and it looks like you are sending local time.
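A small sketch of my own (plain Java of that era) for producing a header value that satisfies both points -- UTC and seven fractional digits -- for an expiry 20 minutes out:

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.TimeZone;

public class RavenExpiration {
    // Formats "now + 20 minutes" in UTC with seven fractional digits, e.g.
    // 2012-07-31T22:23:00.1230000 (milliseconds padded with four zeros).
    public static String in20Minutes() {
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        cal.add(Calendar.MINUTE, 20);
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'0000'");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.format(cal.getTime());
    }

    public static void main(String[] args) {
        System.out.println("Raven-Expiration-Date: " + in20Minutes());
    }
}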

Invalid Flickr API response

I've come across a very puzzling issue with the Flickr API.
There are certain queries I (and some developer friends) can run which result in broken result sets.
Basically, what you request isn't always returned...
Here's a few examples:
Request:
http://api.flickr.com/services/rest/?method=flickr.photos.search&safe_search=1&media=photos&extras=o_dims&per_page=30&page=1&format=json&nojsoncallback=1&api_key=XXXXXXX
Response:
HTTP/1.1 200 OK
Content-Length: 793
Date: Thu, 05 Jan 2012 23:30:56 GMT
P3P: policyref="http://p3p.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE GOV"
Access-Control-Allow-Origin: *
Cache-Control: private
X-Served-By: www71.flickr.mud.yahoo.com
Vary: Accept-Encoding
Connection: close
Content-Type: text/plain; charset=utf-8
{"photos":{"page":1, "pages":19886, "perpage":30, "total":"596560", "photo":[{"id":"6643915631", "owner":"74181952#N00", "secret":"8bc611c556", "server":"7023", "farm":8, "title":"IMG_5642", "ispublic":1, "isfriend":0, "isfamily":0}, {"id":"6643911681", "owner":"7240073#N04", "secret":"34837024f0", "server":"7004", "farm":8, "title":"26 weeks!!", "ispublic":1, "isfriend":0, "isfamily":0, "o_width":"768", "o_height":"1024"}, {"id":"6643919177", "owner":"54899865#N02", "secret":"170d3a336f", "server":"7153", "farm":8, "title":"IMGA0072", "ispublic":1, "isfriend":0, "isfamily":0}, {"id":"6643916265", "owner":"51191328#N06", "secret":"05905197ce", "server":"7034", "farm":8, "title":"IMG_1781", "ispublic":1, "isfriend":0, "isfamily":0, "o_width":"2736", "o_height":"3648"}]}, "stat":"ok"}
Notice that only 4 images are returned when we asked for 30 (and there are 596,560 matching pics)?
If I change the per_page count to something different it may work: right now, if I change it to 3, it'll return 3, but yesterday when I was testing it only returned 2, and when I changed it to 10 it returned none!
We've come across another example, this time with image size data:
Request
http://api.flickr.com/services/rest/?method=flickr.interestingness.getList&extras=o_dims&per_page=3&page=1&format=rest&api_key=XXXXXXXXXX
Response
<?xml version="1.0" encoding="utf-8" ?>
<rsp stat="ok">
<photos page="1" pages="167" perpage="3" total="500">
<photo id="6743082503" owner="29789996#N00" secret="7d6a1ab340" server="7165" farm="8" title="Glittering Marina [2]" ispublic="1" isfriend="0" isfamily="0" />
<photo id="6741988715" owner="44789014#N04" secret="ab1528fa9f" server="7009" farm="8" title="Heavy metal warrior" ispublic="1" isfriend="0" isfamily="0" o_width="1200" o_height="1202" />
<photo id="6741320397" owner="54880604#N06" secret="7b3bd8530f" server="7030" farm="8" title="Greetings from below, Village near Can Tho" ispublic="1" isfriend="0" isfamily="0" />
</photos>
</rsp>
Note only one of the images has image size data.
It's a very difficult issue to reproduce as it only happens every now and then, but once you've found a page/pagecount combo that causes an issue, you'll consistently get the incorrect response (I assume it's due to some form of caching).
Has anyone else come across this?
As you can see in my resultset above, there's no error, no warning, just an incorrect response.
Thanks in advance.
Aaron
Huh. I've filed myself a bug; let me look into it. Possibly a pagination bug on our end, or a caching thing as suggested.