In my RavenDB 2.5.2851 database I want to store attachments of up to 2 GB.
var store = new DocumentStore { Url = "http://127.0.0.1:8080" }.Initialize();
using (var reader = new StreamReader("test.attachment"))
{
var id = "attachments/1";
store.DatabaseCommands.PutAttachment(id, null, reader.BaseStream, null);
}
The PutAttachment call starts, but after about 1.5 minutes an exception is thrown: "The request was aborted: The request was cancelled".
I think this exception is thrown because of an attachment size limit.
Is there a limit on attachment size? Can I configure it?
There isn't a limit on attachment size, although big attachments aren't encouraged.
The issue is that you are probably hitting request timeout / request size limits. You need to configure IIS accordingly.
Note that we never really meant attachments to hold very large values.
I am using jPOS 2.1.0 with an external packager XML file for an ISO 8583 client. Due to the large number of requests over two or three days, I encountered a "Too many open files" error, and I have already set ulimit -n to 50000. I suspect that the packager files are not being closed properly, which is why the limit is being exceeded. Please help me close the open files properly.
JposLogger logger = new JposLogger(isoLogLocation);
org.jpos.iso.ISOPackager customPackager = new GenericPackager(isoPackagerLocation+iso8583Properties.getPackager());
BaseChannel channel = new ASCIIChannel(iso8583Properties.getServerIp(), Integer.parseInt(iso8583Properties.getServerPort()), customPackager);
logger.jposlogconfig(channel);
try {
channel.setTimeout(45000);
channel.connect();
}catch(Exception ex) {
log4j.error(ex.getMessage());
throw new ConnectIpsException("Unable to establish connection with bank.");
}
log4j.info("Connection established using ASCIIChannel");
ISOMsg m = new ISOMsg();
m.set(0, "1200");
........
m.set(126, "connectIPS");
m.setPackager(customPackager);
log4j.info(ISOUtil.hexdump(m.pack()));
channel.send(m);
log4j.info("Message has been send");
ISOMsg r = channel.receive();
r.setPackager(customPackager);
log4j.info(ISOUtil.hexdump(r.pack()));
String actionCode = (String) r.getValue("39");
channel.disconnect();
return bancsxfr;
}
You know when you open a file, a socket, or a channel, you need to close it, right?
I don't see a finally in your try that would close the channel.
You have a huge leak there.
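As a rough sketch (reusing the identifiers from your own snippet, so m, r, bancsxfr, customPackager, log4j and ConnectIpsException are yours; adjust the error handling to taste), the connect block you already have can stay as it is, but everything after a successful connect needs a finally that releases the channel:
// ... your existing connect try/catch, unchanged ...
try {
    log4j.info(ISOUtil.hexdump(m.pack()));
    channel.send(m);
    ISOMsg r = channel.receive();
    r.setPackager(customPackager);
    String actionCode = (String) r.getValue("39");
    return bancsxfr;
} finally {
    try {
        channel.disconnect();   // runs on every path, even when send/receive throws
    } catch (IOException ignore) {
        // nothing sensible to do if closing the socket itself fails
    }
}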
I got an exception I never got before when testing my application, which uploads a file from EC2 to S3. The content is:
Exception in thread "Thread-1" com.amazonaws.services.s3.model.AmazonS3Exception: The Content-MD5 you specified did not match what we received. (Service: Amazon S3; Status Code: 400; Error Code: BadDigest; Request ID: 972CB8E04388AB20), S3 Extended Request ID: T7bmFnQ2RlGWlJD+aGYfTy97XZw88pbQrwNB8YCezSjyq6O2joxHRP/6ko+Q2zZeGewkw4x/90k=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1383)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:902)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:607)
at com.amazonaws.http.AmazonHttpClient.doExecute(AmazonHttpClient.java:376)
at com.amazonaws.http.AmazonHttpClient.executeWithTimer(AmazonHttpClient.java:338)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:287)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3676)
at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1439)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInOneChunk(UploadCallable.java:131)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:123)
at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:139)
at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:47)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
What can I do to fix this bug? I used the same code as before in my application.
I think I have solved my problem. I finally found that some of my files were actually changing during the upload. Because each file is generated by another thread, generation and upload were happening at the same time: the file cannot be generated instantly, so while it was still being written it was already being uploaded, and its content changed mid-upload.
The AmazonS3Client computes the MD5 of the file at the beginning of the upload and then uploads the whole file; by then the file is different from the one the MD5 was computed for, so the digest no longer matches. I changed my program to be single-threaded, so a file is only uploaded after it has been fully generated, and the problem never turned up again.
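If you ever want to keep it concurrent, the essential point is only that the upload must not start until the writer has finished, e.g. by joining the generating thread first. A minimal sketch under that assumption (writeReport, the path, bucket and key names are made up; AmazonS3Client and PutObjectRequest are the v1 Java SDK classes shown in the stack trace above):
static void generateThenUpload() throws InterruptedException {
    File file = new File("/tmp/report.csv");                  // placeholder path

    Thread generator = new Thread(() -> writeReport(file));   // writeReport(): whatever produces the file
    generator.start();
    generator.join();                                         // block until the file is fully written

    AmazonS3 s3 = new AmazonS3Client();                        // credentials from the default provider chain
    s3.putObject(new PutObjectRequest("my-bucket", "my-key", file));  // MD5 is computed on the final content
}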
Another way to run into this issue is to run code such as this (Python):
with open(filename, 'r') as fd:
    self._bucket1.put_object(Key=key, Body=fd)
    self._bucket2.put_object(Key=key, Body=fd)
In this case the file object (fd) is pointing to the end of the file by the time the second put_object call runs, so we get the "Content MD5" error. To avoid it, we need to seek the file reader back to the start of the file:
with open(filename, 'r') as fd:
    bucket1.put_object(Key=key, Body=fd)
    fd.seek(0)
    bucket2.put_object(Key=key, Body=fd)
This way we won't get the aforementioned Boto error.
I also ran into this error when I was doing something like this:
InputStream productInputStream = convertImageFileToInputStream(file);
InputStream thumbnailInputStream = generateThumbnail(productInputStream);
String uploadedFileUrl = amazonS3Uploader.uploadToS3(BUCKET_PRODUCTS_IMAGES, productFilename, productInputStream);
String uploadedThumbnailUrl = amazonS3Uploader.uploadToS3(BUCKET_PRODUCTS_IMAGES, productThumbnailFilename, thumbnailInputStream);
The generateThumbnail method was manipulating the productInputStream using a third party library. Because I couldn't modify the third party library, I simply performed the upload first:
InputStream productInputStream = convertImageFileToInputStream(file);
// do this first...
String uploadedFileUrl = amazonS3Uploader.uploadToS3(BUCKET_PRODUCTS_IMAGES, productFilename, productInputStream);
/// and then this...
InputStream thumbnailInputStream = generateThumbnail(productInputStream);
String uploadedThumbnailUrl = amazonS3Uploader.uploadToS3(BUCKET_PRODUCTS_IMAGES, productThumbnailFilename, thumbnailInputStream);
... and added this line inside my generateThumbnail method:
productInputStream.reset();
FWIW, I've managed to find a completely different way of triggering this problem, which requires a different solution.
It turns out that if you decide to assign ObjectMetadata to a PutObjectRequest explicitly, for example to specify a cacheControl setting, or a contentType, then the AWS SDK mutates the ObjectMetadata instance to stash the MD5 that it computes for the put request. This means that if you are putting multiple objects, all of which you think should have the same metadata assigned to them, you still need to create a new ObjectMetadata instance for each and every PutObjectRequest. If you don't do this, then it reuses the MD5 computed from the previous put request and you get the MD5 mismatch error on the second object you try to put.
So, to be explicit, doing something like this will fail on the second iteration:
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentType("text/html");
for(Put obj: thingsToPut)
{
PutObjectRequest por =
new PutObjectRequest(bucketName, obj.s3Key, obj.file);
por = por.withMetadata(metadata);
PutObjectResult res = s3.putObject(por);
}
You need to do it like this:
for(Put obj: thingsToPut)
{
ObjectMetadata metadata = new ObjectMetadata(); // <<-- New ObjectMetadata every time!
metadata.setContentType("text/html");
PutObjectRequest por =
new PutObjectRequest(bucketName, obj.s3Key, obj.file);
por = por.withMetadata(metadata);
PutObjectResult res = s3.putObject(por);
}
I too ran into this problem. How I solved this:
I have a microservice that processes AWS SQS Messages. Each message would create multiple temporary files that would have to be uploaded to S3.
The issue was that the temporary files were named with fixed names without any salt added to them.
So between two messages, it was possible to overwrite the original file that was about to be uploaded.
I fixed it by adding a random salt (this can be a UUID or the current time in millis, depending on what you want) to the file names, after which the files were no longer being overwritten and were successfully uploaded to S3.
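For reference, a minimal sketch of that naming scheme (the directory and prefix are made up), plus the JDK helper that generates the unique part of the name for you:
// hand-rolled: salt the fixed name with a UUID so two messages can never collide
File tempFile = new File("/tmp/work", "payload-" + UUID.randomUUID() + ".json");

// or let java.io.File pick the unique suffix
File viaJdk = File.createTempFile("payload-", ".json");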
For me, the cause was that I passed ContentLength in the params when executing the upload. When it is commented out, it works just fine.
const params = {
Bucket: "",
ContentType: "application/json",
Key: "filename.json",
// ContentLength: body.length, <--- what I have commented out
Body: body
};
await s3.upload(params).promise();
I am trying to add an event from my local database to the Davical server (in fact, this should apply to any CalDav server, as long as it is compliant with the CalDav protocol)...
From what I could read here, I can send a PUT request to add events contained in a VCALENDAR collection... So here is what I try to do:
try {
// Create the HttpWebRequest object
HttpWebRequest Request = (HttpWebRequest)HttpWebRequest.Create("http://my_caldav_srv/davical.php/user/mycalendar");
// Add the network credentials to the request
Request.Credentials = new NetworkCredential(usr, pwd);
// Specify the method
Request.Method = "PUT";
// some headers - I MAY BE MISSING THINGS HERE???
Request.Headers.Add("Overwrite", "T");
// set the body of the request...
byte[] bodyBytes = Encoding.UTF8.GetBytes(body);
Request.ContentLength = bodyBytes.Length;
// Set the content type header.
Request.ContentType = contentType.Trim();
// Write the body to the request stream as UTF-8 text.
Stream reqStream = Request.GetRequestStream();
reqStream.Write(bodyBytes, 0, bodyBytes.Length);
reqStream.Close();
// Send the request and get the response from the server.
Response = (HttpWebResponse)Request.GetResponse();
}
catch (Exception e) {
throw new Exception("Caught error: " + e.Message, e);
}
The body I send is actually an empty calendar:
BEGIN:VCALENDAR
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
PRODID:-//davical.org//NONSGML AWL Calendar//EN
X-WR-CALNAME:My Calendar
END:VCALENDAR
For a reason I cannot understand, the PUT call returns a (405) Method Not Allowed error. PUSH returns a (500) Internal Server Error, but looking at the debug details, the reason is the same as in the PUT case...
While debugging on the server side, I found out that the reason is that in caldav-PUT-vcalendar.php the following condition is violated:
$c->readonly_webdav_collections
Well, first, let me mention that with the SAME credentials entered in Lightning I am able to add/remove events, and on the admin interface I actually made sure to grant ALL rights to the user. So I'd be surprised if it were due to that...
Any help would be most appreciated !
Kind regards,
Nik
OK, I got it....
The reason is that one must PUT the event to an EVENT address....
I.e. the "url" is not the collection's address, but the EVENT's address...
So the same code using the following address works:
string url="http://my_server/caldav.php/username/calendarpath/_my_event_id.ics";
Does anybody know if it is possible to insert/delete multiple events at once?
I'm using a C# console app to send personalized bulk email, but I've hit a bottleneck: with sequential code I am only able to send one email per second. I've tried to create a multi-threaded app, but I am still only able to send two emails per second.
How can I do it better?
This is a fragment of the code:
public static void MainProgram(List<EmailEnt> emails, string cuerpo_email_en, string cuerpo_email_es)
{
//emails list is populated with 50.000 emails
DateTime timeControllerForSendingEmails = DateTime.Now;
while (emails.Count > 0)
{
if ((DateTime.Now - timeControllerForSendingEmails).TotalSeconds >= 1)
{
timeControllerForSendingEmails = DateTime.Now;
//this method gets a list of 60 emails and remove them from the main list
List<EmailEnt> queuedEmails = GetEmailsQueue(emails, 60);
Send(queuedEmails);
}
}
}
public static void Send(List<EmailEnt> queuedEmails)
{
IList<Task> tasks = new List<Task>();
List<string> logLines = new List<string>();
foreach (EmailEnt emailEnt in queuedEmails)
{
string subject = "Hello {name}";
string body = "im the body;
tasks.Add(Task.Factory.StartNew(() =>
{
SendEmail(emailEnt, subject, body);
}));
}
Task.WaitAll(tasks.ToArray());
}
Any chance you are still running in 'sandbox mode'? According to AWS:
When you are in the sandbox, your sending quota is 200 messages per 24-hour period and your maximum sending rate is one message per second. To increase your sending limits, you need to request production access. For more information, see Requesting Production Access to Amazon SES. After you request production access and start sending emails, you can increase your sending limits further by following the guidance in the Increasing Your Amazon SES Sending Limits section.
If not, I use code similar to this to send 15+ emails a second (not on SES), and it works fine:
Parallel.ForEach(mailQueue, new ParallelOptions() {MaxDegreeOfParallelism = 7}, itm=>SendEmail(itm));
which perhaps is functionally equivalent to what you are doing already, but I can say for sure it does provide much greater throughput and may be worth a try.
In my WCF service I try to load a file from an MS SQL table that has a FILESTREAM column, and I try to pass it back as a stream:
responseMsg.DocSqlFileStream = new MemoryStream();
try
{
using (FileStreamDBEntities dbEntity = new FileStreamDBEntities())
{
...
using (TransactionScope x = new TransactionScope())
{
string sqlCmdStr = "SELECT dcraDocFile.PathName() AS InternalPath, GET_FILESTREAM_TRANSACTION_CONTEXT() AS TransactionContext FROM dcraDocument WHERE dcraDocFileID={0}";
var docFileStreamInfo = dbEntity.Database.SqlQuery<DocFileStreamPath>(sqlCmdStr, new object[] { docEntity.dcraDocFileID.ToString() }).First();
SqlFileStream sqlFS = new SqlFileStream(docFileStreamInfo.InternalPath, docFileStreamInfo.TransactionContext, FileAccess.Read);
sqlFS.CopyTo(responseMsg.DocSqlFileStream);
if( responseMsg.DocSqlFileStream.Length > 0 )
responseMsg.DocSqlFileStream.Position = 0;
x.Complete();
}
}
...
I'm wondering what the best way is to pass the SqlFileStream back through a message contract to take advantage of streaming. Currently I copy the SqlFileStream into a MemoryStream, because I got an error message in the WCF trace which says: Type 'System.Data.SqlTypes.SqlFileStream' cannot be serialized.
In Web API there is such a thing as PushStreamContent; it allows delegating all the transaction work to an async lambda. I don't know if there is something similar in WCF, but the following approach may be helpful:
http://weblogs.asp.net/andresv/archive/2012/12/12/asynchronous-streaming-in-asp-net-webapi.aspx
You can't stream a SqlFileStream back to the client because it can only be read within the SQL transaction. I think your solution with the MemoryStream is a good way of dealing with the problem.
I had a similar problem and was worried about the large object heap when using a new MemoryStream every time. I came up with the idea of using a temporary file on disk instead of a memory stream. We are using this solution in several projects now and it works really well.
See here for the example code:
https://stackoverflow.com/a/11307324/173711