Is it possible to use ADODB.Stream for large files using pre-allocated chunks?

I'm currently using the COM object ADODB.Stream to grab very large files (multiple GB) from archive.org as an authenticated user, and consequently I'm getting an out-of-memory error at allocation time. Is there a way to use ADODB.Stream to set a chunk size and append/concatenate chunks to the output file?
Alternative solutions are welcome.
Here is my AutoHotkey script:
file_save_location := save
Overwrite := True
get_site := "https://archive.org/download/file.zip" ;;multi-gb file
post_site :="https://archive.org/account/login.php"
post_data :="username=" username_str "&password=" password_str "&remember=CHECKED&referer=https://archive.org&action=login&submit=Log in"
WebRequest := ComObjCreate( "WinHttp.WinHttpRequest.5.1" )
WebRequest.Open("POST", post_site)
WebRequest.SetRequestHeader("Content-Type", "application / zip, application / octet - stream""application / zip, application / octet - stream")
WebRequest.SetRequestHeader("Cookie", "test-cookie=1")
WebRequest.SetRequestHeader("Accept-Encoding","gzip,deflate,sdch")
WebRequest.Send(post_data)
WebRequest.Open("HEAD",get_site)
WebRequest.Send()
WebRequest.Open("GET",get_site)
WebRequest.Send()
ADODBObj := ComObjCreate( "ADODB.Stream" )
ADODBObj.Type := 1 ;;adTypeBinary
ADODBObj.Open()
;;ADODBObj.Position := 0 ;;getting warm
ADODBObj.Write( WebRequest.ResponseBody )
ADODBObj.SaveToFile(file_save_location, Overwrite ? 2:1) ;;2 = adSaveCreateOverWrite, 1 = adSaveCreateNotExist
ADODBObj.Close()
ADODBObj:=""
WebRequest:=""

Related

Why won't the BitmapImage for WizardStyle=modern resize in Inno Setup?

Bitmaps for the Inno Setup WizardImageFile (and WizardSmallImageFile) look terrible: when Windows 7 has large system fonts enabled, the wizard is bigger than usual and the images are scaled terribly.
Is there a fix?
There is no similar issue if I add my own picture somewhere, like this:
BitmapImage1.AutoSize := True;
BitmapImage1.Align := alClient;
BitmapImage1.Left := 0;
BitmapImage1.Top := 0;
BitmapImage1.Stretch := True;
BitmapImage1.Parent := Splash;
These are bitmap images; they naturally scale badly. You are just lucky that your own images do not look as bad when scaled.
You have to prepare your own set of images for the common scaling factors.
The common scaling factors used nowadays are 100%, 125%, 150% and 200%, so you should have four sizes of each image, like:
WizardImage 100.bmp
WizardImage 125.bmp
WizardImage 150.bmp
WizardImage 200.bmp
WizardSmallImage 100.bmp
WizardSmallImage 125.bmp
WizardSmallImage 150.bmp
WizardSmallImage 200.bmp
Since version 5.6, Inno Setup can automatically select the best version of the image.
Just list your versions of the images in WizardImageFile and WizardSmallImageFile. You can use wildcards:
[Setup]
WizardImageFile=WizardImage *.bmp
WizardSmallImageFile=WizardSmallImage *.bmp
On older versions of Inno Setup (or if you need to customize the selection algorithm, or when you have additional custom images in the wizard), you have to select the images programmatically.
The following example does more or less the same as what Inno Setup 5.6 does:
[Setup]
; Use 100% images by default
WizardImageFile=WizardImage 100.bmp
WizardSmallImageFile=WizardSmallImage 100.bmp
[Files]
; Embed all other sizes to the installer
Source: "WizardImage *.bmp"; Excludes: "* 100.bmp"; Flags: dontcopy
Source: "WizardSmallImage *.bmp"; Excludes: "* 100.bmp"; Flags: dontcopy
[Code]
function GetScalingFactor: Integer;
begin
  if WizardForm.Font.PixelsPerInch >= 192 then Result := 200
    else
  if WizardForm.Font.PixelsPerInch >= 144 then Result := 150
    else
  if WizardForm.Font.PixelsPerInch >= 120 then Result := 125
    else Result := 100;
end;

procedure LoadEmbededScaledImage(Image: TBitmapImage; NameBase: string);
var
  Name: String;
  FileName: String;
begin
  Name := Format('%s %d.bmp', [NameBase, GetScalingFactor]);
  ExtractTemporaryFile(Name);
  FileName := ExpandConstant('{tmp}\' + Name);
  Image.Bitmap.LoadFromFile(FileName);
  DeleteFile(FileName);
end;

procedure InitializeWizard;
begin
  { If using larger scaling, load the correct size of images }
  if GetScalingFactor > 100 then
  begin
    LoadEmbededScaledImage(WizardForm.WizardBitmapImage, 'WizardImage');
    LoadEmbededScaledImage(WizardForm.WizardBitmapImage2, 'WizardImage');
    LoadEmbededScaledImage(WizardForm.WizardSmallBitmapImage, 'WizardSmallImage');
  end;
end;
You might want to do the same for the SelectDirBitmapImage, the SelectGroupBitmapImage and the PreparingErrorBitmapImage.
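A minimal sketch of that extension, added inside InitializeWizard above (the base names below are hypothetical; you would ship your own '<base> <factor>.bmp' files, with matching dontcopy entries in [Files]):
{ Hedged sketch: apply the same loader to the other stock images. }
{ 'SelectDirImage', 'SelectGroupImage' and 'PreparingErrorImage' are }
{ hypothetical base names for bitmaps you provide yourself. }
if GetScalingFactor > 100 then
begin
  LoadEmbededScaledImage(WizardForm.SelectDirBitmapImage, 'SelectDirImage');
  LoadEmbededScaledImage(WizardForm.SelectGroupBitmapImage, 'SelectGroupImage');
  LoadEmbededScaledImage(WizardForm.PreparingErrorBitmapImage, 'PreparingErrorImage');
end;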
See also:
How to detect and "fix" DPI settings with Inno Setup?
Inno Setup Placing image/control on custom page

Delphi 10.2: Using Local SQL with Firedac Memory Tables

How can I use FireDAC LocalSQL with FDMemTable? Is there a working example available?
Following the Embarcadero DocWiki, I set up a local connection (using the SQLite driver) and a LocalSQL component, and connected some FireDAC memory tables to it. Then I connected an FDQuery and tried to query the memory tables. But the query always returns "table xyz not known", even if I set an explicit dataset name for the memory table in the LocalSQL dataset collection.
I suspect I am missing something fundamental that is not covered in the Embarcadero docs. If anyone has ever got this up and running, I would be grateful for some tips.
Here is some code I wrote for an answer here a while ago, which is a self-contained example of using LocalSQL, tested in D10.2. It should suffice to get you going. I seem to recall that the key to getting it working was a comment somewhere in the EMBA docs that FireDAC's LocalSQL is based on SQLite, as you've noted.
procedure TForm3.CopyData2;
begin
  DataSource2.DataSet := FDQuery1;
  FDConnection1.DriverName := 'SQLite';
  FDConnection1.Connected := True;
  FDLocalSQL1.Connection := FDConnection1;
  FDLocalSQL1.DataSets.Add(FDMemTable1);
  FDLocalSQL1.Active := True;
  FDQuery1.SQL.Text := 'select * from FDMemTable1 order by ID limit 5';
  FDQuery1.Active := True;
  FDMemTable1.Close;
  FDMemTable1.Data := FDQuery1.Data;
end;
procedure TForm3.FormCreate(Sender: TObject);
var
  i : integer;
  MS : TMemoryStream;
begin
  FDMemTable1.CreateDataSet;
  for i := 1 to 10 do
    FDMemTable1.InsertRecord([i, 'Row:' + IntToStr(i), 10000 - i]);
  FDMemTable1.First;
  // Following is to try to reproduce problem loading from stream
  // noted by the OP, but works fine
  MS := TMemoryStream.Create;
  try
    FDMemTable1.SaveToStream(MS, sfBinary);
    MS.Position := 0;
    FDMemTable1.LoadFromStream(MS, sfBinary);
  finally
    MS.Free;
  end;
end;
As you can see, you can refer in the SQL to an existing FireDAC dataset simply by using its component name.
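Building on that, a hedged sketch of querying two registered datasets at once (FDMemTable2 and its Qty column are assumptions, not part of the original example):
// Hedged sketch: register a second (assumed) memory table and join it
// to the first; LocalSQL sees each dataset under its component name.
FDLocalSQL1.DataSets.Add(FDMemTable2);
FDQuery1.SQL.Text :=
  'select a.ID, b.Qty from FDMemTable1 a ' +
  'join FDMemTable2 b on b.ID = a.ID';
FDQuery1.Active := True;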

Why do we need to call setReadLimit(int) in the AWS S3 Java client?

I am working with the AWS Java S3 library.
This is my code, which uploads a file to S3 using the AWS high-level API.
ClientConfiguration configuration = new ClientConfiguration();
configuration.setUseGzip(true);
configuration.setConnectionTTL(1000 * 60 * 60);
AmazonS3Client amazonS3Client = new AmazonS3Client(configuration);
TransferManager transferManager = new TransferManager(amazonS3Client);
ObjectMetadata objectMetadata = new ObjectMetadata();
objectMetadata.setContentLength(message.getBodyLength());
objectMetadata.setContentType("image/jpg");
transferManager.getConfiguration().setMultipartUploadThreshold(1024 * 10);
PutObjectRequest request = new PutObjectRequest("test", "/image/test", inputStream, objectMetadata);
request.getRequestClientOptions().setReadLimit(1024 * 10);
request.setSdkClientExecutionTimeout(1000 * 60 * 60);
Upload upload = transferManager.upload(request);
upload.waitForCompletion();
I am trying to upload a large file. It usually works, but sometimes I get the error below. I have set readLimit to 1024*10.
2019-04-05 06:41:05,679 ERROR [com.demo.AwsS3TransferThread] (Aws-S3-upload) Error in saving File[media/image/osc/54/54ec3f2f-a938-473c-94b7-a55f39aac4a6.png] on S3[demo-test]: com.amazonaws.ResetException: Failed to reset the request input stream; If the request involves an input stream, the maximum stream buffer size can be configured via request.getRequestClientOptions().setReadLimit(int)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.resetRequestInputStream(AmazonHttpClient.java:1221)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1042)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:948)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:661)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:635)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:618)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$300(AmazonHttpClient.java:586)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:573)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:445)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4041)
at com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:3041)
at com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:3026)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadPartsInSeries(UploadCallable.java:255)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInParts(UploadCallable.java:189)
at com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:121)
at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:139)
at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:47)
What is the purpose of readLimit?
How is it useful?
What should I do to avoid this kind of exception?
After researching this for a week,
I found that if the file you are uploading is smaller than 48 GB, you can set the readLimit value to 5.01 MB.
AWS splits the file into multiple parts, and each part is 5 MB (if you have not changed the minimum part size); per the AWS specs, the last part can be smaller than 5 MB. So I set readLimit to just over 5 MB and it solved the issue.
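A hedged sketch of that fix (the bucket and key are carried over from the question; the exact margin above one part size is illustrative):
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.InputStream;

public class ReadLimitFix {
    // Hedged sketch: the mark/reset buffer must cover at least one full
    // multipart part so a failed part upload can be rewound and retried.
    static PutObjectRequest buildRequest(InputStream in, ObjectMetadata meta) {
        long partSize = 5L * 1024 * 1024;   // default minimum part size
        PutObjectRequest request =
                new PutObjectRequest("test", "/image/test", in, meta);
        // a little over one part (roughly 5.01 MB), as described above
        request.getRequestClientOptions().setReadLimit((int) partSize + 10240);
        return request;
    }
}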
InputStream readLimit purpose (from the InputStream.mark javadoc):
Marks the current position in this input stream. A subsequent call to the reset method repositions this stream at the last marked position so that subsequent reads re-read the same bytes. The readLimit argument tells this input stream to allow that many bytes to be read before the mark position gets invalidated. The general contract of mark is that, if the method markSupported returns true, the stream somehow remembers all the bytes read after the call to mark and stands ready to supply those same bytes again if and whenever the method reset is called. However, the stream is not required to remember any data at all if more than readLimit bytes are read from the stream before reset is called.
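A small self-contained illustration of that contract (the stream contents and limits are arbitrary):
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class MarkResetDemo {
    public static void main(String[] args) throws IOException {
        BufferedInputStream in = new BufferedInputStream(
                new ByteArrayInputStream("hello world".getBytes()));
        in.mark(8);                // remember this position; allow up to 8 bytes
        in.read(new byte[4]);      // read fewer than readLimit bytes
        in.reset();                // fine: the mark is still valid
        System.out.println((char) in.read()); // prints 'h' again
    }
}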

Go agouti testing: performance issue filling a textarea with PhantomJS

I am using agouti with gomega and ginkgo in Go to test an upload form of our application, consisting of a textarea which we fill.
This code works fine for 1,500 rows:
upload_externalData := page.Find("#upload_externalData")
f, err := os.Open("./files/external.log")
Expect(err).NotTo(HaveOccurred())
defer f.Close()
buf := bytes.NewBuffer(nil)
_, err = io.Copy(buf, f)
Expect(err).NotTo(HaveOccurred())
externalData := buf.String()
Expect(upload_externalData.Fill(externalData)).Should(Succeed())
When I increase the imported data to the normal 25,000 rows, PhantomJS pegs one CPU core at 100% and nothing else happens.
Is there a way to make this work?
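One possible workaround (an untested sketch): bypass Fill, which may simulate input key by key, and set the textarea's value in one shot via JavaScript using agouti's RunScript. The element ID is taken from the test above; whether your app can do without real key events is an assumption here.
// Hedged sketch: set the value directly instead of typing it.
args := map[string]interface{}{"data": externalData}
err = page.RunScript(
	`document.getElementById("upload_externalData").value = data;`,
	args, nil)
Expect(err).NotTo(HaveOccurred())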

Lazarus sqldb and Transactions

I'm developing an application (for Win32 and WinCE) in Lazarus, using the sqldb components for data access.
The remote database is PostgreSQL (but I see the same behaviour with a local SQLite database).
The connection to PostgreSQL works perfectly, but when I open any query (even a very simple SELECT), the database session goes into a transaction: "idle in transaction".
var
  PGConnection: TPQConnection;
  PGTransaction: TSQLTransaction;
  myQuery: TSQLQuery;
begin
  PGConnection := TPQConnection.Create(self);
  PGTransaction := TSQLTransaction.Create(self);
  myQuery := TSQLQuery.Create(self);
  try
    PGConnection.HostName := '192.168.1.2';
    PGConnection.DatabaseName := 'testdb';
    PGConnection.UserName := 'test';
    PGConnection.Password := 'test';
    PGConnection.Transaction := PGTransaction;
    PGConnection.Open;
    myQuery.DataBase := PGConnection;
    myQuery.SQL.Add('SELECT 1 AS value');
    myQuery.Open; // <- this starts a transaction
    ShowMessage(myQuery.FieldByName('value').AsString); // <- db: "idle in transaction"
    myQuery.Close; // <- db: "idle in transaction"
    PGConnection.Close;
  finally
    myQuery.Free;
    PGConnection.Free;
    PGTransaction.Free;
  end;
end;
OK, maybe sqldb works this way: every query on the database starts a transaction, so the developer must Commit or Rollback after each query. But there is another problem: when I commit the transaction, sqldb closes the query and I can't access the retrieved values:
var
  PGConnection: TPQConnection;
  PGTransaction: TSQLTransaction;
  myQuery: TSQLQuery;
begin
  PGConnection := TPQConnection.Create(self);
  PGTransaction := TSQLTransaction.Create(self);
  myQuery := TSQLQuery.Create(self);
  try
    PGConnection.HostName := '192.168.1.2';
    PGConnection.DatabaseName := 'testdb';
    PGConnection.UserName := 'test';
    PGConnection.Password := 'test';
    PGConnection.Transaction := PGTransaction;
    PGConnection.Open;
    myQuery.DataBase := PGConnection;
    myQuery.SQL.Add('SELECT 1 AS value');
    myQuery.Open; // <- this starts a transaction
    PGConnection.Transaction.Active := False; // <- closes myQuery too
    ShowMessage(myQuery.FieldByName('value').AsString); // <- Error: Field "value" not found
    myQuery.Close;
    PGConnection.Close;
  finally
    myQuery.Free;
    PGConnection.Free;
    PGTransaction.Free;
  end;
end;
This behaviour is a bit annoying: I can't use the TSQLQuery dataset with a DBGrid (since I do not want to keep the database in a transaction for too long), so I need to move the selected data into memory tables.
Is this a bug, did I make a mistake, or is this normal operation? Is there a way to open a SELECT query and use it without starting a transaction?
This is currently normal behaviour.
I have planned an 'offline' mode where the transaction is closed but the data is kept open.
What you can currently do is save the data to a file (using the SaveToFile method), disconnect, and load the data again from the file (using the LoadFromFile method).
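A minimal sketch of that workaround, continuing the first example above (the file name is arbitrary; SaveToFile and LoadFromFile come from TSQLQuery's TBufDataset ancestor, and I'm assuming Commit closes the query just like Transaction.Active := False does):
myQuery.Open;                       // <- starts the transaction
myQuery.SaveToFile('query.dat');    // persist the fetched rows locally
PGTransaction.Commit;               // end the transaction; this closes myQuery
myQuery.LoadFromFile('query.dat');  // reopen the same rows, now offline
ShowMessage(myQuery.FieldByName('value').AsString); // works without an open transaction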