Script timeout passed, resubmit same file and import will resume error in phpmyadmin - sql

I am trying to import a large data file into phpMyAdmin, but it only does a partial import and I get this error:
Script timeout passed, if you want to finish import, please resubmit same file and import will resume.
I have followed a link that says to change the config in \phpmyadmin\libraries\config.default.php, but I cannot find this directory in my phpMyAdmin installation. OS: Ubuntu.

Go to xampp/phpMyAdmin/libraries/config.default.php, find the line $cfg['ExecTimeLimit'] = 300; (around line no 695) and replace it with $cfg['ExecTimeLimit'] = 0;
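On Ubuntu (without XAMPP), the packaged phpMyAdmin usually keeps its defaults in /usr/share/phpmyadmin/libraries/config.default.php, and overrides are better placed in /etc/phpmyadmin/config.inc.php; the exact paths may vary with your install. A minimal sketch of the override:
$cfg['ExecTimeLimit'] = 0;   // 0 removes the time limit for phpMyAdmin scripts such as imports; use with care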

I think the problem may be with the SQL statements themselves. Try surrounding the contents of your SQL file (edit it with a text editor, e.g. Notepad++) with the following statements:
SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
at the beginning, and
COMMIT;
SET unique_checks=1;
SET foreign_key_checks=1;
at the end, then import your SQL file again and give it a try.
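A minimal sketch of what the wrapped dump file might look like (the table and INSERT line are placeholders for your actual dump contents):
SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
-- ... the original statements from your dump go here, e.g.:
INSERT INTO my_table (id, name) VALUES (1, 'example');
COMMIT;
SET unique_checks=1;
SET foreign_key_checks=1;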

Related

Write to Oracle concurrent request output / log from a SQLPlus program

I have an Oracle concurrent request that calls a SQLPlus program. The program itself is working correctly, but I would like to add some logging information to the concurrent request output / log in EBS.
I have tried a number of variations of:
set heading off
--set pagesize 0 embedded on
set pagesize 50000
set linesize 32767
set feedback off
set verify off
set term off
set echo off
set newpage none
set serveroutput on
dbms_output.enable(1000000);
--prepare data
EXECUTE program (&1,&2,&3,&4,&5);
--extract data
#"path/file.SQL";
fnd_file.put_line(FND_FILE.LOG,'do some logging here');
fnd_file.put_line(FND_FILE.OUTPUT,'do some logging here');
/
But everything I've tried so far results in one of the following:
no logging added to the request output or log
no request output whatsoever
errors like:
SP2-0734: unknown command beginning "dbms_outpu..." - rest of line ignored.
and
PLS-00103: Encountered the symbol "ENABLE" when expecting one of the following: := . ( # % ;
Is it possible to write to the request output or log from a SQLPlus script that is called from concurrent manager?
First of all, your SQL*Plus script does not even run, quite apart from your attempts at logging:
dbms_output enable(...) is missing a dot ('.').
Your anonymous PL/SQL block has no end; statement.
#"path/file.SQL" is a SQL*Plus command -- it cannot be embedded in an anonymous PL/SQL block.
Aside from those basic problems, FND_FILE.PUT_LINE is only for PL/SQL concurrent programs. That is, concurrent programs whose executable points to a PL/SQL package procedure and not a .sql file under $APPL_TOP.
For SQL*Plus concurrent programs, i.e., running a .sql file under $APPL_TOP, FND_FILE.PUT_LINE does not work. Instead, your SQL*Plus output is automatically written to the request output. There is no standard way to write to the request log.
If you really need to write to the request log, you could maybe call FND_FILE.PUT_NAMES to cause FND_FILE.PUT_LINE to write to temporary files that you name. Then, knowing the concurrent request ID and the logic Oracle EBS uses to locate output and log files, do a FND_FILE.CLOSE and a host command to move the custom-named files you specified to the actual locations. That might work.
It'd be much better to redo your concurrent program as a PL/SQL package. Then FND_FILE works just fine. If you know how to call Java from the database, there is very little you can do in a .sql script that you cannot do in a PL/SQL package.
I have not written a .sql concurrent program in years, and I write concurrent programs all the time.
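If you do move it to a PL/SQL package, the concurrent program's executable points at a packaged procedure with the standard errbuf/retcode signature and FND_FILE works directly. A minimal sketch (procedure and parameter names are placeholders):
PROCEDURE main (errbuf OUT VARCHAR2, retcode OUT VARCHAR2, p_param1 IN VARCHAR2) IS
BEGIN
  fnd_file.put_line(fnd_file.log,    'log line, parameter = ' || p_param1);
  fnd_file.put_line(fnd_file.output, 'output line');
  retcode := '0';   -- 0 = success, 1 = warning, 2 = error by EBS convention
END main;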
I have resolved this problem. The solution is incredibly simple - and now I'm bent out of shape because it took so long to realize.
Step 1 - SET ECHO ON
Step 2 - PROMPT whatever you want written to concurrent request output
The following sample writes 'Output is written to this folder' to the concurrent request output.
set heading off
--set pagesize 0 embedded on
set pagesize 50000
set linesize 32767
set feedback off
set verify off
set term off
set echo on
set newpage none
set serveroutput on
prompt Output is written to this folder
--prepare data
EXECUTE program (&1,&2,&3,&4,&5);
--extract data
#"path/file.SQL";
/
This is exactly what I was looking for. Maybe this will be useful to someone in another galaxy.
If this is for testing/debugging purposes, you can specify the location of the log and output files with the routine FND_FILE.PUT_NAMES; as soon as you have logged all the required information, close the files with FND_FILE.CLOSE.
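A minimal sketch of that approach (file names and the directory are placeholders; the directory must be one the database can write to, typically listed in utl_file_dir):
BEGIN
  fnd_file.put_names('my_test.log', 'my_test.out', '/usr/tmp');
  fnd_file.put_line(fnd_file.log,    'debug: this goes to my_test.log');
  fnd_file.put_line(fnd_file.output, 'debug: this goes to my_test.out');
  fnd_file.close;
END;
/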
As Matthew mentioned, logging in SQL*Plus executables doesn't work well. If you can't move your code to a PL/SQL Stored Procedure for some reason, a Host script might work for you instead. From there, you can execute SQL, e.g. sqlplus -s $FCP_LOGIN ... and write log information as required.
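A rough sketch of such a Host script (the .sql path is a placeholder; $FCP_LOGIN is supplied by the concurrent manager, and the echo output is typically captured with the request's log):
#!/bin/sh
# minimal Host-type concurrent program wrapper (sketch)
echo "Report started: $(date)"
sqlplus -s $FCP_LOGIN @/path/to/my_report.sql
echo "sqlplus exit status: $?"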
If you just need to prepare data in PL/SQL and then spool it to CSV via SQL, you can use our company's Blitz Report instead, which does this more conveniently and is free for such use. It also uses a Host type executable and calls sqlplus from there.

How to change the max size for file upload on AOLServer/CentOS 6?

We have a portal for our customers that allows them to start new projects directly on our platform. The problem is that we cannot upload documents bigger than 10 MB.
Every time I try to upload a file bigger than 10 MB, I get a "The connection was reset" error. After some research it seems that I need to change the max size for uploads, but I don't know where to do it.
I'm on CentOS 6.4/RedHat with AOLServer.
Language: TCL.
Anyone has an idea on how to do it?
EDIT
In the end I could solve the problem with the command ns_limits set default -maxupload 500000000.
In your config.tcl, add the following line to the nssock module section:
set max_file_upload_mb 25
# ...
ns_section ns/server/${server}/module/nssock
# ...
ns_param maxinput [expr {$max_file_upload_mb * 1024 * 1024}]
# ...
It is also advised to constrain the upload time by setting:
set max_file_upload_min 5
# ...
ns_section ns/server/${server}/module/nssock
# ...
ns_param recvwait [expr {$max_file_upload_min * 60}]
If running on top of nsopenssl, you will have to set those configuration values (maxinput, recvwait) in a different section.
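For example (a sketch; the exact section name depends on how nsopenssl is configured in your setup):
ns_section ns/server/${server}/module/nsopenssl
# same limits as for nssock, applied to the SSL driver
ns_param maxinput [expr {$max_file_upload_mb * 1024 * 1024}]
ns_param recvwait [expr {$max_file_upload_min * 60}]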
I see that you are running Project Open. As well as setting the maxinput value for AOLserver, as described by mrcalvin, you also need to set 2 parameters in the Site Map:
Attachments package: parameter "MaximumFileSize"
File Storage package: parameter "MaximumFileSize"
These should be set to values in bytes, but not larger than the maxinput value for AOLserver. See the Project Open documentation for more info.
If you are running Project Open behind a reverse proxy, check the corresponding documentation for Pound or Nginx; most likely you will need to raise the file upload limit there too.

In Google Colab I get IOPub data rate exceeded

IOPub data rate exceeded.
The notebook server will temporarily stop sending output
to the client in order to avoid crashing it.
To change this limit, set the config variable
--NotebookApp.iopub_data_rate_limit.
Current values:
NotebookApp.iopub_data_rate_limit=1000000.0 (bytes/sec)
NotebookApp.rate_limit_window=3.0 (secs)
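For reference, on a self-managed Jupyter server the limit named in this message can be raised at startup with a flag such as the one below; Colab's hosted runtime presumably does not expose it, which is why the answers that follow remove the oversized output instead.
jupyter notebook --NotebookApp.iopub_data_rate_limit=1e10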
An IOPub error usually occurs when you try to print a large amount of data to the console. Check your print statements - if you're trying to print a file that exceeds 10 MB, it's likely that this caused the error. Try to read smaller portions of the file/data.
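For example, a minimal sketch that prints only the first few lines of a large file instead of the whole thing (the file name is a placeholder):
# print only the first 10 lines instead of the entire file
with open('large_file.txt') as f:
    for i, line in enumerate(f):
        if i >= 10:
            break
        print(line.rstrip())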
I faced this issue while reading a file from Google Drive to Colab.
I used this link https://colab.research.google.com/notebook#fileId=/v2/external/notebooks/io.ipynb
and the problem was in this block of code
# Download the file we just uploaded.
#
# Replace the assignment below with your file ID
# to download a different file.
#
# A file ID looks like: 1uBtlaggVyWshwcyP6kEI-y_W3P8D26sz
file_id = 'target_file_id'
import io
from googleapiclient.http import MediaIoBaseDownload
request = drive_service.files().get_media(fileId=file_id)
downloaded = io.BytesIO()
downloader = MediaIoBaseDownload(downloaded, request)
done = False
while done is False:
# _ is a placeholder for a progress object that we ignore.
# (Our file is small, so we skip reporting progress.)
_, done = downloader.next_chunk()
downloaded.seek(0)
#Remove this print statement
#print('Downloaded file contents are: {}'.format(downloaded.read()))
I had to remove the last print statement since it exceeded the 10MB limit in the notebook - print('Downloaded file contents are: {}'.format(downloaded.read()))
Your file will still be downloaded and you can read it in smaller chunks or read a portion of the file.
The above answer is correct - I just commented out the print statement and the error went away. I'm keeping this here in case someone finds it useful. Suppose you are reading a CSV file from Google Drive: just import pandas and add pd.read_csv(downloaded), and it will work just fine.
file_id = 'FILEID'
import io
import pandas as pd
from googleapiclient.http import MediaIoBaseDownload
request = drive_service.files().get_media(fileId=file_id)
downloaded = io.BytesIO()
downloader = MediaIoBaseDownload(downloaded, request)
done = False
while done is False:
# _ is a placeholder for a progress object that we ignore.
# (Our file is small, so we skip reporting progress.)
_, done = downloader.next_chunk()
downloaded.seek(0)
pd.read_csv(downloaded);
Maybe this will help (via sv1997):
IOPub Error on Google Colaboratory in Jupyter Notebook
The IOPub error occurs in Colab because you are trying to display very large output on the console itself (e.g. with print() statements).
The error is likely related to a print() call, so delete or comment out the print statement; that may resolve the error.
%cd darknet
!sed -i 's/OPENCV=0/OPENCV=1/' Makefile
!sed -i 's/GPU=0/GPU=1/' Makefile
!sed -i 's/CUDNN=0/CUDNN=1/' Makefile
!sed -i 's/CUDNN_HALF=0/CUDNN_HALF=1/' Makefile
!apt update
!apt-get install libopencv-dev
It's important to update your Makefile, and also to keep your input file name correct.

what to use instead of bulk load? Got error 'do not have permission to use the bulk load statement'

I am trying to add a text file into SQL database table using BULK INSERT.
BULK
INSERT My_Tablename
FROM 'C:\testing\temptest.txt'
WITH
(
FIELDTERMINATOR = '|',
ROWTERMINATOR = '\n'
)
GO
But I got the error 'do not have permission to use the bulk load statement'.
Is there any alternative way to do it?
I don't want to set TRUSTWORTHY ON or create a certificate for bulkadmin permission.
Try using the SQL Server Import and Export Wizard.
Right click on the database in Object Explorer within SSMS.
Go to Tasks > Import Data
Select "Flat File Source" for your data source and follow the wizard to specify delimiters, etc.
Although @SQLChao definitely has the answer, I did not remember the location of said Import Data option and simply opened the delimited file with my favorite text editor, Notepad++, and did the following find-and-replaces with Search Mode set to Extended:
Find: ' Replace: ''
Find: | Replace: ','
Find: \r\n Replace: ')\r\n
Find: \r\n Replace: \r\nINSERT INTO [DB_Name].[Schema_Name].[Table_Name] VALUES(\r\n'
The only issues should be in your first and last insert statements which can manually be edited as need be.
I then copied the text straight into SQL Server and executed it.
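For reference, after those replacements each data row should end up looking roughly like this (database, schema, table and values are placeholders):
INSERT INTO [DB_Name].[Schema_Name].[Table_Name] VALUES(
'val1','val2','val3')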

hide error messages in dcl script

I have a test script that generates some errors (shown below); I expect these errors. Is there any way I can prevent them from showing on the screen? I use
$ write sys$output
to display a message when an expected error occurs.
I tried to use
$ DEFINE SYS$ERROR ERROR.LOG
but this then redirected my entire error output to that file. If this is the correct way to handle it, can I unset it at the end of my script somehow?
[error example]
%DCL-E-OPENIN, error opening TEST$DISK:[AAA]NOTTHERE.TXT; as input
-RMS-E-FNF, file not found
%DCL-E-OPENIN, error opening TEST$DISK:[AAA]NOTTHERE.TXT; as input
-RMS-E-FNF, file not found
%DCL-W-UNDFIL, file has not been opened by DCL - check logical name
DEFINE/USER creates a logical name that disappears when the next image exits.
So if you use that just before a command just to protect that command, then fine.
Otherwise I would prefer SET MESSAGE to control the output.
And of course you want to grab $STATUS and verify it after the command for success or for the expected error, reporting any unexpected error.
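A small sketch of that pattern, suppressing the message around one command and then testing $STATUS (the file name is taken from the question's example):
$ SET MESSAGE /NOTEXT /NOSEVERITY /NOFACILITY /NOIDENTIFICATION
$ TYPE TEST$DISK:[AAA]NOTTHERE.TXT
$ status = $STATUS
$ SET MESSAGE /TEXT /SEVERITY /FACILITY /IDENTIFICATION
$ IF .NOT. status THEN WRITE SYS$OUTPUT "Got the expected error, status = ''status'"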
Better still... if you expect certain error conditions to occur,
then why not test for them?
For example:
$ file = F$SEARCH("TEST$DISK:[AAA]NOTTHERE.TXT")
$ IF file.NES."" THEN TYPE 'file'
Cheers,
Hein
To suppress error messages inside a script, try this command:
$ DEFINE/USER SYS$ERROR NL:
NL: is the null device, so you don't see any error messages displayed on your terminal.
good luck
This works interactively and in batch.
$ SET MESSAGE /NOTEXT /NOSEV /NOFAC /NOID
$ <DCL_Command>
$ SET MESSAGE /TEXT /SEV /FAC /ID