How do I output the entire access log for a specific IP address only (from access.log)?
In other words, an access log that only contains details from a specific IP address.
If you are familiar with Unix command-line tools, you can also try grep or awk. Here's how you can use grep; replace the IP with the one you are looking for and access.log with the full path to that file on your system.
grep 123.345.678.129 access.log
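Note that grep does a substring match (and the dots are treated as regex wildcards). If you want an exact match on the client address, awk can compare the first field instead; this is a sketch that assumes the IP is the first column, as it is in the common/combined log format:
awk '$1 == "123.345.678.129"' access.log > access-only-this-ip.log
The redirect writes the matching lines to a new file containing only that address.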
Hope that helps!
Copy the file and paste it to your desktop or any other directory.
Now open a new Excel file.
In Excel, go to Data > From File, set the file type to All Files, open the file, and import it.
Select Fixed width as the original data type and finish.
Select the first column and click Data > Filter.
You can now filter the data by IP.
For example, I want to open a PDF file in the browser from the command line (just because it's much faster and I need to open many files at once). When I use the command start [file name] from its directory, it tries to open it as an executable, so I need to open the browser and type the full path of the file as an argument. Is there a way to get the full path without typing it?
What I actually need is the full path of a file as a string (for example, to use in the browser).
Using tab completion may help. For example, if your target file is named thisPDFisTotallyBananas.pdf and you have another file in the same folder named thisOtherPDFisNot.pdf, you could type thisP and then TAB to complete the file name in the command prompt without needing to type the whole filename.
I am new to Postgres and am probably missing something silly (like the correct name of my directory). Can someone guide me?
I am following the instructions in the book Practical SQL by Anthony DeBarros.
Code:
copy us_counties_2010 from 'C:\Users\obella\OneDrive\Desktop\us_counties_2010.csv' with (FORMAT CSV, HEADER);
Expected:
Query returned successfully: 3143 rows affected
Actual:
ERROR: could not open file "C:\Users\obella\OneDrive\Desktop\us_counties_2010.csv" for reading: Permission denied
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
SQL state: 42501
All that needs to be done is:
Go to the Properties of that particular file by right-clicking on it, then open the Security tab of the Properties dialog and click Edit. When the Permissions dialog appears, click Add, type Everyone (without apostrophes) in the "Enter the object names to select" box, and click OK. Then make sure all the permissions for Everyone are selected by ticking the "Full Control" checkbox, which allows access without any restriction. Finally, click Apply and OK on all the dialogs to apply the changes.
You can now run the query without any errors.
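If you prefer the command line, the same change can be made with the built-in icacls tool (a sketch using the path from the question; the Everyone group name may differ on non-English versions of Windows):
icacls "C:\Users\obella\OneDrive\Desktop\us_counties_2010.csv" /grant Everyone:F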
As the message tells you, Postgres is not allowed to read the file.
If you want to fix that, open the Task Manager and click "Show processes from all users". Look for the rows with the image name postgres.exe (there will likely be more than one) and note the value in the "User Name" column (it's probably NETWORK SERVICE). Then open the properties of your file, add that user in the "Security" tab, and grant read access to them.
Or use psql's \copy as the message suggests.
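For example, from the psql prompt (a sketch using the table and path from the question; \copy reads the file with your own Windows permissions, so the server's file access doesn't matter, and the whole command must stay on one line):
\copy us_counties_2010 FROM 'C:\Users\obella\OneDrive\Desktop\us_counties_2010.csv' WITH (FORMAT CSV, HEADER)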
copy us_counties_2010 (country_code, latitude, longitude, country, population) -- replace with your table's column names
FROM 'D:\us_counties_2010.csv' DELIMITER ',' CSV HEADER;
Your CSV file should be on a drive other than C:; it can't access your C: drive here. Store the file on the D: drive or any other and it will work fine.
Change the location of the data file and the path to another drive ['D:\us_counties_2010.csv'] and it will work.
The permission is denied because your file [us_counties_2010.csv] is on the C: drive ['C:\Users\obella\OneDrive\Desktop\us_counties_2010.csv'], which is the Windows system drive; its permissions are restricted and cannot be changed easily, and not at all without administrative privileges.
Good Luck & happy programming!
If you are using psql, run it as administrator; then you shouldn't have any problems when using COPY.
In the case of creating a table and then importing data from a CSV file, we can skip the COPY query and use the program (pgAdmin) itself. To do this, simply right-click on your table in the tree on the left and select the Import/Export… menu item.
A window will appear with the slider set to Import. Then select the source file and set the format to CSV. Set the Header to Yes if your file has a header. The only thing left is to select the delimiter (usually a comma).
When you click OK, the data will be imported.
For a better understanding, you can refer to the original article:
https://learnsql.com/blog/how-to-import-csv-to-postgresql/
Use this instead:
copy us_counties_2010 from 'C:\Users\obella\OneDrive\Desktop\us_counties_2010.csv' with (FORMAT CSV, HEADER, DELIMITER ',');
Is there a way to mass update the TableStorage entities?
Say, I want to rename all the Clients having "York" in the City field to "New-York".
Are there any tools to do it directly (without the need to write code)?
You could try to use Microsoft Azure Storage Explorer to achieve it.
First, you have some entities in table storage with a City field in your Storage Explorer.
Then you could click the Export button to export all your entities to a .csv file.
Open the exported file, press Ctrl + F and choose Replace.
Fill in the find and replace fields with what you want, then choose Replace All.
Finally, go back to Storage Explorer and click the Import button to import the .csv file you edited.
I wanted to do the trick with export/import, but it's a no-go when you have millions of records. I exported all the records and ended up with a ~5 GB file. Azure Storage Explorer couldn't handle it (my PC: i7, 32 GB RAM).
If someone is also struggling with a similar issue, you can do as follows:
Export records to csv file
Remove the lines that you don't want to modify (if needed). You can use grep "i_want_this_phrase" myfile > mynewfile, or use the -v option to keep everything that doesn't match the given phrase. If the file is too large, split it with a command such as cat bigFile.csv | parallel --header : --pipe -N999 'cat >file_{#}.csv'
Remove everything except the RowKey column.
Prepare an az cli command similar to az storage entity merge --connection-string 'XXX' --account-name your_storage -t your_table -e PartitionKey=your_pk MyColumn=false MyColumn#odata.type=Edm.Boolean RowKey=. Remember about odata.type: at first I did the update without it and my booleans were turned into strings. Luckily it was easy to fix.
Open the file in VS Code, select all with Ctrl+A, then press Shift+Alt+I to put a cursor at the end of every line, and paste the previously prepared az cli command. This way you will get a list of az cli updates, one for each RowKey.
Add #!/bin/bash at the beginning of the file, save it as a .sh file, make it executable with chmod +x yourfile, and run it.
Of course, if you want, you can instead write a bash script that reads the file line by line and executes the az command for each RowKey; a sketch of that is shown below. I just did it my way as it was much simpler for me; I'm not that experienced in bash, so it would have taken me a while to develop and test the script.
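For reference, a minimal sketch of such a script (assuming a file rowkeys.txt with one RowKey per line; 'XXX', your_storage, your_table, your_pk, and MyColumn are the same placeholders as in the command above):
#!/bin/bash
# Apply the same merge to every RowKey listed in rowkeys.txt
while IFS= read -r rowkey; do
  az storage entity merge --connection-string 'XXX' --account-name your_storage \
    -t your_table -e PartitionKey=your_pk RowKey="$rowkey" \
    MyColumn=false MyColumn#odata.type=Edm.Boolean
done < rowkeys.txt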
I have a file on the D: drive of my computer and I want to copy it to the SAP application server so that I can see it with transaction AL11.
I know that I can create a file with AL11, but I want to do this in ABAP.
In my search I found this code, but I could not solve my problem with it.
data: unixcom like rlgrap-filename.
data: begin of tabl occurs 500,
        line(400),
      end of tabl.

unixcom = 'mkdir mydir'. "command to create dir
"to execute the unix command
call 'SYSTEM' id 'COMMAND' field unixcom
              id 'TAB' field tabl[].
To upload the file to the application server, there are three steps to follow.
Step 1: open the target file on the application server for writing: OPEN DATASET <filename> FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
Step 2: write the data into the file: TRANSFER <data> TO <filename>.
Step 3: don't forget to close the file once the data is transferred: CLOSE DATASET <filename>.
Please mark as the correct answer if it helps! :)
If you want to do this using ABAP you could create a small report that uses the function module GUI_UPLOAD to get the file from your local disk into an internal table and then write it to the application server with something like this:
DATA: lv_filename TYPE string,
      lt_contents TYPE TABLE OF string, " file contents, e.g. filled by GUI_UPLOAD
      lv_line     TYPE string.

lv_filename = '\\path\to\al11\directory\file.txt'.
OPEN DATASET lv_filename FOR OUTPUT IN TEXT MODE ENCODING UTF-8.
LOOP AT lt_contents INTO lv_line.
  TRANSFER lv_line TO lv_filename.
ENDLOOP.
CLOSE DATASET lv_filename.
I used the CG3Z transaction, and with it I was able to copy a file into the application server directory.
Is there any app for Mac to split SQL files, or even a script?
I have a large file which I have to upload to hosting that doesn't support files over 8 MB.
*I don't have SSH access
You can use this: http://www.ozerov.de/bigdump/
Or
Use this command to split the SQL file:
split -l 5000 ./path/to/mysqldump.sql ./mysqldump/dbpart-
The split command takes a file and breaks it into multiple files. The -l 5000 part tells it to split the file every five thousand lines. The next bit is the path to your file, and the next part is the path you want to save the output to. Files will be saved as whatever filename you specify (e.g. “dbpart-”) with an alphabetical letter combination appended.
Now you should be able to import your files one at a time through phpMyAdmin without issue.
More info http://www.webmaster-source.com/2011/09/26/how-to-import-a-very-large-sql-dump-with-phpmyadmin/
This tool should do the trick: MySQLDumpSplitter
It's free and open source.
Unlike the accepted answer to this question, this app will always keep extended inserts intact so the precise form of your query doesn't matter; the resulting files will always have valid SQL syntax.
Full disclosure: I am a shareholder of the company that hosts this program.
The UploadDir feature in phpMyAdmin could help you, if you have FTP access and can modify your phpMyAdmin's configuration (or are allowed to install your own instance of phpMyAdmin).
http://docs.phpmyadmin.net/en/latest/config.html?highlight=uploaddir#cfg_UploadDir
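If you can edit config.inc.php, the setting looks something like this (the directory name here is just an example); files you place in that directory, for instance via FTP, then show up for selection on phpMyAdmin's Import page:
$cfg['UploadDir'] = './upload';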
You can split the dump into working SQL statements with:
csplit -s -f db-part db.sql "/^# Dump of table/" "{99}"
This makes files named db-part00, db-part01, … from db.sql (the "{99}" lets the pattern repeat up to 99 times).
You can use "CREATE TABLE" or "INSERT INTO" as the split pattern instead of "# Dump of ...".
Also: Avoid installing any programs or uploading your data into any online service. You don't know what will be done with your information!