Query runs successfully and fetches empty result from user defined bucket, scope, and collection - indexing

I have set up a local single-node Couchbase cluster on Ubuntu.
A query runs and fetches results from the default bucket after importing all the JSON documents in the zip folder into it with the cbdocloader command.
Command:
/opt/couchbase/bin/cbdocloader -c localhost:8091 -u Administrator -p 10i-0113 -b mybucket -m 100 -d Downloads/JSONs_List20211229-20220123T140145Z-001.zip
However, a query against my user-defined bucket, scope, and collection returns an empty result, and I can't find the reason for this, although I have successfully imported the JSON documents using the command below:
/opt/couchbase/bin/cbimport json -c localhost:8091 -u Administrator -p 10i-0113 -b One_bucket -f lines -d file://'fileset__e53c883b-bc30-42cb-b4f7-969998c91e3d.json' -t 2 -g %type%::%id% --scope-collection-exp Raw.%type%
My guess is that when I try to create the index, it gets created on the default bucket, and I cannot find a way to create an index on my custom bucket.
Please assist.

I have fixed it :). I was not getting any results when querying the collection because there was no index created on it.
Creating the index fixed the issue.
CREATE PRIMARY INDEX ON default:onebucket.rawscope.fileset
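As a quick check once the index is built, a query scoped to the same keyspace should now return documents - a minimal sketch reusing the keyspace names from the index statement above:
SELECT META(f).id, f.* FROM default:onebucket.rawscope.fileset AS f LIMIT 10;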

Related

Can't use createdb command on Bash terminal on Windows

When I try to create a Postgres database from the Bash terminal on Windows 11, I use the commands below:
createdb 'test'
or
createdb -U postgres 'test'
Nothing happens.
I added the bin folder to the PATH under Windows "Environment Variables", but it didn't solve the problem.
What am I doing wrong?
1st solution:
sudo su - postgres to become the postgres user
psql -c "create database demo" to create the database from the shell
2nd solution:
Simply enter the following command in Bash:
$ createdb -U postgres(db user) dbname
If you configured pg_hba.conf in Postgres to allow network access to the database:
$ createdb -h YOUR_IP -U postgres(db user) dbname
Lastly, if you set a password for the database user, Postgres will ask for it before creating the database.
Note: if none of the above works, double-check your system environment variables.
For me, using the Windows command prompt directly worked. Could you first cd into your PostgreSQL bin folder (the createdb executable should be there) and then try the createdb command? If it works, there must be something wrong with your environment variable :D (you may need to restart, or just reopen your terminal).
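As another quick check, you can call createdb by its full path and bypass PATH entirely; the path below assumes a default PostgreSQL 14 install location, so adjust the version and directory to match your machine:
"C:\Program Files\PostgreSQL\14\bin\createdb.exe" -U postgres test
If that works but the bare createdb does not, the PATH entry is the problem.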

Export Query Result as CSV file from Docker PostgreSQL container to local machine

I'm not sure if this is possible or if I'm doing something wrong, since I'm still pretty new to Docker. Basically, I want to export a query result from inside a PostgreSQL Docker container as a CSV file on my local machine.
This is how far I've got. First, I run my PostgreSQL Docker container with this command:
sudo docker run --rm --name pg-docker -e POSTGRES_PASSWORD=something -d -p 5432:5432 -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data postgres
Then I access the container with docker exec and run a psql command that copies the query result to a CSV file at a specified location, like this:
\copy (select id,value from test) to 'test_1.csv' with csv;
I thought that would export the query result as a CSV file named test_1.csv on my local machine, but I couldn't find the file anywhere. I also checked both of these directories: $HOME/docker/volumes/postgres and /var/lib/postgresql/data.
You can export the data to STDOUT and pipe the result to a file on the client machine:
docker exec -it -u database_user_name container_name \
psql -d database_name -c "COPY (SELECT * FROM table) TO STDOUT CSV" > output.csv
-c tells psql to execute the given SQL statement once the connection is established.
So your command should look like this:
docker exec -it -u postgres pg-docker \
psql -d yourdb -c "COPY (SELECT * FROM test) TO STDOUT CSV" > test_1.csv
The /var/lib/postgresql/data directory is where the database server stores its data files. It isn't a directory that users need to manipulate directly, and it's not where your exported file ends up.
Paths like test_1.csv are relative to the working directory. The default directory when you enter the postgres container with docker exec is /, so that's where your file should be. You can also switch to another directory with cd before running psql:
root@b9e5a0572207:/# cd /some/other/path/
root@b9e5a0572207:/some/other/path# psql -U postgres
... or you can provide an absolute path:
\copy (select id,value from test) to '/some/other/path/test_1.csv' with csv;
You can use docker cp to transfer a file from the container to the host:
docker cp pg-docker:/some/other/path/test_1.csv /tmp
... or you can create a volume if this is something you do often.
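If you go the volume route, a minimal sketch (using an arbitrary host directory ~/exports) is to bind-mount it into the container when you start it:
docker run --rm --name pg-docker -e POSTGRES_PASSWORD=something -d -p 5432:5432 -v $HOME/docker/volumes/postgres:/var/lib/postgresql/data -v $HOME/exports:/exports postgres
Then, inside psql, write the CSV to that mount and, permissions allowing, it shows up directly in ~/exports on the host:
\copy (select id,value from test) to '/exports/test_1.csv' with csv;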

How can I import a SQL Server RDS backup into a SQL Server Linux Docker instance?

I've followed the directions from the AWS documentation on importing / exporting a database from RDS using their stored procedures.
The command was similar to:
exec msdb.dbo.rds_backup_database
    @source_db_name='MyDatabase',
    @s3_arn_to_backup_to='my-bucket/myBackup.bak'
This part works fine, and I've done it plenty of times in the past.
However, what I want to achieve now is restoring this database to a local SQL Server instance, and I'm struggling at this point. I'm assuming this isn't a "normal" SQL Server dump, but I'm unsure what the difference is.
I've spun up a new SQL Server for Linux Docker instance, which seems all set. I've made a few changes so that the sqlcmd tool is installed, so technically the image I'm running is built from this Dockerfile; not much different.
FROM microsoft/mssql-server-linux:2017-latest
RUN apt-get update && \
apt-get install -y curl && \
curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
apt-get update && \
apt-get install -y mssql-tools unixodbc-dev
This image works fine; I'm building it via docker build -t sql . and running it via docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=myPassword1!' -p 1433:1433 -v $(pwd):/backups sql
Within my local folder, I have my backup from RDS downloaded, so this file is now in /backups/myBackup.bak
I now try to run sqlcmd to import the data with the following command, and I'm running into an issue which makes me assume this isn't a traditional SQL dump. I'm unsure what a traditional SQL dump looks like, but the majority of the file looks garbled with ^#^#^#^# and other things.
/opt/mssql-tools/bin/sqlcmd -S localhost -i /backups/myBackup.bak -U sa -P myPassword1! -x
And finally, I get this error:
Sqlcmd: Error: Syntax error at line 56048 near command 'GO' in file '/backups/myBackup.bak'.
Final Answer
My final solution mainly came from using -Q to run a RESTORE query rather than feeding the file in with -i, but I also needed to include some MOVE options because the logical files pointed at Windows file paths.
/opt/mssql-tools/bin/sqlcmd -U SA -P myPassword -Q "RESTORE DATABASE MyDatabase FROM DISK = N'/path/to/my/file.bak' WITH MOVE 'mydatabase' TO '/var/opt/mssql/mydatabase.mdf', MOVE 'mydatabase_log' TO '/var/opt/mssql/mydatabase.ldf', REPLACE"
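If you're unsure which logical file names to pass to MOVE, they can be listed first with RESTORE FILELISTONLY - a small sketch reusing the same backup path (the logical names in your backup may well differ from 'mydatabase' and 'mydatabase_log'):
/opt/mssql-tools/bin/sqlcmd -U SA -P myPassword -Q "RESTORE FILELISTONLY FROM DISK = N'/path/to/my/file.bak'"
The LogicalName column of the output is what the MOVE clauses refer to.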
You should use the RESTORE DATABASE command to interact with your backup file instead of specifying it as an input file of commands to the database:
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P myPassword1! -Q "RESTORE DATABASE MyDatabase FROM DISK='/backups/myBackup.bak'"
According to the sqlcmd Docs, the -i flag you used specifies:
The file that contains a batch of SQL statements or stored procedures.
That flag likely won't work properly if given a database backup file as an argument.

PgSQL - Export select query data directly to Amazon S3 with headers

I have a requirement where I need to export report data directly to CSV, since getting the array/query response, then building the CSV, and then uploading the final CSV to Amazon takes time. Is there a way I can create the CSV directly with Redshift PostgreSQL?
PgSQL - export select query data directly to Amazon S3 with headers.
Here is my version of PgSQL: 8.0.2 on Amazon Redshift.
Thanks
You can use the UNLOAD statement to save results to an S3 bucket. Keep in mind that this will create multiple files (at least one per compute node).
You will have to download all the files, combine them locally, sort them (if needed), then add column headers and upload the result back to S3.
Doing this from an EC2 instance shouldn't take a lot of time - the connection between EC2 and S3 is quite good.
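As a rough sketch of what the UNLOAD might look like (the query, S3 prefix, and credentials are placeholders; you could also authenticate with the IAM_ROLE option instead of keys):
UNLOAD ('SELECT id, value FROM report_table')
TO 's3://path_to_files_on_s3/bucket/files_prefix'
CREDENTIALS 'aws_access_key_id=<access-key>;aws_secret_access_key=<secret-key>'
DELIMITER AS '\t'
ALLOWOVERWRITE;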
In my experience, the quickest method is to use shell commands:
# run query on the redshift
export PGPASSWORD='__your__redshift__pass__'
psql \
-h __your__redshift__host__ \
-p __your__redshift__port__ \
-U __your__redshift__user__ \
__your__redshift__database__name__ \
-c "UNLOAD __rest__of__query__"
# download all the results
s3cmd get s3://path_to_files_on_s3/bucket/files_prefix*
# merge all the files into one
cat files_prefix* > files_prefix_merged
# sort merged file by a given column (if needed)
sort -n -k2 files_prefix_merged > files_prefix_sorted
# add column names to destination file
echo -e "column 1 name\tcolumn 2 name\tcolumn 3 name" > files_prefix_finished
# add merged and sorted file into destination file
cat files_prefix_sorted >> files_prefix_finished
# upload destination file to s3
s3cmd put files_prefix_finished s3://path_to_files_on_s3/bucket/...
# cleanup
s3cmd del s3://path_to_files_on_s3/bucket/files_prefix*
rm files_prefix* files_prefix_merged files_prefix_sorted files_prefix_finished

Opening a SQL file and running it in PostgreSQL has an issue with the path because of a space in the folder name

I'm using the command line to run the following script:
C:\Progra~1\pgAdmin III\1.16\psql -d [tablename] -h [servername] -p 5432 -U postgres -f C:\test\query.sql
But the issue comes from the folder pgAdmin III that I want to run the query from, since it has a space in the name. When I changed the actual folder name to pgAdminIII and updated the script, it ran just fine. I was wondering how I can run this script without physically modifying the folder name (i.e. keeping it as pgAdmin III)?
Put double quotes around a path with spaces in it:
"C:\Progra~1\pgAdmin III\1.16\psql" -d [tablename] ...