How can I add a geometry column using pgAdmin? - sql

The following is the script I was using:
ALTER TABLE locations ADD COLUMN geom geometry(PointZ,4326);
I got this error:
ERROR: type "geometry" does not exist
LINE 1: ALTER TABLE steven_requests ADD COLUMN geom geometry(PointZ,...
^
SQL state: 42704
Character: 45
Thank you!

I found the way to deal with the missing extension:
I went into the Applications folder on my Mac, opened the PostgreSQL folder, and then opened "Application Stack Builder"; from there, I was able to install the PostGIS extension.
Finally, this query was able to work:
ALTER TABLE locations ADD COLUMN geom geometry(PointZ,4326);
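If the geometry type is still not found after installing the PostGIS binaries, the extension usually also has to be enabled in the specific database you are working in. A minimal sketch, run from pgAdmin's Query Tool against that database:
-- enable PostGIS for the current database (safe to re-run)
CREATE EXTENSION IF NOT EXISTS postgis;
-- after which the original statement should succeed
ALTER TABLE locations ADD COLUMN geom geometry(PointZ, 4326);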

Related

Problems loading data from CSV into a PostgreSQL table with pgAdmin

I am at a loss. I am on a Mac using pgAdmin 4 with PostgreSQL. I tried to use the import/export data wizard (right-click on the table I created) and get this error: ERROR: extra data after last expected column. All of the columns match up and there is no additional data. I don't know what needs to change.
So then I tried to create it with the Query Tool using the following code (renaming things to put here):
create schema schema_name;
create table schema_name.tablename( column1 text, column2 text...); ***all the columns are a text data type***
copy schema_name.tablename
from '/Users/me/downloads/filename.csv'
delimiter ',' header csv;
and get this error message:
ERROR: could not open file "/Users/me/downloads/filename.csv" for reading: Permission denied
HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
SQL state: 42501
Going to the properties for that database, then to security and privileges, I made all privileges public. But that did nothing, and so I am here to ask y'all for help.
The goal is to successfully import the data from the CSV.
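Following the HINT in the error message, one client-side approach is psql's \copy, which reads the file with the client's permissions rather than the server's. A minimal sketch using the same placeholder table and path as above:
\copy schema_name.tablename from '/Users/me/downloads/filename.csv' with (format csv, header true, delimiter ',')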

Getting a Databricks drop schema error for delta table

I have a delta table schema that needs new columns/changed data types (usually I do this on non-delta tables and those work fine).
I have already dropped the existing delta table, then tried dropping the schema and got a 'v1 session catalog' error.
I am currently using SQL, a 10.4 LTS cluster, Spark 3.2.1, Scala 2.12 (I can't change these computes); driver and workers are Standard E_v4.
What I already did, and it worked as usual:
drop table if exists dbname.tablename;
What I wanted to do next:
drop schema if exists dbname.tablename;
The error I got instead:
Error in SQL statement: AnalysisException: Nested databases are not supported by v1 session catalog: dbname.tablename
When I try recreating the schema in the same location I get the error:
AnalysisException: The specified schema does not match the existing schema at dbfs:locationOfMy/table
... Differences
-Specified schema has additional fields newColNameIAdded, anotherNewColIAdded
-Specified type for myOldCol is different from existing schema ...
If your intention is to keep the existing schema, you can omit the schema from the create table command. Otherwise please ensure that the schema matches.
How can I do the schema drop and re-register it in same location and same name with new definitions?
Answering a month later since I didn't get replies and found the right solution:
Delta tables leave behind partition files and logs that cannot be removed by the drop commands. I had to manually delete them, depending on where my table's location was.
Try this:
dbutils.fs.rm(path, True)
Use the path of your schema.
Then create your table again.
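Putting the steps together, a minimal SQL sketch (dbname.tablename and the dbfs location are the placeholders from the question, and the column names and types are purely illustrative):
-- drop the old catalog entry
drop table if exists dbname.tablename;
-- then remove the leftover Delta files with dbutils.fs.rm(path, True) as shown above,
-- and re-register the table at the same location and name with the new definition
create table dbname.tablename (
  myOldCol string,             -- changed type (illustrative)
  newColNameIAdded string,     -- new column (illustrative)
  anotherNewColIAdded string   -- new column (illustrative)
) using delta
location 'dbfs:/locationOfMy/table';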

Cannot add a column after deleting another column in BigQuery

I cannot imagine there is such an issue in BigQuery:
Let's say I drop a column using the below command in the BQ console for the User table:
Alter table User drop column name -> successful
I am aware this column is preserved for 7 days (for time travel duration purposes).
But I cannot add any column anymore by running the below command in the BQ console:
ALTER TABLE User add column first_name STRING
Because it gives an error like the one below, even though the two columns have totally different names:
Column name was recently deleted in the table User. Deleted column name is reserved for up to the time travel duration, use a different column name instead.
The above error is the same as when I try to drop the same column again, even with IF EXISTS:
Alter table User drop IF EXISTS column name
My question:
Why does this issue happen? After 7 days, can I add new columns as usual?
I have recreated your issue wherein I dropped a column named employee_like_2 and then tried to add a new column named new_column.
There is already a bug filed for this issue. You may click on +1 to bring more attention to the issue and STAR the issue so that you can be notified of updates.
In the meantime, a possible workaround is to manually add columns through the BigQuery UI.
Apart from the UI solution suggested by @Scott B, we can also do it using the bq command-line tool:
Basically, bq query --use_legacy_sql=false 'ALTER TABLE User add column first_name STRING' will fail to add a column, but I found a workaround.
I can run the bq update command instead, like below:
bq show --schema --format=prettyjson DATASET.User > user_schema.json
Add the new column I want into the file user_schema.json (an example entry is shown below)
bq update DATASET.User user_schema.json
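For reference, the edit to user_schema.json just appends an object for the new column at the end of the JSON array, e.g. (assuming a NULLABLE STRING column; adjust type and mode as needed):
[
  ... existing column entries ...,
  {"name": "first_name", "type": "STRING", "mode": "NULLABLE"}
]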
So this basically means it is 100% a bug in the BigQuery SQL command.

Rename Hive schema/database

I'm using Hive 2.1.1 - and I need to rename one of my schemas.
I tried this command: ALTER SCHEMA name RENAME TO new_name;
But got this error back: FAILED: ParseException line 1:13 cannot recognize input near 'name' 'RENAME' 'TO' in alter database statement.
Does anyone know how I can rename my schema?
Thanks!!

How to add a txt file into a database table in NetBeans

I want to add some data into a table in NetBeans. I have the values in .txt and .xls format. Is there any way to insert them into the table?
I found this link to do the work.
Link
But it showed an error when I ran this code:
Create or Replace Directory cre_dir as 'C:\Users\Srinivasan\Desktop\SQL';
Error
Error code -1, SQL state 42X01: Syntax error: Encountered "OR" at line 1, column 8.
What is the mistake I am making here?
You need to grant the CREATE DIRECTORY privilege to the user you are using, or the user has to have the DBA privilege.
The path you are specifying must exist.
Read more about directories here.
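A minimal sketch of that grant, assuming an Oracle database and a hypothetical user name srinivasan (in Oracle the relevant system privilege is CREATE ANY DIRECTORY):
-- run as a privileged user; the user name is only a placeholder
GRANT CREATE ANY DIRECTORY TO srinivasan;
-- after which the original statement can be run by that user
CREATE OR REPLACE DIRECTORY cre_dir AS 'C:\Users\Srinivasan\Desktop\SQL';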