I have set up the code as described in this question.
Creating an alias works, as does dropping it. For members that I have created myself this works correctly, but for existing members I get the following error when selecting from the alias:
SQL State: 42704
Vendor Code: -204
Message: [SQL0204] MyMemberName in MyLib type *FILE not found.
Cause . . . . . : MyMemberName in
TPLWHS type *FILE was not found. If the member name is *ALL, the table
is not partitioned. If this is an ALTER TABLE statement and the type
is *N, a constraint or partition was not found. If this is not an
ALTER TABLE statement and the type is *N, a function, procedure,
trigger or sequence object was not found. If a function was not found,
MyMemberName is the service program that contains the function. The
function will not be found unless the external name and usage name
match exactly. Examine the job log for a message that gives more
details on which function name is being searched for and the name that
did not match.
Recovery . . . : Change the name and try the request
again. If the object is a node group, ensure that the DB2 Multisystem
product is installed on your system and create a nodegroup with the
CRTNODGRP CL command. If an external function was not found, be sure
that the case of the EXTERNAL NAME on the CREATE FUNCTION statement
exactly matches the case of the name exported by the service program.
Any help you can offer is much appreciated. Thanks!
EDIT: Here is my code:
create alias MyLib.MyAlias for MyLib.MyLogicalFile(MyMember);
select * from MyLib.MyAlias;
drop alias MyLib.MyAlias;
The format of Lib.Alias has worked for me when I directly created the physical and logical members. Perhaps the logical file is missing? I'll double check...
This error message can indicate that the file/logical file/member does not exist.
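One way to verify is to ask the catalog which members actually exist. A minimal sketch, assuming the QSYS2.SYSPARTITIONSTAT catalog view is available on your release, using placeholder names MYLIB and MYPHYSICALFILE (the catalog stores names in uppercase):

-- List the members (TABLE_PARTITION) the catalog knows for the underlying physical file
SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_PARTITION
FROM QSYS2.SYSPARTITIONSTAT
WHERE TABLE_SCHEMA = 'MYLIB'
  AND TABLE_NAME = 'MYPHYSICALFILE';

If MyMember does not appear in the result, the alias points at a member that does not exist, which would produce exactly this SQL0204.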
Related
I need to make a query against a dataset provided by a public project. I created my own project and added their dataset to my project. There is a table named domain_public. When I query this table I get this error:
Query Failed
Error: Not found: Dataset my-project-name:domain_public was not found in location US
Job ID: my-project-name:US.bquijob_xxxx
I am from a non-US country. What is the issue and how can I fix it, please?
EDIT 1:
I changed the processing location to asia-northeast1 (I am based in Singapore), but I get the same error:
Error: Not found: Dataset censys-my-projectname:domain_public was not found in location asia-northeast1
Here is a view of my project and the public project censys-io:
Please advise.
EDIT 2:
The query I was typing, based on the Censys tutorial, is:
#standardsql
SELECT domain, alexa_rank
FROM domain_public.current
WHERE p443.https.tls.cipher_suite = 'some_cipher_suite_goes_here';
When I changed the FROM clause to:
FROM `censys-io.domain_public.current`
And the last line to:
WHERE p443.https.tls.cipher_suite.name = 'some_cipher_suite_goes_here';
It worked. Should I take it that I should always include projectname.dataset.table (if I'm using the correct terms) and report the typo to Censys? Or is this a special case for this project for some reason?
BigQuery can't find your data
How to fix it
Make sure your FROM location contains 3 parts
A project (e.g. bigquery-public-data)
A dataset (e.g. hacker_news)
A table (e.g. stories)
Like so
`bigquery-public-data.hacker_news.stories`
*note the backticks
Examples
Wrong
SELECT *
FROM `stories`
Wrong
SELECT *
FROM `hacker_news.stories`
Correct
SELECT *
FROM `bigquery-public-data.hacker_news.stories`
In the Web UI, click the Show Options button and then select your location for "Processing Location"!
Specify the location in which the query will execute. Queries that run in a specific location may only reference data in that location. For data in US/EU, you may choose Unspecified to run the query in the location where the data resides. For data in other locations, you must specify the query location explicitly.
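If you are unsure where a dataset actually lives, you can ask the region-qualified INFORMATION_SCHEMA views. A minimal sketch, assuming standard SQL; substitute your own region for region-us (e.g. region-asia-northeast1):

-- Lists your project's datasets stored in the US multi-region, with their locations
SELECT schema_name, location
FROM `region-us`.INFORMATION_SCHEMA.SCHEMATA;

The dataset you query and the processing location must match, so run this per candidate region until the dataset shows up.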
Update
As stated above, queries that run in a specific location may only reference data in that location.
Assuming that the censys-io.domain_public dataset has its data in the US, you need to specify US for the Processing Location.
The problem turned out to be due to a wrong table name in the FROM clause.
The right FROM clause should be:
FROM `censys-io.domain_public.current`
While I was typing:
FROM domain_public.current
So the project name is required in the FROM clause, and the backticks are required because of the hyphen in the project name.
Make sure your FROM location contains 3 parts, as @stevec mentioned:
A project (e.g. bigquery-public-data)
A dataset (e.g. hacker_news)
A table (e.g. stories)
But in my case, I was using Legacy SQL within the Google Apps Script editor, so in that case you need to set useLegacySql to false, for example:
var projectId = 'xxxxxxx';
var request = {
  // With useLegacySql: false, standard SQL expects the full `project.dataset.table` path in backticks
  query: 'SELECT * FROM `project.dataset.table`',
  useLegacySql: false
};
var queryResults = BigQuery.Jobs.query(request, projectId);
Check the exact case (upper or lower) and spelling of the table or view name. Copy it from the table definition and your problem will be solved.
I was using FPL009_Year_Categorization instead of FPL009_Year_categorization (c typed as C) and getting the error "not found in location asia-south1".
I copied the name with the exact case and the problem was resolved.
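To avoid retyping the name at all, you can copy the exact spelling from the catalog. A minimal sketch, assuming standard SQL and a placeholder dataset name my_dataset:

-- Table names are returned exactly as stored, with their original casing
SELECT table_name
FROM my_dataset.INFORMATION_SCHEMA.TABLES
ORDER BY table_name;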
In the BigQuery console, go to the Explorer on the left pane, click the three dots next to your table, then select the Query option from the list. This step confirms you have chosen the correct project and dataset. Then you can edit the query in the query pane on the right.
Maybe the dataset location was changed in the Create Dataset option; it should be US or the default location.
I have a function module that counts some objects in the SAP system and exports the total as a single INT4. But when I try to use it in a gateway service, it gives me the error
"no output table mapped". How can I overcome this? I tried to put the variable in a table and export that instead, but I couldn't get it to work.
DATA: EV_ENQ       TYPE STANDARD TABLE OF seqg3,
      EV_TABLESIZE TYPE sytabix. " total lock count returned by ENQUEUE_READ (type assumed)

CALL FUNCTION 'ENQUEUE_READ'
  EXPORTING
    guname = '*'
  IMPORTING
    number = EV_TABLESIZE
  TABLES
    enq    = EV_ENQ.
Ev_Tablesize is the variable that I want to export. It holds the total lock count.
Your parameter should be mapped under your service implementation in transaction SEGW. If it is not, map it again and make sure the parameter is displayed.
I am getting the BigQuery table name at runtime and I pass that name to the BigQueryIO.write operation at the end of my pipeline to write to that table.
The code that I've written for it is:
rows.apply("write to BigQuery", BigQueryIO
    .writeTableRows()
    .withSchema(schema)
    .to("projectID:DatasetID." + tablename)
    .withWriteDisposition(WriteDisposition.WRITE_TRUNCATE)
    .withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED));
With this syntax I always get the error:
Exception in thread "main" java.lang.IllegalArgumentException: Table reference is not in [project_id]:[dataset_id].[table_id] format
How can I pass the table name in the correct format when I don't know beforehand which table the data should go in? Any suggestions?
Thank You
Very late to the party on this; however, I suspect the issue is that you were passing in a string, not a table reference.
If you create a TableReference, I suspect you'd have no issues with the above code:
com.google.api.services.bigquery.model.TableReference table = new TableReference()
    .setProjectId(projectID)
    .setDatasetId(DatasetID)
    .setTableId(tablename);

rows.apply("write to BigQuery", BigQueryIO
    .writeTableRows()
    .withSchema(schema)
    .to(table)
    .withWriteDisposition(WriteDisposition.WRITE_TRUNCATE)
    .withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED));
I was playing around with dblink and I wanted to try it, so I ran this simple query:
CREATE EXTENSION dblink;
SELECT *
FROM dblink(('dbname=genesis_admin')::text,
('SELECT * FROM user_account')::text);
Then, to my surprise:
[WARNING ] CREATE EXTENSION dblink
ERROR: extension "dblink" already exists
[WARNING ] SELECT * FROM dblink(('dbname=genesis_admin')::text, ('SELECT * FROM user_account')::text)
ERROR: function dblink(text, text) does not exist
LINE 1: SELECT * FROM dblink(('dbname=genesis_admin')::text, ('SELE...
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
How can it not exist if it already exists?
I encountered the same error, and the reason is that dblink gets installed (by default) in the public schema, and you have probably modified the search_path to a list that doesn't include public. The database does contain the function dblink(text, text), but it is unable to find it.
To get this to work, you need to schema-qualify the dblink function call:
SELECT * FROM public.dblink(xxx, yyy) AS t(...);
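Note that dblink(text, text) returns SETOF record, so a complete call also needs to sit in the FROM clause with a column definition list. A minimal sketch, assuming user_account has an integer id and a text username (adjust the list to the real table definition):

-- The AS t(...) column list is required for record-returning functions
SELECT *
FROM public.dblink('dbname=genesis_admin',
                   'SELECT id, username FROM user_account')
     AS t(id integer, username text);

Alternatively, put public back on the search_path (e.g. SET search_path = "$user", public;) so unqualified dblink calls resolve again.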
I can't figure out what to do to avoid this error whenever I try to run a migration using South in one of my Django projects:
ERROR:
Running migrations for askbot:
Migrating forwards to 0006_auto__del_field_tagplus_tag_ptr__add_field_tagplus_id__add_field_tagpl.
askbot:0006_auto__del_field_tagplus_tag_ptr__add_field_tagplus_id__add_field_tagpl
FATAL ERROR - The following SQL query failed: ALTER TABLE "tagplus" ADD COLUMN "id" serial NOT NULL PRIMARY KEY DEFAULT -1;
The error was: multiple default values specified for column "id" of table "tagplus"
Error in migration: askbot:0006_auto__del_field_tagplus_tag_ptr__add_field_tagplus_id__add_field_tagpl
DatabaseError: multiple default values specified for column "id" of table "tagplus"
MIGRATION FILE 0006 CODE (Partial):
class Migration(SchemaMigration):

    def forwards(self, orm):
        # Deleting field 'TagPlus.tag_ptr'
        db.delete_column(u'tagplus', u'tag_ptr_id')

        # Adding field 'TagPlus.id'
        db.add_column(u'tagplus', u'id',
                      self.gf('django.db.models.fields.AutoField')(default=0, primary_key=True),
                      keep_default=False)

        # Adding field 'TagPlus.name'
        db.add_column(u'tagplus', 'name',
                      self.gf('django.db.models.fields.CharField')(default=0, unique=True, max_length=255),
                      keep_default=False)
Thanks!
EDIT:
I guess the error has something to do with this choice I was prompted with while creating the migration file.
? The field 'TagPlus.tag_ptr' does not have a default specified, yet is NOT NULL.
? Since you are removing this field, you MUST specify a default
? value to use for existing rows. Would you like to:
? 1. Quit now.
? 2. Specify a one-off value to use for existing columns now
? 3. Disable the backwards migration by raising an exception; you can edit the migration to fix it later
? Please select a choice:
I selected 'Specify a one-off value' and set the value to 0.
You are saying keep_default=False anyway, so remove that default=0 from your code:
db.add_column(u'tagplus', u'id',
              self.gf('django.db.models.fields.AutoField')(primary_key=True),
              keep_default=False)
In SQL, it should be (remove the NOT NULL):
ALTER TABLE tagplus ADD COLUMN id serial PRIMARY KEY
See this document, which explains the reason behind this error: http://www.postgresql.org/docs/8.3/interactive/datatype-numeric.html#DATATYPE-SERIAL
There are two things to note:
Django will use 'id' as the default primary key if you haven't manually set one.
Postgres does not really have a 'serial' type. To resolve this issue, try to replace:
# Adding field 'TagPlus.id'
db.add_column(u'tagplus', u'id', self.gf('django.db.models.fields.AutoField')(default=0, primary_key=True), keep_default=False)
with:
# Adding field 'TagPlus.id'
if 'postgres' in db.backend_name.lower():
    # Create the sequence, add the column with the sequence as its default,
    # then align the sequence with the new values and tie its lifetime to the column
    db.execute("CREATE SEQUENCE tagplus_id_seq")
    db.execute("ALTER TABLE tagplus ADD COLUMN id integer PRIMARY KEY DEFAULT nextval('tagplus_id_seq'::regclass)")
    db.execute("SELECT setval('tagplus_id_seq', (SELECT MAX(id) FROM tagplus))")
    db.execute("ALTER SEQUENCE tagplus_id_seq OWNED BY tagplus.id")
else:
    db.add_column(u'tagplus', u'id',
                  self.gf('django.db.models.fields.AutoField')(default=0, primary_key=True),
                  keep_default=False)