Facing an error while accessing Cassandra data through Solr in DSE

When I issue this query against a standalone Solr instance, it works. But when I access Cassandra data through a Solr query (I am using DSE), it returns nothing and gives an error related to UserCacheField. So how do I enable UserCacheField in a Solr query?
Update
My query is:
SELECT * FROM trackfleet_db.location WHERE
solr_query='{"facet":{"pivot":"date,latitude,longitude"},"q":"*:*"}';
And I am getting the following error:
InvalidRequest: Error from server: code=2200 [Invalid query]
message="Field cache is disabled, set the field=date to be docValues=true
and reindex. Or if the field cache will not exceed the heap usage,
then place useFieldCache=true in the request parameters."

The best approach would be to enable docValues on the given field (date) and reindex the data.
But it looks like you have this field defined with the date type, which (per the documentation) doesn't support docValues, so you may need to change the type of this field to timestamp (I'm not sure that you can use a copy field with a different type).
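For illustration, a rough sketch of what that could look like in a DSE-managed schema.xml (the type and attribute choices here are assumptions, not your actual schema):
<fieldType name="TrieDateField" class="solr.TrieDateField" docValues="true"/>
<field name="date" type="TrieDateField" indexed="true" stored="true" docValues="true"/>
After changing the schema you would reload the core and reindex, for example (keyspace and table taken from your query):
dsetool reload_core trackfleet_db.location reindex=true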

Related

Datastudio BigQuery connector: The query returned an error

When creating a BigQuery data connector for Google Data Studio, my query works until I attempt to parameterize some fields. As soon as I add parameters, I get the unhelpful and unspecific error:
The query returned an error.
Error ID: xyz
How can I figure out what the underlying issue is that is causing this problem?
1. Check BigQuery Logs in Cloud Logging
If there is an error executing a query in BigQuery, the underlying cause will likely be visible in Cloud Logging. Execute this query there to surface those errors and hopefully get insight into the underlying problem:
resource.type="bigquery_resource"
severity=ERROR
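If you prefer the command line, the same filter can be run with the gcloud CLI (the project ID and limit below are placeholders):
gcloud logging read 'resource.type="bigquery_resource" severity=ERROR' --project=my-project --limit=20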
It's possible these logs will show that the query is failing because the format of certain data is invalid; if so, it's likely that the lack of default values for the parameters is preventing the BigQuery query from succeeding. In that case:
2. Give Parameters Default Values
The connector passes the query to BigQuery, which executes it. In order for this to work correctly, the parameters need to have some values. Provide them in the form of parameter default values that will result in a valid query.
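For illustration only (the table, field, and parameter names below are made up), a custom query referencing a parameter looks like this; the default value you configure for @min_population is what keeps the query valid before the report supplies a value:
SELECT name, population
FROM `my_project.my_dataset.cities`
WHERE population > @min_population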

StreamSets data not landing into table created on postgres db

I am using StreamSets to build a pipeline to land data from a table in a SQL Server db into a table in a Postgres db.
JDBC Query Consumer --> Timestamp --> JDBC Producer
The pipeline passes validation checks and runs successfully on preview mode. However, the problem is that the data does not land into the postgres table.
I have checked the connection string and credentials and these should be right.
This is the error it throws in the logs.
No parameters found for record with YY SELECT 'XX' AS fieldA, YY AS
fieldB, ZZ AS fieldC::rowCount:#; skipping
How can I resolve this issue?
'No parameters found' means that there were no fields on the record that could be mapped to database columns. Check your field-to-column mappings. If they look correct, it might be a problem with case. Try enabling Enclose Object Names on the JDBC tab.
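On the case point: Postgres folds unquoted identifiers to lower case, so a column created with mixed case is only reachable when quoted, which is what Enclose Object Names does for you. A small illustration (the table and column names are made up):
CREATE TABLE demo ("FieldA" text);
INSERT INTO demo (fieldA) VALUES ('x');    -- fails: column "fielda" does not exist
INSERT INTO demo ("FieldA") VALUES ('x');  -- succeeds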

Google BigQuery: Error: Invalid schema update. Field has changed mode from REQUIRED to NULLABLE

I'm trying to append the results of a query to another table.
It doesn't work and sends out the following error:
Error: Invalid schema update. Field X has changed mode from REQUIRED to NULLABLE.
The field X is indeed REQUIRED, but I'm not trying to insert any NULL values into that specific column (the whole table doesn't have a single NULL value).
This looks like a bug to me. Anyone knows a way to work around this issue?
The issue was fixed by switching from Legacy SQL to Standard SQL.
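For reference, appending query results to an existing table with Standard SQL from the bq CLI looks roughly like this (the project, dataset, and table names are placeholders):
bq query --use_legacy_sql=false \
  --destination_table=my_dataset.target_table \
  --append_table \
  'SELECT x FROM `my_project.my_dataset.source_table`'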

Deleting rows in BigQuery fails with "Invalid schema update"

I'm trying to delete some rows from a BigQuery table (using standard SQL dialect):
DELETE FROM ocds.releases
WHERE
ocid LIKE 'ocds-b5fd17-%'
However, I get the following error:
Query Failed
Error: Invalid schema update. Field packageInfo has changed mode from REQUIRED to NULLABLE
Job ID: ocds-172716:bquijob_2f60927_15d13c97149
It seems as though BigQuery doesn't like deleting rows with a REQUIRED column. Is there any way around this?
It has been a known limitation that BigQuery DML doesn't work with tables with required fields (see https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language#known_issues).
We are in the process of removing this limitation. We whitelisted your project today. Please try running your query again in the same project. Let us know if the problem is still there, or if you want to have more projects whitelisted.

DB2 LOAD Modifier - GeneratedOverride or IdentityOverride

I am performing a DB2 load, and I am struggling to understand the impact of using GeneratedOverride over IdentityOverride. When I run the following command:
db2 load from tab123.ixf of ixf replace into application.table_abc
All rows are rejected, with the following error being the culprit:
SQL3550W The field value in row row-number and column column-number is not NULL, but the target column has been defined as GENERATED ALWAYS.
So to try and step around this, I executed:
db2 load from tab123.ixf of ixf modified by identityoverride replace into application.table_abc
But this immediately returned this error:
SQL3526N The modifier clause "IDENTITY OVERRIDE" is inconsistent with the current load command. Reason code: "3".
From checking the reason code I see that the issue is "Generated or identity related file type modifiers have been specified but the target table contains no such columns." .. but the SQL3550W error seems to imply that the columns are generated always!
The only way I can get these rows to commit to the table is to run..
db2 load from tab123.ixf of ixf modified by generatedoverride replace into application.table_abc
Can anyone enlighten me as to why I am receiving the SQL3526N error, or what the implications of running generatedoverride are?
Thanks for sticking with me..
Generated columns are not necessarily identity columns; apparently that's the case in your situation. Check the CREATE TABLE syntax to see what other ways there are to generate column values.
By using the GENERATEDOVERRIDE option during the load you are obviously replacing (overriding) the generated values with those from the input file.
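To illustrate the distinction (the table and column names below are made up), a column can be GENERATED ALWAYS from an expression without being an identity column, which is why identityoverride does not apply to such a table while generatedoverride does:
CREATE TABLE demo_table (
  id        INTEGER      GENERATED ALWAYS AS IDENTITY,     -- identity column
  amount    DECIMAL(9,2) NOT NULL,
  amount_x2 DECIMAL(9,2) GENERATED ALWAYS AS (amount * 2)  -- generated, but not an identity column
);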