using multiple parameters in append query in Access 2010 - sql

I have been trying to get an append query to work, but I keep getting an error stating that 0 rows are being appended whenever I use more than one parameter in the query. This is for a
The table in question has one PK, which is a GUID (generating values with newid()), and one required field (Historical) which I am explicitly defining in the query.
INSERT INTO dbo_sales_quotas ( salesrep_id
, [year]
, territory_id
, sales_quota
, profit_quota
, product_super_group_uid
, product_super_group_desc
, class_9
, Historical
, sales_quotas_UID )
SELECT dbo_sales_quotas.salesrep_id
, dbo_sales_quotas.Year
, dbo_sales_quotas.territory_id
, dbo_sales_quotas.sales_quota
, dbo_sales_quotas.profit_quota
, dbo_sales_quotas.product_super_group_uid
, dbo_sales_quotas.product_super_group_desc
, dbo_sales_quotas.class_9
, dbo_sales_quotas.Historical
, dbo_sales_quotas.sales_quotas_UID
FROM dbo_sales_quotas
WHERE (((dbo_sales_quotas.salesrep_id)=[cboSalesRepID])
AND ((dbo_sales_quotas.Year)=[txtYear])
AND ((dbo_sales_quotas.territory_id)=[txtTerritoryID])
AND ((dbo_sales_quotas.sales_quota)=[txtSalesQuota])
AND ((dbo_sales_quotas.profit_quota)=[txtProfitQuota])
AND ((dbo_sales_quotas.product_super_group_uid)=[cboProdSuperGroup])
AND ((dbo_sales_quotas.product_super_group_desc)=[txtProductSuperGroupDesc])
AND ((dbo_sales_quotas.class_9)=[cboClass9])
AND ((dbo_sales_quotas.Historical)='No')
AND ((dbo_sales_quotas.sales_quotas_UID)='newid()'));
Even if I assign specific values, I still get a 0 rows error, except when I reduce the number of parameters to one, in which case it works perfectly regardless of which parameter I keep. I have verified that the parameters have the correct formats.
Can anyone tell me what I'm doing wrong?

Break out the SELECT part of your query and examine it separately. I'll suggest a simplified version which may be easier to study ...
SELECT
dsq.salesrep_id,
dsq.Year,
dsq.territory_id,
dsq.sales_quota,
dsq.profit_quota,
dsq.product_super_group_uid,
dsq.product_super_group_desc,
dsq.class_9,
dsq.Historical,
dsq.sales_quotas_UID
FROM dbo_sales_quotas AS dsq
WHERE
dsq.salesrep_id=[cboSalesRepID]
AND dsq.Year=[txtYear]
AND dsq.territory_id=[txtTerritoryID]
AND dsq.sales_quota=[txtSalesQuota]
AND dsq.profit_quota=[txtProfitQuota]
AND dsq.product_super_group_uid=[cboProdSuperGroup]
AND dsq.product_super_group_desc=[txtProductSuperGroupDesc]
AND dsq.class_9=[cboClass9]
AND dsq.Historical='No'
AND dsq.sales_quotas_UID='newid()';
I wonder about the last two conditions in the WHERE clause. Is the Historical field type bit instead of text? Does the string 'newid()' match sales_quotas_UID in any rows in the table?
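If those two conditions are the culprits, here is a minimal sketch of a fix, assuming Historical is a bit (Yes/No) field and that the UID condition should simply be dropped ('newid()' in quotes is a plain string literal, so it can never equal a stored GUID):
SELECT dsq.salesrep_id, dsq.Year, dsq.sales_quotas_UID
FROM dbo_sales_quotas AS dsq
WHERE dsq.salesrep_id=[cboSalesRepID]
AND dsq.Year=[txtYear]
AND dsq.Historical=False;
If that returns rows, add the remaining conditions back one at a time until the result set goes empty; the last condition added is the one filtering everything out.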


JSON stored in SUPER type fails to select camelcase element. Too long to be serialized. How can I select?

Summary:
I am working with a large JSON that is stored in a Redshift SUPER column.
Context
This issue is nearly identical to the question posted here for T-SQL. My schema:
chainId BIGINT
properties SUPER
Sample data:
{
"chainId": 5,
"$browser": "Chrome",
"token": "123x5"
}
I have this as a column in my table called properties.
Desired behavior
I want to be able to retrieve the value 5 from the chainId key and store it in a BIGINT column.
What I've tried
I have referenced the following aws docs:
https://docs.aws.amazon.com/redshift/latest/dg/JSON_EXTRACT_PATH_TEXT.html
https://docs.aws.amazon.com/redshift/latest/dg/r_SUPER_type.html
https://docs.aws.amazon.com/redshift/latest/dg/super-overview.html
I have tried the following which haven't worked for me:
SELECT
properties.chainId::varchar as test1
, properties.chainId as test2
, properties.chainid as test3
, properties."chainId" as test4
, properties."chainid" as test5
, json_extract_path_text(json_serialize(properties), 'chainId') serial_then_extract
, properties[0].chainId as testval1
, properties[0]."chainId" as testval2
, properties[0].chainid as testval3
, properties[0]."chainid" as testval4
, properties[1].chainId as testval5
, properties[1]."chainId" as testval6
FROM clean
Of these attempts, serial_then_extract returned a correct, non-null value, but not all of the values in my properties field are short enough to serialize, so this only works on some of the rows.
All others return null.
Referencing the following docs: https://docs.aws.amazon.com/redshift/latest/dg/query-super.html#unnest I have also attempted to iterate over the SUPER type using PartiQL:
SELECT ps.*
, p.chainId
from clean ps, ps.properties p
where 1=1
But this returns no rows.
I also tried the following:
select
properties
, properties.token
, properties."$os"
from base
And this returned rows with values. I know that there is a chainId value as I've checked the corresponding key and am working with sample data.
What am I missing? What else should I be trying?
Does anyone know if this has to do with the way that the JSON key is formatted? [camelcase]
You need to enable case-sensitive identifiers. By default, Redshift maps table and column names to lower case. If you have mixed-case identifiers, as in your SUPER field, you need to enable case sensitivity with
SET enable_case_sensitive_identifier TO true;
See: https://docs.aws.amazon.com/redshift/latest/dg/r_enable_case_sensitive_identifier.html
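With the setting enabled, the camelCase key can be addressed with a quoted path; a minimal sketch against the table from the question:
SET enable_case_sensitive_identifier TO true;
SELECT properties."chainId"::bigint AS chain_id
FROM clean;
Note that unquoted identifiers are still folded to lower case even with the setting on, so the double quotes around "chainId" are what preserve the camelCase.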

invalid identifier while parsing json

I am compiling a dbt base model. Currently I get the error below. Line 6 looks the same as the lines above it, so it might be a small syntax error that I could not spot.
15:40:22 Database Error in model base_datacenter_handling_unit (models/l10_staging_datacenter/base_unit.sql)
15:40:22 000904 (42000): SQL compilation error: error line 6 at position 3
15:40:22 invalid identifier 'VALUE'
15:40:22 compiled SQL at target/run/dbt/models/l10_staging_datacenter/base_unit.sql
This is what my file looks like:
SELECT
JSON_DATA:"key"::text AS KEY
, value:"description"::text AS DESCRIPTION
, value:"globalHandlingUnitId"::text AS GLOBAL_HANDLING_UNIT_ID
, value:"tareWeight"::NUMBER(38,0) AS TARTE_WEIGHT
, value:"tareWeight_unit"::text AS TARTE_WEIGHT_UNIT
, value:"width"::NUMBER(38,0) AS WIDTH
, value:"width_unit"::text AS WIDTH_UNIT
, value:"length"::NUMBER(38,0) AS LENGTH
, value:"validFrom"::TIMESTAMP_NTZ AS VALID_FROM_TS_UTC
, value:"validTo"::TIMESTAMP_NTZ AS VALID_TO_TS_UTC
, value:"lastModified"::TIMESTAMP_NTZ AS LAST_MODIFIED_TS_UTC
, value:"status"::text AS STATUS
, md5(KEY::STRING || MASTERCLIENT_ID) AS HANDLING_UNIT_KEY --different logic than in POSTGRESDWH!
,MASTERCLIENT_ID
,{{ extract_masterclientname_clause('META_FILENAME') }} AS MASTERCLIENT_NAME
,META_ROW_NUM
,META_FILENAME
,META_LOAD_TS_UTC
,META_FILE_TS_UTC
,CASE WHEN {{table_dedup_clause('HANDLING_UNIT_KEY')}}
THEN True
ELSE False
END AS IS_RECORD_CURRENT
FROM {{ source('INGEST_DATACENTER', 'HANDLING_UNIT') }} src
QUALIFY {{table_dedup_clause('HANDLING_UNIT_KEY')}}
It could also be because of the STRING cast md5(KEY::STRING || MASTERCLIENT_ID) I am using with md5, but I have another file based on the same pattern which does not throw an error:
SELECT
JSON_DATA:"issueId"::NUMBER(38,0) AS ISSUE_ID
, value:"slaName"::text AS SLA_NAME
, value:"slaTimeLeft"::NUMBER(38,0) AS SLA_TIME_USED_SECONDS
, md5(ISSUE_ID::STRING || SLA_NAME) AS ISSUE_SLA_ID
,MASTERCLIENT_ID
,{{ extract_masterclientname_clause('META_FILENAME') }} AS MASTERCLIENT_NAME
,META_ROW_NUM
,META_FILENAME
,META_LOAD_TS_UTC
,META_FILE_TS_UTC
,CASE WHEN {{table_dedup_clause('ISSUE_SLA_ID')}}
THEN True
ELSE False
END AS IS_RECORD_CURRENT
FROM {{ source('INGEST_EMS', 'ISSUES') }} src
, lateral flatten ( input => JSON_DATA:slas)
QUALIFY {{table_dedup_clause('ISSUE_SLA_ID')}}
I don't see any significant difference between the two.
value is one of the output columns of FLATTEN, which you have in your second SQL but not in your first.
This is where putting an alias on every table, and using it on EVERY column reference, pays off. You would see something like
SELECT t.json_data:"key",
f.value:"json_prop_name"
FROM table AS t;
and immediately ask yourself: where does f come from?
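A sketch of what the fix for the first model might look like: add the missing FLATTEN and alias every reference. The array path JSON_DATA:"handlingUnits" is an assumption here; substitute whatever key actually holds the array:
SELECT
src.JSON_DATA:"key"::text AS KEY
, f.value:"description"::text AS DESCRIPTION
, f.value:"globalHandlingUnitId"::text AS GLOBAL_HANDLING_UNIT_ID
FROM {{ source('INGEST_DATACENTER', 'HANDLING_UNIT') }} src
, LATERAL FLATTEN ( input => src.JSON_DATA:"handlingUnits" ) f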
The most likely reason is that the column is not named "tareWeight_unit". Snowflake creates column names in upper case regardless of how they are written, unless the original CREATE statement puts the column names in double quotes (e.g. "MyColumn"), in which case it creates them with the exact case specified. Use SHOW COLUMNS IN TABLE and check the actual column name.
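For example (the qualified table name here is a placeholder):
SHOW COLUMNS IN TABLE MY_DB.MY_SCHEMA.HANDLING_UNIT;
If the column comes back as TAREWEIGHT_UNIT you can reference it unquoted in any case; if it comes back as "tareWeight_unit" you must double-quote it with that exact case.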

"Circular reference caused by ..." error in Access SQL (but not in T-SQL)

I have the following SQL statement which returns the desired result in SQL Server 2012:
SELECT
S.ONOMA
, S.DIEY
, S.POLH
, S.TK
, S.IDIOT
, S.KODIKOS
, S.AFM
FROM
SYNERG AS S
INNER JOIN
(SELECT
G.AFM, MIN(KODIKOS) AS KODIKOS
FROM SYNERG AS G
WHERE LEN(ISNULL(AFM, '')) != 0
GROUP BY AFM) AS I ON S.KODIKOS = I.KODIKOS
ORDER BY
S.AFM
but when I run the same SQL statement in MS Access 2007 I get an error:
Circular reference caused by 'KODIKOS' in query definition's SELECT list.
Any help would be appreciated.
As explained in the link by HansUp:
The alias of a calculated field cannot be identical to any of the field names used to calculate the field.
This can be rather annoying (especially if it is a field that is returned by the query), but there is no way around it.
So you need to change the alias, e.g.:
SELECT
S.ONOMA
, S.DIEY
, S.POLH
, S.TK
, S.IDIOT
, S.KODIKOS
, S.AFM
FROM
SYNERG AS S
INNER JOIN
(SELECT
G.AFM, MIN(KODIKOS) AS MinKODIKOS
FROM SYNERG AS G
WHERE LEN(Nz(AFM, '')) <> 0
GROUP BY AFM) AS I ON S.KODIKOS = I.MinKODIKOS
ORDER BY
S.AFM
Note also that an IsNull() function exists in Access, but it has a different meaning: it takes one argument and returns a Boolean. The corresponding function in Access is Nz().
And (thanks @HansUp), the inequality operator in Access is <>, not !=. I always use <> in SQL Server too; no need to make things more complicated than necessary. :)
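To make the IsNull()/Nz() difference concrete, a quick hypothetical against the table above:
SELECT IsNull([AFM]) AS AfmIsMissing, Nz([AFM], '') AS AfmOrEmpty
FROM SYNERG;
IsNull([AFM]) only reports True/False, so it cannot stand in for T-SQL's two-argument ISNULL(); Nz([AFM], '') is the drop-in replacement.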

Append Query Trouble

I am having trouble with the final piece of my append query. I have the records generating just as I want, except that they should not trigger until the EventDate expression is <=Date(). I get an unmatched error when I place <=Date() in the criteria field of the query builder. I tried it with DateSerial and a few other variations. I'm sure it has to do with EventDate being an expression and not a hard date. Any assistance would be appreciated.
INSERT INTO SchedulingLog (
UserID
, LogDate
, EventDate
, Category
, CatDetail
, [Value]
)
SELECT Roster.UserID
, Date() AS LogDate
, DateSerial(Year(Date()),Month([WM DOH]),Day([WM DOH])) AS EventDate
, SchedulingLog.Category
, SchedulingLog.CatDetail
, Max(tblAccrual!WeeksAccrual*Roster!Schedule) AS [Value]
FROM tblAccrual
, [Schedule Type]
, Category
INNER JOIN CatDetail
ON Category.CategoryID = CatDetail.CategoryID
, SchedulingLog
INNER JOIN Roster
ON SchedulingLog.UserID = Roster.UserID
WHERE (((tblAccrual.Years)<=Round((Date()-[wm doh])/365,2)))
GROUP BY Roster.UserID
, Date()
, DateSerial(Year(Date()),Month([WM DOH]),Day([WM DOH]))
, SchedulingLog.Category
, SchedulingLog.CatDetail
HAVING (((SchedulingLog.Category) Like "Vac*")
AND ((SchedulingLog.CatDetail) Like "Ann*"));
I believe the issue is not explicitly converting the user-input date with CDate. I suspect it's fine in most of the query because the [WM DOH] parameter is passed directly to functions which will convert it to a date; the WHERE clause, however, needs an explicit conversion.
The following generates the error "This expression is typed incorrectly, or it is too complex to be evaluated. For example, a numeric expression may contain too many complicated elements. Try simplifying the expression by assigning parts of the expression to variables."
SELECT Date()-[userinput] AS something;
Whereas the following code does not:
SELECT Date()-CDate([userinput]) AS something;
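Applied to the query above, only the WHERE clause needs to change; a sketch:
WHERE (((tblAccrual.Years)<=Round((Date()-CDate([WM DOH]))/365,2)))
The DateSerial(...) expressions can stay as they are, since Month() and Day() already coerce [WM DOH] to a date.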

Getting a table (records) to update properly using the MERGE statement

Good morning everyone!
Below is a piece of code I stitched together: I used a CTE to grab the records (data) from a linked table and then convert strings to dates, then used the MERGE statement to get the data into a local table.
I am having a problem with the column (field) LAST_RACE_DATE. This field is nullable and not required, but it does not update with my current setup. What I am trying to accomplish is for this field to populate when data is entered, but also to keep updating afterwards, meaning it should update even when the new value is NULL.
So if the field has a specific date and a new date is entered in the remote database, this field should update as well; and if the data is deleted in the back end, the local table's value for this field should be removed too.
WITH CTE AS(
SELECT MEMBER_ID
,[MEMBER_DATE] = MAX(CONVERT(DATE, MEMBER_DATE))
,RACE_DATE = MAX(CONVERT(DATE, RACE_DATE))
,LAST_RACE_DATE = MAX(CONVERT(DATE, LAST_RACE_DATE))
FROM [EXAMPLE].[dbo].[LINKED_MEMBER_DATA]
WHERE (MEMBER_DATE IS NOT NULL) AND (ISDATE(MEMBER_DATE)<> 0) AND (RACE_DATE IS NOT NULL) AND (ISDATE(RACE_DATE)<> 0)
AND (LAST_RACE_DATE IS NULL) OR (ISDATE(LAST_RACE_DATE)<> 0)
GROUP BY MEMBER_ID)
MERGE dbo.LINKED_MEMBER_DATA AS Target
USING (SELECT
MEMBER_ID, MEMBER_DATE, RACE_DATE, LAST_RACE_DATE
FROM CTE
GROUP BY MEMBER_ID, RACE_DATE, LAST_RACE_DATE)AS SOURCE ON (Target.MEMBER_ID = SOURCE.MEMBER_ID)
WHEN MATCHED AND
(Target.MEMBER_DATE) <> (SOURCE.MEMBER_DATE)
OR (Target.RACE_DATE) <> (SOURCE.RACE_DATE)
OR ISNULL(TARGET.LAST_RACE_DATE , Target.LAST_RACE_DATE) <> ISNULL(SOURCE.LAST_RACE_DATE, SOURCE.LAST_RACE_DATE)
THEN UPDATE SET
Target.MEMBER_DATE = SOURCE.MEMBER_DATE
,Target.RACE_DATE = SOURCE.RACE_DATE
,Target.LAST_RACE_DATE = SOURCE.LAST_RACE_DATE
WHEN NOT MATCHED BY TARGET THEN
INSERT(
MEMBER_ID, MEMBER_DATE, RACE_DATE, LAST_RACE_DATE)
VALUES (Source.MEMBER_ID, Source.MEMBER_DATE, Source.RACE_DATE, Source.LAST_RACE_DATE);
I also tried this:
ISNULL(Target.LAST_RACE_DATE,'N/A') <> ISNULL(SOURCE.LAST_RACE_DATE,'N/A')
But it generates the error below for the date conversion:
Conversion failed when converting date and/or time from character string.
Thanks a Million!!
Your current statement is failing because the ISNULLs that you have don't do anything (ISNULL(x, x) is just x, and if x is NULL the expression still evaluates to NULL), and NULL values never compare equal or unequal. Your second attempt doesn't work because ISNULL requires the data types of the two values to be compatible, and 'N/A' cannot be converted to a date; you could instead try e.g. ISNULL(Target.LAST_RACE_DATE, '1970-01-01') <> ISNULL(Source.LAST_RACE_DATE, '1970-01-01').
Another option is to simply enumerate the different cases, e.g. (Source.LAST_RACE_DATE IS NULL AND Target.LAST_RACE_DATE IS NOT NULL) OR (Source.LAST_RACE_DATE IS NOT NULL AND Target.LAST_RACE_DATE IS NULL) OR (Source.LAST_RACE_DATE <> Target.LAST_RACE_DATE). Enumerating the situations makes the code more verbose, but it can result in better performance (whether it is measurably better depends on how much data you are processing).
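Put together, the enumerated-cases version of the WHEN MATCHED branch might look like this sketch (column names as in the question; note the outer parentheses, which the original statement was missing, so its mix of AND and OR did not group the way it reads):
WHEN MATCHED AND
(
Target.MEMBER_DATE <> Source.MEMBER_DATE
OR Target.RACE_DATE <> Source.RACE_DATE
OR (Source.LAST_RACE_DATE IS NULL AND Target.LAST_RACE_DATE IS NOT NULL)
OR (Source.LAST_RACE_DATE IS NOT NULL AND Target.LAST_RACE_DATE IS NULL)
OR Source.LAST_RACE_DATE <> Target.LAST_RACE_DATE
)
THEN UPDATE SET
Target.MEMBER_DATE = Source.MEMBER_DATE
, Target.RACE_DATE = Source.RACE_DATE
, Target.LAST_RACE_DATE = Source.LAST_RACE_DATE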