RSpec and SQL: Index 0 is out of range

I am trying to run my code and I get the following error:
Failure/Error: Bookmark.new(id: res[0]['id'], title: res[0]['title'], url: res[0]['url'])
IndexError: Index 0 is out of range
I checked my database table and it all looks good; there is data there (bookmarks), so I'm not sure why this is occurring.

Apologies, it was an error on my part. My spec_helper file was referring to a different ENVIRONMENT than my code was. That's why it couldn't find the data it needed. It's at least a good lesson for next time.
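For anyone else hitting this, a minimal sketch of the kind of environment switch involved (the file, variable, and database names here are illustrative, not taken from my actual project):

# spec_helper.rb -- force the test environment before anything connects
ENV['ENVIRONMENT'] = 'test'

# connection helper -- pick the database from the same variable,
# so the specs and the application code read the same data
require 'pg'

def database_connection
  db_name = ENV['ENVIRONMENT'] == 'test' ? 'bookmark_manager_test' : 'bookmark_manager'
  PG.connect(dbname: db_name)
end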

Subtracting Values from Two Datasets in Single Table

I'm basically trying to subtract one time from another in SSRS to work out total working hours. However, with the solutions available here and elsewhere on the internet, I get the error "An unexpected error occurred while compiling expressions. Native compiler return value: '[BC30025] Property missing 'End Property'.'."
I tried using Lookup as well: =Lookup(Fields!Date.Value & Fields!Employee_Name.Value, Fields!Date.Value & Fields!Name_or_title.Value, Fields!Check_Out_Time.Value, "DataSet2") - (Fields!Check_In_Time.Value), which also throws an error.
Can someone guide me on how to get the time difference from two different datasets, please? I'm new to SSRS and SQL.
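For what it's worth, one hedged sketch of an expression for this (assuming Check_In_Time and the looked-up Check_Out_Time both convert cleanly to DateTime, and that minutes divided by 60 is an acceptable way to express hours) is to wrap the Lookup in DateDiff instead of subtracting the raw field values:

=DateDiff(DateInterval.Minute, CDate(Fields!Check_In_Time.Value), CDate(Lookup(Fields!Date.Value & Fields!Employee_Name.Value, Fields!Date.Value & Fields!Name_or_title.Value, Fields!Check_Out_Time.Value, "DataSet2"))) / 60

DateDiff returns whole minutes here, so dividing by 60 gives decimal hours.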

DBT 404 Not found: Dataset hello-data-pipeline:staging_benjamin was not found in location EU

When doing "DBT run" I get the following error
{{ config(materialized='table') }}
SELECT customer_id FROM `hello-data-pipeline.adwords.google_ads_campaign_stats`
I am making sure that my FROM location contains three parts:
A project (hello-data-pipeline)
A database (adwords)
A table (google_ads_campaign_stats)
But I get the following error:
15:41:51 | 2 of 3 START table model staging_benjamin.yo......................... [RUN]
15:41:51 | 2 of 3 ERROR creating table model staging_benjamin.yo................ [ERROR in 0.32s]
Runtime Error in model yo (models/yo.sql)
404 Not found: Dataset hello-data-pipeline:staging_benjamin was not found in location EU
NB: BigQuery does not show any error when running this query in the BigQuery editor.
NB 2: dbt does not show any error when running the SQL directly in the script editor ("run sql" command).
What am I doing wrong?
You may need to specify a location where your query will run. Queries that run in a specific location may only reference data in that location. You may choose auto-select to run the query in the location where the data resides.
Read more about Dataset locations
OK, I found it. I needed to specify the location in the profiles.yml file.
=> https://docs.getdbt.com/reference/warehouse-profiles/bigquery-profile/#dataset-locations
In dbt Cloud you will find it when setting up your project.
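For reference, a minimal profiles.yml sketch for a BigQuery target with the location set explicitly (the profile name and auth method are placeholders):

my_bigquery_profile:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: oauth
      project: hello-data-pipeline
      dataset: staging_benjamin
      location: EU   # must match the location of the datasets the models reference
      threads: 4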
I had a similar error to your 'hello-data-pipeline:staging_benjamin was not found in location EU'.
However, my issue was not that the dataset was in the incorrect location. It was that dbt was not targeting the schema I wanted.
e.g. in your example, it would be that hello-data-pipeline:staging_benjamin is actually not the target schema you initially wanted.
Adding this bit of code on top of my query solved the issue.
{{ config(schema='marketing') }}
select ...
cf. dbt's docs on custom schemas: https://docs.getdbt.com/docs/building-a-dbt-project/building-models/using-custom-schemas
Here is another doc that helped me understand why this was happening:
"dbt Cloud IDE: The values are defined by your connection and credentials. To check any of these values, head to your account (via your profile image in the top right hand corner), and select the project under "Credentials"."
https://docs.getdbt.com/reference/dbt-jinja-functions/target

Snowflake COPY INTO from JSON - ON_ERROR = CONTINUE - Weird Issue

I am trying to load a JSON file from the staging area (S3) into a stage table using the COPY INTO command.
Table:
create or replace TABLE stage_tableA (
RAW_JSON VARIANT NOT NULL
);
Copy Command:
copy into stage_tableA from @stgS3/filename_45.gz file_format = (format_name = 'file_json')
I got the error below when executing the above (sample provided):
SQL Error [100069] [22P02]: Error parsing JSON: document is too large, max size 16777216 bytes If you would like to continue loading when an error is encountered, use other values such as 'SKIP_FILE' or 'CONTINUE' for the ON_ERROR option. For more information on loading options, please run 'info loading_data' in a SQL client.
When I put "ON_ERROR=CONTINUE", records were partially loaded, i.e. up until the record that exceeded the max size, but no records after the error record were loaded.
Was "ON_ERROR=CONTINUE" supposed to skip only the record that exceeds the max size and load the records before and after it?
Yes, ON_ERROR=CONTINUE skips the offending line and continues loading the rest of the file.
To help us provide more insight, can you answer the following:
How many records are in your file?
How many got loaded?
At what line was the error first encountered?
You can find this information using the COPY_HISTORY() table function.
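For example, something along these lines (the table name is taken from the DDL above; widen the time window as needed):

select *
from table(information_schema.copy_history(
    table_name => 'STAGE_TABLEA',
    start_time => dateadd(hours, -1, current_timestamp())));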
Try setting the option strip_outer_array = true on the file format and attempt the load again.
The considerations for loading large semi-structured data are documented in the article below:
https://docs.snowflake.com/en/user-guide/semistructured-considerations.html
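As a sketch of what that could look like with your named file format (assuming file_json is a JSON-type format and the file really is an outer array of records rather than one giant object):

alter file format file_json set strip_outer_array = true;

copy into stage_tableA
from @stgS3/filename_45.gz
file_format = (format_name = 'file_json')
on_error = 'CONTINUE';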
I partially agree with Chris. The ON_ERROR=CONTINUE option only helps if there are in fact multiple JSON objects in the file. If it's one massive object, then with ON_ERROR=CONTINUE you would simply get neither an error nor the record loaded.
If you know your JSON payload is smaller than 16 MB, then definitely try strip_outer_array = true. Also, if your JSON has a lot of nulls ("null") as values, use STRIP_NULL_VALUES = TRUE, as this will slim down your payload as well. Hope that helps.
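As a rough illustration, an inline file format combining both options would look something like this (same caveat as above about the file actually containing an outer array):

copy into stage_tableA
from @stgS3/filename_45.gz
file_format = (type = 'JSON' strip_outer_array = true strip_null_values = true)
on_error = 'CONTINUE';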

NodeJS + Express "Error: -2006,\, Can not bind parameter(s)"

My company is working on converting from ColdFusion to NodeJS with Express, and I'm running into an error trying to update some data in SQLAnywhere.
I have one update function working with 5 pieces of data. I'm working on my second, with 23 data points, but I'm running into an error stating:
"Error: Code: -2006 Msg: Can not bind parameter(s)."
I can't find any information about this online, not even using the error code. Any help, or pointing me in the right direction, would be appreciated.
It turns out it was trying to save integers into "char" fields in the database. Odd that we never had this issue with ColdFusion, but wrapping the values in "String(…)" seemed to solve the issue.
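As an illustration of that fix (the field names here are made up; the point is coercing numeric values to strings before binding them to "char" columns):

// Hypothetical update payload -- the driver rejected raw integers bound to "char" columns,
// so convert any numeric values to strings before passing them as bind parameters.
const payload = { recordId: 42, quantity: 7, status: 'open' };
const params = Object.values(payload).map(v => (typeof v === 'number' ? String(v) : v));
// params is now ['42', '7', 'open'] and binds without the -2006 error.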

Apache Pig join error 2087 "Found index:0 in multiple LocalRearrange operators"

So I've got two relations:
pv_counts: pageview counts by a GUID and URL
ev_counts: events by the same GUID and URL
I'm trying to join them with joined_counts = JOIN ev_counts BY ev_site_guid, pv_counts BY pv_site_guid;, but I keep getting this error:
ERROR 2087: Unexpected problem during optimization. Found index:0 in multiple LocalRearrange operators.
I've tried using Pig 0.10 and Pig 0.11, but both return the same error.
I've Googled it, but I'm mostly just coming up with the Pig source code, not an explanation of what the error is or means. I've tried making sure I don't have any nulls or empty strings in the keys.
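The check I ran was roughly along these lines (a simplified sketch, using the field names from the schemas further down):

-- sanity check: look for rows with a null or empty join key
bad_pv = FILTER pv_counts BY pv_site_guid IS NULL OR pv_site_guid == '';
bad_ev = FILTER ev_counts BY ev_site_guid IS NULL OR ev_site_guid == '';
dump bad_pv;   -- both come back empty for me
dump bad_ev;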
Anyone have any idea what I'm doing wrong?
Here's the schema and some sample data:
pv_counts
describe pv_counts;
{group::pv_site_guid:chararray, group::pv_hostname:chararray, pv_count:long}
dump pv_counts;
(bSAw-mF-0r4Q-4acwqm_6r,example-url.com,10)
(bSAw-mF-0r4Q-4acwqm_6r,sports.example-url.com,10)
(bSAw-mF-0r4Q-4acwqm_6r,opinion.example-url.com,10)
(bSAw-mF-0r4Q-4acwqm_6r,newsinfo.example-url.com,10)
(bSAw-mF-0r4Q-4acwqm_6r,lifestyle.example-url.com,10)
.... many more pageviews than events ....
(dZiLDGjsGr3O3zacn9QLBk,example-url2.com.com,10)
(dZiLDGjsGr3O3zacn9QLBk,example-url3.com,10)
ev_counts
describe ev_counts;
{group::ev_site_guid:chararray, group::ee_hostname:chararray, ev1count:long, ev2count:long, ev3count:long, ev4count:long, ev5count:long}
dump ev_counts;
(bSAw-mF-0r4Q-4acwqm_6r,example-url.com,29,0,0,0,0)
(bSAw-mF-0r4Q-4acwqm_6r,sports.example-url.com,7,0,0,0,0)
(bSAw-mF-0r4Q-4acwqm_6r,lifestyle.example-url.com,2,0,0,0,0)
.... not as many events as pageviews ....
(dZiLDGjsGr3O3zacn9QLBk,example-url2.com.com,0,0,37,0,0)
(dZiLDGjsGr3O3zacn9QLBk,example-url3.com,0,0,1,0,0)
I can dump the relations just fine in Pig and Grunt.
When I add the following join statement, it gets to the very end and dies:
joined_counts = JOIN ev_counts BY ev_site_guid, pv_counts BY pv_site_guid;
dump joined_counts;
It'll throw the "ERROR 2087: Unexpected problem during optimization. Found index:0 in multiple LocalRearrange operators." error and an ugly stack trace. I'm relatively new to Pig, so I've never dug into its internals.
If anyone has any tips or things to try, I'd gladly try them. We're running on Cloudera's CDH3U3 (0.20.2).