Serverless Invalid variable reference syntax for variable ([\s\S]+?) - serverless-framework

I am getting the following error when doing a serverless deploy
Invalid variable reference syntax for variable ([\s\S]+?). You can only reference env vars, options, & files. You can check our docs for more info.
I think it is caused by the following line in my serverless.yml (in the environment block of the provider block):
variableSyntax: "\\${{([\\s\\S]+?)}}"
Why is this invalid? Or is the error actually elsewhere, and if it is, how can I see what line in what file is causing the error?
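For reference, a hedged sketch of where variableSyntax classically belonged: directly under provider, not inside provider.environment. If the line sits under environment, it is just an environment variable whose value the resolver then tries to parse as a ${...} reference, which would produce exactly this error. Note too that, if I recall correctly, custom variableSyntax stopped being honored by the new variables resolver in Framework v2 and was removed in v3, so on a recent version the option itself may be the problem. Everything below besides the variableSyntax line is hypothetical:

service: my-service
provider:
  name: aws
  variableSyntax: "\\${{([\\s\\S]+?)}}"  # directly under provider, not under environment
  environment:
    STAGE: ${{opt:stage}}  # references would then use the custom {{ }} syntax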

Related

Why JCL encountered error IGYWCL expansion

I wrote a COBOL program to add a record to a file and ran it using JCL, but encountered an error. The message is:
"procedure IGYWCL was expanded using system library sys1.proclib"
How can I fix this error?

How to ignore import file from Global import in Robot framework

I have imported a Resource from the submodule, and then I need to override some data in the file, but it doesn't work because the import in the global file takes effect instead.
Example:
# submodule/imports/web_global.robot
*** Settings ***
Variables    ${CURDIR}/../test_data/products_data.yaml
${products_data}: global

# imports/web_local.robot
*** Settings ***
Resource     ${CURDIR}/../submodule/imports/web_global.robot
Variables    ${CURDIR}/../test_data/products_data.yaml
${products_data}: local

# testcases/web/test_file.robot
*** Settings ***
Resource    ${CURDIR}/../../imports/web_local.robot

*** Test Cases ***
Example
    Log To Console    ${products_data}
The result I get is global. How can I get local instead?
You might be confusing the YAML method of supplying variables with the "usual" ${} way. As given, your "${var}:" causes a syntax error because it is an illegal variable name (a variable name cannot end with a colon).
You do not show your products_data.yaml (please supply all code), but I am guessing that your ${products_data}: should be expressed as
VAR: value
PRODUCTS_DATA: global
It is a confusing question and I am struggling to grasp it.
Here is my attempt to recreate your question, though I could not reproduce the full sense of it.
*** Settings ***
Variables    ${CURDIR}/test_data/products_data.yaml

*** Variables ***
${products_data}    local

*** Test Cases ***
Example
    Log To Console    Value of products_data is ${products_data}\n
and the result:
==============================================================================
SOExample
==============================================================================
Example Value of products_data is local
Example | PASS |
I painstakingly re-created your entire, convoluted chain.
The semantic answer to your question: when there is a naming conflict, Robot takes the first definition and ignores your second definition of the like-named variable products_data.
You can read about this in the RF User Guide, in the section on variables.
Here is the relevant paragraph:
All variables from a variable file are available in the test data file that imports it. If several variable files are imported and they contain a variable with the same name, the one in the earliest imported file is taken into use. Additionally, variables created in Variable tables and set from the command line override variables from variable files.
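For instance, following that rule, a value set from the command line wins over both yaml files; a minimal sketch (the file path is taken from the question, the value is hypothetical):

robot --variable products_data:local testcases/web/test_file.robot

Since command-line variables override variable files, this can be an easier escape hatch than restructuring the imports.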
Simplest solution: in imports/web_local.robot, remove the Resource which you do not want:
*** Settings ***
# Resource    ${CURDIR}/../submodule/imports/web_global.robot
Variables    ${CURDIR}/test_data/products_data.yaml
Now your variable from imports/test_data/products_data.yaml is used.
Good luck untangling! I recommend re-thinking your structure.

Getting error while executing DACPAC file (using sqlpackage.exe)

I am getting the error below while executing a DACPAC file using SqlPackage.
The column [dbo].[Temp].[GMTOffset] on table [dbo].[Temp] must be added, but the column has no default value and does not allow NULL values. If the table contains data, the ALTER script will not work. To avoid this issue you must either: add a default value to the column, mark it as allowing NULL values, or enable the generation of smart-defaults as a deployment option.
PowerShell script -
& $using:SqlPackagePath /Action:Publish /tu:$using:DatabaseUsername /tp:$using:DatabasePassword `
    /tsn:$using:ServerInstance /tdn:"$_" /sf:$using:DacpacLocation /p:BlockOnPossibleDataLoss=False
I have set the 'Generate smart defaults, when applicable' option in the publish profile of the DB project and I execute the PowerShell script after compiling the project; however, I am still getting this error. Any pointers or help would be appreciated.
This error was resolved after specifying the option on the command line, as below, which @Peter also mentioned.
& $using:SqlPackagePath /Action:Publish /tu:$using:DatabaseUsername /tp:$using:DatabasePassword /tsn:$using:ServerInstance /tdn:"$_" /sf:$using:DacpacLocation /p:GenerateSmartDefaults=True /p:BlockOnPossibleDataLoss=False
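Worth noting: the script above never hands the publish profile to SqlPackage (there is no /Profile argument, short form /pr), which would explain why the setting made in the profile had no effect. If you prefer to keep the option in the profile, it lives there as a plain property; a sketch of the relevant fragment, with a hypothetical file name:

<!-- MyDatabase.publish.xml (hypothetical) -->
<PropertyGroup>
  <GenerateSmartDefaults>True</GenerateSmartDefaults>
  <BlockOnPossibleDataLoss>False</BlockOnPossibleDataLoss>
</PropertyGroup>

The command would then pass /pr:"MyDatabase.publish.xml" instead of repeating each /p: option.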

Azure DevOps SQL DB Deploy

I'm trying to deploy a dacpac with database references to 2 other databases using Azure DevOps. I'm unable to find the right syntax for passing additional arguments for the sqlcmd variables of those databases. I keep getting an 'Unrecognized command line argument' error every time I trigger a deployment. The current syntax I'm using is
/Variables:variable1 = "value1" /Variables:variable2 = "value2"
Follow the documentation at https://learn.microsoft.com/en-us/sql/tools/sqlpackage?view=sql-server-ver15 and use
SQLCMD Variables
The following table describes the format of the option that you can use to override the value of a SQL command (sqlcmd) variable used during a publish action. The values of variables specified on the command line override other values assigned to the variable (for example, in a publish profile).

Parameter: /Variables:{PropertyName}={Value} (short form: /v {PropertyName}={Value})
Description: Specifies a name value pair for an action-specific variable; {VariableName}={Value}. The DACPAC file contains the list of valid SQLCMD variables. An error results if a value is not provided for every variable.
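Applying that format to the syntax from the question, the likely culprit is the spaces around the equals signs: anything after a space is parsed as a separate, and hence unrecognized, argument. With the question's own placeholder names and values, the arguments should read:

/Variables:variable1="value1" /Variables:variable2="value2"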

pig error: Job in state DEFINE instead of RUNNING - Generic solution

A typical Pig error that occurs without much useful information is the following:
Job in state DEFINE instead of RUNNING
Often found in a line like this:
Caused by: java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
I have seen some examples of this error, but would like to have the generic solution for this problem.
So far, at each occasion where I have encountered this error, it was because Pig failed to load files. The error in the question is printed to the stderr log, and you will not find anything useful there.
However, if you were to look in the stdout log, you would expect to find the following:
Message: org.apache.pig.backend.executionengine.ExecException: ERROR 2118: Input Pattern hdfs://x.x.x.x:x/locationOnHDFS/* matches 0 files
Typically followed by:
Caused by: org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input Pattern hdfs://x.x.x.x:x/locationOnHDFS/* matches 0 files
At this point the most likely suspects are:
- There are no files in the specified folder (though the folder exists)
- The user that is running the script does not have the rights to access the relevant files
- All files are empty (not sure about this one)
Note that it is a commonly known difficulty that Pig will error out if you try to read from an empty directory (rather than just processing the alias with 0 lines).
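To illustrate, a minimal Pig sketch that runs into this when the glob matches nothing (host, port, and path are hypothetical):

-- fails with 'Input Pattern ... matches 0 files' if the directory is empty or missing
data = LOAD 'hdfs://namenode:8020/locationOnHDFS/*' USING PigStorage(',');
DUMP data;

A quick sanity check before running the script is to list the input location (for example with hdfs dfs -ls /locationOnHDFS) and confirm that the files are non-empty and readable by the user running the job.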