Problem saving data in a database column with enum data type - sql

I have created a Laravel schema in which a column has the enum data type. By default it should save Agent in the column, which works fine. When a user inserts some data I try to change it to Agency, but when checking the DB it still remains Agent:
Kindly assist?
Database schema
$table->enum('agent_or_agency', array('Agent','Agency'))->default('Agent');
Saving data in the above column from the logic
$data = new Agents();
$data->agent_or_agency = 'Agency';
$data->save();
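A quick way to confirm what the column actually allows and what is really being written is to check it directly in MySQL. This is only a diagnostic sketch; the table name agents and the id column are assumptions based on the Agents model:
-- Show the enum definition and default that actually exist in the database
SHOW COLUMNS FROM agents LIKE 'agent_or_agency';
-- Inspect the most recently saved rows to see what value was written
SELECT id, agent_or_agency FROM agents ORDER BY id DESC LIMIT 5;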

Related

Copy Data from Blob to SQL via Azure data factory

I have two sample files in Blob Storage, sample1.csv and sample2.csv, as below:
(image: data sample)
The SQL table is named sample2, with columns Name, id, last name, amount.
I created an ADF data flow without a schema; it results as below:
(image: preview data)
Source settings: Allow schema drift is checked.
Sink settings: Auto mapping is turned on, Allow insert is checked, and Table action is None.
I have also tried defining a schema in the dataset; the result is the same.
Any help here?
My expected outcome is that rows from sample1 will have null inserted into the column "last name".
If I understand correctly, your expected outcome is that rows from sample1 insert null into the column "last name"; you only need to add a derived column to your sample1.csv source.
You could follow my steps:
I created a sample1.csv file in Blob Storage and a sample2 table in my SQL database:
Use a DerivedColumn transformation to create the new column last name with a null value:
expression: toString(null())
Sink settings:
Run the pipeline and check the data in table:
Hope this helps.
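For reference, here is a sketch of the sample2 sink table described in the question. The column names come from the question; the data types and lengths are assumptions. The last name column must be nullable for the derived-column approach above to land NULL values:
CREATE TABLE dbo.sample2
(
    [Name]      NVARCHAR(100) NULL,
    [id]        INT           NULL,
    [last name] NVARCHAR(100) NULL,
    [amount]    DECIMAL(18, 2) NULL
);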
You cannot mix schemas in the same source in the same data flow execution.
Schema Drift will handle changes to the schema on an execution-per-execution basis.
But if you are reading multiple different schemas from a folder, you will get non-deterministic results.
Instead, if you loop through those files in a pipeline ForEach one-by-one, data flow will be able to handle the evolving schema.

How to resolve special character issue in SQL Server data warehouse

I have to load data from the data lake into a SQL Server data warehouse using PolyBase tables. I have created the setup for the external tables. I have created the external tables, but when I run select * from the ext_t1 table I get ???? for a column in the external table.
Below is my external table script. I have traced the issue to special characters in the data. How can we handle the special characters while using only the varchar data type, not nvarchar? Can someone help me with this issue?
CREATE EXTERNAL FILE FORMAT [CSVFileFormat_Test] WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (
        FIELD_TERMINATOR = N',',
        STRING_DELIMITER = N'"',
        DATE_FORMAT = 'yyyy-MM-dd',
        FIRST_ROW = 2,
        USE_TYPE_DEFAULT = True,
        Encoding = 'UTF8'));

CREATE EXTERNAL TABLE [dbo].[EXT_TEST1]
( A VARCHAR(10), B VARCHAR(20) )
WITH (
    DATA_SOURCE = [Azure_Datalake],
    LOCATION = N'/A/Test_CSV/',
    FILE_FORMAT = [CSVFileFormat_Test],
    REJECT_TYPE = VALUE,
    REJECT_VALUE = 1);
Data (the special characters in the CSV for column A look like this):
ÐК Ð’ÐЗМ Завод
ÐК Ð’ÐЗМ ЗаÑтройщик
This is a data mismatch issue, and this read may help you.
External Table Considerations
Creating an external table is easy, but there are some nuances that need to be discussed.
External Tables are strongly typed. This means that each row of the data being ingested must satisfy the table schema definition. If a row does not match the schema definition, the row is rejected from the load.
The REJECT_TYPE and REJECT_VALUE options allow you to define how many rows or what percentage of the data must be present in the final table. During load, if the reject value is reached, the load fails. The most common cause of rejected rows is a schema definition mismatch. For example, if a column is incorrectly given the schema of int when the data in the file is a string, every row will fail to load.
Data Lake Storage Gen1 uses Role Based Access Control (RBAC) to control access to the data. This means that the Service Principal must have read permissions to the directories defined in the location parameter and to the children of the final directory and files. This enables PolyBase to authenticate and load that data.
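On the original varchar question: one option worth trying, on platforms that support UTF-8 collations (SQL Server 2019 and later), is to give the varchar columns a UTF-8 collation so the UTF8-encoded file no longer collapses to ????. This is only a sketch under that assumption, not a confirmed fix for this setup:
CREATE EXTERNAL TABLE [dbo].[EXT_TEST1_UTF8]
(
    A VARCHAR(10) COLLATE Latin1_General_100_CI_AS_SC_UTF8,  -- UTF-8 collation keeps the column as varchar
    B VARCHAR(20) COLLATE Latin1_General_100_CI_AS_SC_UTF8
)
WITH (
    DATA_SOURCE = [Azure_Datalake],
    LOCATION = N'/A/Test_CSV/',
    FILE_FORMAT = [CSVFileFormat_Test],
    REJECT_TYPE = VALUE,
    REJECT_VALUE = 1
);
If the instance does not support UTF-8 collations, nvarchar columns remain the usual way to hold this kind of data.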

Schema errors after restoring a table

Has anyone encountered schema issues after restoring a previously overwritten (deleted) table in BQ?
A few months ago I overwrote a table by mistake and restored it using the undelete (cp#time) function. The data was restored, but the schema came back corrupted to the point that the data is unusable. For example, I have a company ID column that was originally loaded into BQ as a string. The field is a set of numbers, and if I had let BQ auto-define the schema, that field would have been an integer. Since this was an ID, I manually loaded it as a string. After the undelete, any time I try to run a query involving this field I get an error:
Type mismatch for column 'Company_ID' in table 'log.428001'. Expected type 'int64', actual type 'string' in file :mdb=cloud-dataengine.
It seems like the underlying data is a string, as it always was, but the schema somehow expects an INT64. Support and I have tried all sorts of exports, casts, and copies to somehow get this data out in the hope of re-importing it correctly. So far nothing has worked.
Has anyone experienced something similar?
It seems that when you overwrote the table, the schema changed from string to integer. When you copy the deleted data out, it is copied with the integer schema, even though the field's data is still a string. You can try to correct it by overwriting your new table again, changing the schema back to string. Then restore/copy the again-deleted data using the time decorator. The final table will then have both data and schema of type string.
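A sketch of that idea in BigQuery Standard SQL, using time travel instead of the bq cp time decorator. The table names, column list, and interval are placeholders, and this only works while the wanted snapshot is still inside BigQuery's time-travel window:
-- 1. Recreate a table with the schema you intended (Company_ID as STRING)
CREATE OR REPLACE TABLE `log.428001_fixed`
(
  Company_ID STRING
  -- , remaining columns ...
);
-- 2. Copy the data back from a point in time before the schema was corrupted
INSERT INTO `log.428001_fixed` (Company_ID)
SELECT Company_ID
FROM `log.428001`
FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 6 DAY);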

Azure Machine Learning Write output to Azure SQL Database

I am using Azure Machine Learning to cluster data.
The input data is from an Azure SQL Database, and it works fine.
At the end of everything I want to write the output to a table in the same Azure SQL Database, but I get this error:
Error: Error 1000: AFx Library library exception:
Sql encountered an error: Login failed for user
Does anyone have any idea?
Thank you very much!
Please follow the instructions and examine the examples provided here to properly use the Export Data module to save data from ML to an Azure SQL Database.
How to Export Data to an Azure SQL Database
Add the Export Data module to your experiment. You can find this module in the Data Input and Output group in the experiment items list in Azure Machine Learning Studio.
Connect it to the module that produces the data that you want to export to Azure SQL DB.
For Data destination, select Azure SQL Database. This option supports Azure SQL Data Warehouse as well.
Set the following options specific to Azure SQL Database or Azure SQL Data Warehouse.
Database server name
Type the server name that is generated by Azure. Typically it has the form <generated_identifier>.database.windows.net.
Database name
Type the name of a database on the server you just specified. The database must already exist; Export Data cannot create it.
Server user account name
Type the user name of an account that has access permissions for the database.
Server user account password
Provide the password for the specified user account.
Comma-separated list of columns to be saved
Type the names of the columns in the experiment that you want to write to the database.
Data table name
Type the name of the table where data will be stored.
For Azure SQL Database, if the table does not exist, it will be created. For Azure SQL Data Warehouse, the table must already exist and have the correct schema, so be sure to create it in advance (a creation sketch follows these steps).
Comma-separated list of datatable columns
Type the names of the columns as you wish them to appear in the destination table. The columns should correspond in order with the column names that you list in Comma-separated list of columns to be saved.
If you are writing to Azure SQL Data Warehouse, the column names must match those already in the destination table schema.
Number of rows written per SQL Azure operation
Indicate how many rows should be written to the destination table in each batch. By default, the value is set to 50, which is the default batch size for Azure SQL Database. However, you should increase this value if you have a large number of rows to write.
TIP:
For Azure SQL Data Warehouse, we recommend that you set this value to 1. If you use a larger batch size, the size of the command string that is sent to Azure SQL Data Warehouse can exceed the allowed string length, causing an error.
If you don't want to write new results each time you run the experiment, select the Use cached results option. If there are no other changes to module parameters, the experiment will write the data the first time the module is run, and thereafter not perform writes.
However, a write will always be performed if any parameters have been changed in Export Data that would change the results.
Run the experiment.
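As the Data table name step above notes, Azure SQL Data Warehouse requires the destination table to exist before Export Data runs, so here is a sketch of creating it up front. The table name, column names, and types are placeholders for whatever the clustering output actually contains:
CREATE TABLE dbo.ClusterOutput
(
    CustomerId   INT,
    ClusterLabel INT
)
WITH
(
    DISTRIBUTION = ROUND_ROBIN,  -- simple default distribution for a small result table
    HEAP                         -- plain landing table, no clustered index needed
);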
Found the issue!
I needed to create a specific user with this SQL code:
CREATE USER AMLApplicationUser WITH PASSWORD = '************';
and then add the user to these roles on the database I want to write to:
ALTER ROLE db_datareader ADD MEMBER AMLApplicationUser;
ALTER ROLE db_datawriter ADD MEMBER AMLApplicationUser;
I guess the datawriter role alone would be enough, but I needed datareader too.
So in conclusion, it seems that the database admin account can be used to read data from AML, but not to write data.
Thank you for your help!

updating a database using a table adapter and data table

I am currently working on a VB program connected to an Access database. I have filled a data table using a query and data adapter. At a later stage in the program, I want to go through and make permanent changes to the database using the adapter and table. I tried this:
For Each row As DataRow In db.DBDT.Rows
row("fldsentda") = "Y"
row("flddasenddate") = Date.Today
Next row
'db.DBDT.AcceptChanges()
db.DBDA.Update(db.DBDT)
*db is a class file, dbda is the data adapter, and dbdt is the data table
but I realized these changes only affected the data table and not the actual database. How can I get it to affect only the database rows that are inside the data table filled by the query?
Update: I'm thinking my update function isn't written. I don't know if this should be a separate question or not, but how do I write the update function so that it only changes the fields in the database that have been changed in the data table?
Do not call
db.DBDT.AcceptChanges()
before
db.DBDA.Update(db.DBDT)
Doing so marks everything in the DataTable as not changed. See here, especially the Remarks section.
Just call the Update method, and AcceptChanges will be called automatically for you.