On adding a row in an Interactive Grid (APEX 5), the sequence doesn't work and an error says the field can't be empty

I imported an application from workspace 1 to workspace 2 together with its table data and definitions, but the problem is that, on adding a record in the Interactive Grid of the imported application in workspace 2, it displays an error that the PK can't be NULL. I leave the PK field empty because I expect the sequence to populate it, the way it does in the same application in workspace 1.
Why does the same imported application behave differently, in the sense that the sequence doesn't populate the value by itself?
What should be done to make the sequence work in the imported application in workspace 2?

A sequence isn't just "known" to the Interactive Grid; you have to specify which sequence to use. In your Interactive Grid, go to the column definition. Under the "Default" heading, change the Type to "Sequence" and put the name of your sequence in the Sequence field:
If the schema in workspace 1 is different from the schema in workspace 2, there could be a whole lot of reasons it isn't behaving the same. Check for differences between the schemas. Does the sequence exist in your new schema? If you are using the old schema, is it as simple as needing to prefix your sequence with the schema name? There are lots of possible reasons; rule out the simple stuff first.
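As a quick way to rule out the simple stuff, you can check whether the sequence exists in (or is visible to) the workspace 2 parsing schema. A minimal sketch using the python-oracledb driver; the connection details, schema and sequence names are placeholders, not taken from the question:

import oracledb  # assumes the python-oracledb driver is installed

# Placeholder connection details for the workspace 2 parsing schema
conn = oracledb.connect(user="WS2_SCHEMA", password="***", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Does the sequence exist in, or is it visible to, this schema?
cur.execute(
    "SELECT sequence_owner, sequence_name, last_number "
    "FROM all_sequences WHERE sequence_name = :name",
    name="MY_TABLE_SEQ",  # placeholder sequence name
)
for owner, name, last_number in cur:
    print(owner, name, last_number)

If the query returns nothing, the sequence is missing or not visible, and that alone explains the NULL PK error.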

Related

How to update existing data in Apache Druid

The problem was to add a new field to an existing datasource and fill it with some default value.
I have tried to do so via this article.
But the actual result is that the new column was added, but it is filled with null values.
Where was I wrong, and can I fix it in the same way?
It would be hard to tell without looking at how you have added the new column in your ingestion spec.
I would suggest using the Druid unified console data loader UI > parse your input data > define the additional column under the Transform section. The advantage of the data loader UI is that you can preview the transformed result immediately, and once the workflow is completed you will get an ingestion spec and can submit it from there.
E.g. a transformation defined in the data loader UI (screenshot):
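For reference, a transform defined in the data loader UI ends up in the transformSpec section of the generated ingestion spec. A rough sketch of that fragment, built here as a Python dict; the column name and default value are made up for illustration:

import json

# Hypothetical fragment of a Druid ingestion spec: an expression transform
# that fills a new column with a constant default value.
transform_spec = {
    "transformSpec": {
        "transforms": [
            {
                "type": "expression",
                "name": "new_field",             # the column being added (placeholder name)
                "expression": "'some_default'",  # Druid expression returning the default value
            }
        ]
    }
}

print(json.dumps(transform_spec, indent=2))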
From the documentation here: https://druid.apache.org/docs/latest/design/segments.html#different-schemas-among-segments
Maybe the result is related to the fact that existing segments don't have the new field and therefore show null.

SSAS Tabular Model - Add column to existing table

I am using VS2019 for SSAS Tabular Model development. I have imported a table from a CSV. The source CSV has undergone a change (a new column has been added). When I process my table in VS2019, it gets processed successfully; however, I am unable to see the new column introduced in the source CSV. I went to Table Properties and did a Refresh Preview but was not able to see the new column. I closed and restarted the solution and re-processed the table, but no luck! I remember that in VS2017 we used to add the column by going into Table Properties and selecting the new column, but things seem to be different in VS2019. Any help would be appreciated.
I'm assuming you used Get Data / Power Query to import the CSV. Unfortunately, this generates a Power Query Csv.Document function call that includes the number of columns at the time the query was generated. That parameter isn't exposed through the usual Power Query UI.
If you use the Advanced Editor or turn on the Formula Bar (View menu), you will see that a parameter like Columns=10 was generated, usually in your Source step.
It currently seems safe to delete that parameter by editing the code; it will then always pull back all the columns present. Or, if you prefer, you can edit the number of columns, as described in this blog post:
https://prathy.com/2016/08/how-to-add-extra-columns-to-an-existing-power-bi-file-which-using-csv-data-source/

How to have Google BigQuery properly detect header names?

I successfully created a new table using the data I uploaded onto Google Cloud Platform's Storage, but the problem is that the header field names are always wrong when I use the Automatically Detect setting and set "Header rows to skip" to 1... I just get generic names such as "string_field_0".
I know I can manually add field names under Schema; however, that is not feasible for tables that have many fields. Is there a way to fix the header names? It doesn't seem like it should be a big deal... Pandas does this automatically all the time.
Thanks!
CSV file in Excel (screenshot):
The problem is that you only have String types in your file, so BigQuery can't differentiate between the header and the actual valid rows. If you had, say, another column with something other than a String, e.g. an Integer, then it would detect the column names. For example:
column1,column2,column3
foo,bar,1
cat,dog,2
fizz,buzz,3
This correctly loads with the right column names, because there is something other than just Strings in the data:
So, either you need to have something other than just Strings, or you need to explicitly specify the schema yourself.
Hint: you don't have to use the UI and click a load of buttons to define the schema. You can do it programmatically using the API or the gcloud CLI tool.
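For example, a minimal sketch with the google-cloud-bigquery Python client, supplying the schema explicitly and skipping the header row; the project, dataset, bucket and column names are placeholders:

from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.my_dataset.my_table"  # placeholder destination table

# Explicit schema plus skip_leading_rows=1, so the header row is never loaded as data
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    schema=[
        bigquery.SchemaField("column1", "STRING"),
        bigquery.SchemaField("column2", "STRING"),
        bigquery.SchemaField("column3", "INTEGER"),
    ],
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/data.csv", table_id, job_config=job_config  # placeholder URI
)
load_job.result()  # wait for the load to complete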
Since it was not mentioned here, what helped me was setting "Header rows to skip" to 1. You can find it under Advanced Options:
My data came from a Google Sheet and it already had integer values in some columns.
The same issue occurs with Google Sheets as well. Right, the cause is having all-string data in the sheet, but the workaround is simple with Google Sheets: just add an integer column, as described here.

Get list of columns of source flat file in SSIS

We get weekly data files (flat files) from our vendor to import into SQL, and at times the column names change or new columns are added.
What we have currently is an SSIS package that imports the columns that have been defined. Since we've assigned the mapping, SSIS only throws an error when a column is absent. However, when a new column is added (apart from the existing ones), it doesn't get imported at all, as it is not in the mapping. This is a concern for us.
What we'd like is to get the list of all the columns present in the flat file so that we can check whether any new columns are present before we import the file.
I am relatively new to SSIS, so detailed help would be much appreciated.
Thanks!
Exactly how to code this will depend on the rules for the flat-file layout, but I would approach it by writing a Script Task that reads the flat file using the file system object and a StreamReader object, and looks at the columns, which are hopefully named in the first line of the file.
However, about all you can do if the columns have changed is send an alert. I know of no way to dynamically change your data flow task to accommodate new columns; it will have to be edited to handle them. And frankly, if all you're going to do is send an alert, you might as well just use the error handler to do it and save yourself the trouble of pre-reading the column list.
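To illustrate the idea, here is the header check such a Script Task would perform, sketched in Python for readability rather than the C# you would actually write inside SSIS; the file path and expected column names are made up:

# Sketch of the header check; in SSIS this logic would live in a Script Task.
EXPECTED_COLUMNS = ["CustomerId", "TradeDate", "Amount"]  # placeholder column names

with open(r"\\fileshare\vendor\weekly_extract.csv", "r", encoding="utf-8") as f:  # placeholder path
    header = f.readline().strip()

actual_columns = [c.strip() for c in header.split(",")]

new_columns = [c for c in actual_columns if c not in EXPECTED_COLUMNS]
missing_columns = [c for c in EXPECTED_COLUMNS if c not in actual_columns]

if new_columns or missing_columns:
    # In the package you would raise an error or fire the alert email here
    print("Header changed. New:", new_columns, "Missing:", missing_columns)
else:
    print("Header matches the expected layout.")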
I agree with the answer provided by @TabAlleman. SSIS can't natively handle dynamic columns (and neither can your SQL destination).
May I propose an alternative? You can detect a change in headers without using a C# Script Task. One way to do this would be to create a flat-file connection that reads the entire row as a single column. Use a Conditional Split to discard anything other than the header row. Save that row to a Recordset object. Any change? Send email.
The "Get Header Row" DataFlow would look like this. Row Number if needed.
The Control Flow level would look like this. Use a ForEach ADO RecordSet object to assign the header row value to an SSIS variable CurrentHeader..
Above, the precedence constraints (fx icons) of
@[ExpectedHeader] == @[CurrentHeader]
@[ExpectedHeader] != @[CurrentHeader]
determine whether you load data or send email.
Hope this helps!
I have worked for banking clients, and for banks, randomly adding columns to a database is not possible due to federal requirements and rules. That said, I take it you are not a federally regulated business. So here are some steps.
This is not a code issue but more a matter of soft skills and working with other teams (yours and your vendor's).
Steps you can take are:
(1) Agree on a solid column structure that you always require, because for newer columns the older data rows will carry NULL.
(2) If a new column is going to be sent by the vendor, you or your team needs to make the DDL/DML changes to the table where the data will be inserted, of course with the correct data type.
(3) Document this change in the data dictionary, as over time you or another team member will do analysis on this data and will want to know what each attribute or column is used for.
(4) Long term, you do not want to keep changing the table structure monthly because one of your many vendors decided to change the format in which they send you data. Some clients push back very aggressively, others not so much.
If a third-party tool is an option for you, check out CozyRoc's Data Flow Task Plus. It handles variable columns in sources.
SSIS cannot make the columns dynamic.
One thing I always do is use a Script Task to read the first and last lines of a file.
If it is not the expected list of CSV columns, I mark the file as errored and continue/fail as required.
Headers are obviously important, but so are footers: files can, through any unknown issue, be partially built, so asking for the header to also be placed at the end of the file gives you a double check.
I also do not know if SSIS can do this dynamically, but it never ceases to amaze me how people add or change the order of columns and assume things will still work.
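A rough sketch of that first-line/last-line check, again in Python for readability (in SSIS it would be a Script Task); the path, header and trailer convention are assumptions:

# Sketch: verify both the header (first line) and the trailer (last line) of the extract.
EXPECTED_HEADER = "CustomerId,TradeDate,Amount"  # placeholder
EXPECTED_TRAILER_PREFIX = "EOF"                  # placeholder trailer convention

with open(r"C:\data\vendor_extract.csv", "r", encoding="utf-8") as f:  # placeholder path
    lines = f.read().splitlines()

file_ok = (
    len(lines) >= 2
    and lines[0] == EXPECTED_HEADER
    and lines[-1].startswith(EXPECTED_TRAILER_PREFIX)
)

if not file_ok:
    # Mark the file as errored and continue or fail, as required
    print("File failed the header/footer check")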
1- SSIS does not provide dynamic source and destination mapping, but some third-party components, such as Data Flow Task Plus, support this feature.
2- We can achieve this using an SSIS Script Task.
3- If the header is correct, proceed with the migration; otherwise fail the package before the DFT executes.
4- Read the header line using the Script Task and store it in an array or list object.
5- Then compare those array values to user-defined variables, declared earlier, containing the expected column names as default values.
6- If the values match exactly, proceed; otherwise fail the package.

@DBColumn in Lotus Notes

I've been tasked with learning Lotus Domino Designer - not sure what I did in a previous life, but it must have been pretty bad... - and was wondering how to do a lookup on a database to get some values for selections. As this information could potentially be used in a lot of the applications, I'd prefer it only to be in the one place.
I gather I can use @DBColumn, but what happens if an entry in that lookup changes? If the unique value of the lookup is the text itself, then the relationship would be broken, wouldn't it? Is there any way of mimicking the idea of relational lookups?
I'm assuming I'm looking at Lotus development from the wrong angle, as this seems to be a real limitation of lookups.
I haven't found any decent learning material on the interwebs, so would appreciate any help.
Ta
You would want to store a unique ID along with the textual value in the source database (not unlike what you would do in an RDBMS). Then, only store that ID in any referencing documents, and use a computed-for-display field to lookup the display value. (There is a performance consideration here - and you could "de-normalize" the data and store the ID and text value in the referencing documents, and do some asynchronous work to keep the values in sync - eg: using a scheduled agent that runs every night or every week).
If DB1 has the key values and DB2 has the documents which will reference those values, then in the form in DB2 you would still do an @DbColumn to look up your value list. In the lookup view in DB1, concatenate the text value and the ID with a pipe separator (textField + "|" + ID) in the first column. That tells Notes to store only the ID value (what follows the pipe is the "alias" and is what will be stored).
Note: I would avoid using @DocumentUniqueID as the unique ID for these values, as the document unique ID will change if the documents are copied and pasted, or the entire database is copied, etc. You can use the @Unique formula function in a computed-when-composed field to generate something close to a unique ID (almost like an identity column in SQL).
If you need relational properties, look for non-Notes solutions. It is possible to get some relational behavior using document UNIDs and update agents, but it will be harder than with a proper relational backend.
Your specific problem of referencing a piece of text that might change can, to some extent, be resolved by using aliases in the choice fields. If a dialog list contains values of the form...
Foo|id1
Bar|id2
...the form will display Foo but the back-end document will store the value id1 (and this is what you will be able to show in standard views, although XPages could solve that). Using @DocumentUniqueID for the alias can be a good idea under some circumstances.
It depends on where you're using the data. @DbLookup or @DbColumn will work in Lotus Notes fields if the fields are set to be computed for display. That way they always get the most up-to-date information when you open the form, etc.
If you make it so the data is saved onto the document, then you will have to write some update code for when you need to refresh the values.
The Lotus Notes help files for Designer are pretty good; have a look at those.
SM
You could use a key or alias to store the relationship to your lookup value, so that if the value itself changes, the connection remains because the alias is intact. For example, if your lookup values were being stored as a collection of documents, I'd have the @DBColumn retrieve Document UNID|lookup value pairs. When in display mode, you could then retrieve the value using @GetDocField. If the lookup values are in a different database, then you'd have to retrieve them for display using @DBLookup and construct a view that is keyed off the UNID or whatever key you decide to use. The only drawback to this technique is that you wouldn't be able to display the field value in views, as the actual value isn't stored in the document, just a reference to it. Using XPages, though, you COULD map the relationship into a dynamic data table just like you would in a truly relational system.
It's tricky, but using LEI, you could also use Notes to front-end a relational backend system, also giving you the dynamic relationship you desire in your lookups.
Hope this helps!
The content of the lookup can change freely. A problem only arises (as it would on any other platform in the same circumstances) if the lookup key changes, so you need to use a key that won't change. Human-readable text is an advantage, but if you want to be able to change your key description from, say, "Divisions" to "Business Units" and still have lookups work, you need to use an alias of some kind, which will presumably be mapped to your text description and only used internally. @Unique is pretty good for this, and gives a shortish key, if that is important to you. @DocumentUniqueID is most reliable, but as Ed pointed out, it will change (must change, since it's a new document) if you copy/paste or make a non-replica copy. This is easy to get around, though: create a computed-when-composed field (called, say, "LookupRef") on the form you are using for your reference document, with the formula "@DocumentUniqueID". That will capture the ID at the time of creation, and it will not change on copy/paste etc. Use that as your key.