I have created a key called key1 which contains Firstname, Lastname and Age.
I want to delete all the fields created under key1, but not the key itself. What command do I need to use to delete all the fields and their values?
My hashmap key1 contains Firstname, Lastname and Age. I want to delete all the fields (Firstname, Lastname, and Age). I tried using -
HDEL key1 Firstname
It worked, but I had to delete one field at a time. I am looking for a command that deletes all the fields at once.
I expect all the fields to be deleted at once but the key should remain in Redis.
It is definitely not possible.
Redis creates a hash when the first field is inserted and deletes the hash when the last field is removed, so it is not possible to keep an empty hash structure in Redis.
For more info, see redis-doesn't-store-empty-hashes
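That said, HDEL does accept multiple field names in a single call (since Redis 2.4), so the fields themselves can be removed in one command; the key will still disappear together with its last field:

HDEL key1 Firstname Lastname Age
EXISTS key1

HDEL returns the number of fields actually removed (3 here), and EXISTS then returns 0, confirming that deleting the last field also deletes the key.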
This is my first post here, as I am starting a new job where my old SQL skills are already being put to the test; I was not an advanced user before either.
I believe some existing answers here might address my question, but I am still a bit unfamiliar, both with the forum and with more advanced SQL syntax, and some of those answers are quite old.
Please excuse any grammatical errors.
Anyway, I would appreciate any help.
I will be receiving huge lists (.csv or similar) of input data.
The lists will contain customer-data fields (name, address, etc.) and real estate/property-data fields (street address, building IDs, etc.).
The customer data and the property data need to be put into two separate tables.
My problem is this:
- The two tables depend on each other: the property table needs to be populated first, which generates a unique GUID, and that GUID is then used when populating the customer table, connecting each customer to the correct property/real estate.
The first thing that came to mind was to load all the data into a temporary table.
But I am not quite sure how to loop through each row so that the property table is populated first, and then the customer table using the GUID. My rough plan:
1. Take the property-related data and populate the property table.
2. Get the unique GUID generated in the property table.
3. Take the customer-related data and populate the customer table, with the correct GUID.
4. Loop through the rest of the set until there are no more rows of data.
I have seen things like transactions, cursors, and OUTPUT clauses that seem to be within scope here, but I am not sure which would be the best way to solve my challenge. Am I on the right track thinking along those lines?
EDIT:
These are example fields that will arrive as one row in .xlsx/.csv format.
The number of rows in such a list will vary from time to time.
Property ID
Property address
Property building ID (only one)
Property established date
...Misc other property related fields
Customer ID
Customer Name
Customer Address
Postal code
...Misc other customer related fields
Fields 1 through 5 will need to populate the property table first. As each row is inserted into the property table, it will generate a unique GUID.
Then fields 6 through 10 will be used to populate the customer table, which also needs the corresponding unique GUID created above in the property table.
Property table:
Property ID
Property address
Property building ID (only one)
Property established date
...Misc other property related fields
UNIQUE PROPERTY GUID (created when populating each new row in table)
Customer table:
UNIQUE PROPERTY GUID
Customer ID
Customer Name
Customer Address
Postal code
...Misc other customer related fields
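In SQL Server terms, the two tables might look roughly like this (illustrative names and column types, not the actual schema):

CREATE TABLE Property (
    PropertyGuid     uniqueidentifier NOT NULL PRIMARY KEY DEFAULT NEWID(),
    PropertyID       int,
    PropertyAddress  nvarchar(200),
    BuildingID       int,
    EstablishedDate  date
);

CREATE TABLE Customer (
    PropertyGuid     uniqueidentifier NOT NULL REFERENCES Property (PropertyGuid),
    CustomerID       int,
    CustomerName     nvarchar(200),
    CustomerAddress  nvarchar(200),
    PostalCode       nvarchar(20)
);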
I suggest you use a staging table:
Load all the data into a staging / temporary table and assign a GUID to each row.
Copy the property details from the staging table to the property table, taking the GUID from the staging table.
Copy the customer details from the staging table to the customer table, taking the GUID from the staging table.
Delete the data in the staging table.
A quick example:
INSERT INTO PropertyDetails( GUID, PropertyID, PropertyAddress, ... )
SELECT GUID, PropertyID, PropertyAddress, ...
FROM StagingTable
INSERT INTO CustomerDetails( GUID, CustomerID, CustomerName, ... )
SELECT GUID, CustomerID, CustomerName, ...
FROM StagingTable
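Fleshing that out, a minimal sketch assuming SQL Server (table and column names are illustrative): the DEFAULT NEWID() gives every staged row its GUID when the file is loaded, and the two set-based INSERT ... SELECT statements carry that same GUID into both tables, so no cursor or row-by-row loop is needed.

CREATE TABLE StagingTable (
    GUID             uniqueidentifier NOT NULL DEFAULT NEWID(),  -- generated once per staged row
    PropertyID       int,
    PropertyAddress  nvarchar(200),
    CustomerID       int,
    CustomerName     nvarchar(200)
    -- remaining property and customer fields
);

-- load the .csv into StagingTable here (SSIS, BULK INSERT, ...); GUID fills in automatically

INSERT INTO PropertyDetails ( GUID, PropertyID, PropertyAddress )
SELECT GUID, PropertyID, PropertyAddress
FROM StagingTable;

INSERT INTO CustomerDetails ( GUID, CustomerID, CustomerName )
SELECT GUID, CustomerID, CustomerName
FROM StagingTable;

TRUNCATE TABLE StagingTable;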
Let's say that I have the following columns in Excel: Level, Group, Code, Name, Date, Additional info, with the following values:
1, A, 1234, John, 2019-09-01, info 1
1, A, 1234, John, 2019-09-01, info 2
My current import logic is: if there is no record in the database with a given code and level, a new record is inserted; if the code already exists in the database, the record is updated. But since there is no unique identifier in the Excel file, it is quite hard to update the correct record. What are the common approaches in such cases?
Let's say that, in the example above, the Group or the Date changes for one record. How do I implement logic that updates the correct record in the database?
You aren't going to be able to have a distinct dataset if there isn't a unique primary key. Without one, you will not be able to update only one row, but rather one or more similar rows. In the current state, it is not possible to accurately track changes.
If you did have a unique primary key the simplest solution would be to append a datetime as a way to track changes and add it as a new row when any value changes. Your dataset would look like:
1, A, 1234, John, 2019-09-01, Info1, DateCreated, DateChanged
1, A, 1234, John, 2019-09-01, Info2, DateCreated, DateChanged2
1, A, 1234, John, 2019-09-01, Info3, DateCreated, DateChanged3
It is important to remember that this only works with a static primary key; certain fields that are commonly used in composite keys may not work. Users could change their name or correct an incorrectly entered birth date, which would change some composite keys.
In SSIS this would be implemented using two Lookup tasks:
In the first Lookup task, compare the primary key. If the primary key does not exist, use a Derived Column task to set DateCreated and DateModified to GETDATE().
If the primary key does exist, run a second Lookup task that compares all the columns of the record. If they are all identical, there were no changes to the record and no update needs to be sent to the database.
If there is a difference, use a Derived Column task to set only the DateModified column to GETDATE() and add the record as a new row.
These three branches account for every potential state: new record, existing record with no changes, and existing record with changes.
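Outside SSIS, the same three branches can be sketched in plain T-SQL (Source, Target, and the column names here are illustrative, not from the original post):

WITH Latest AS (
    -- most recent version of each record currently in the target
    SELECT t.*, ROW_NUMBER() OVER (PARTITION BY t.Id ORDER BY t.DateModified DESC) AS rn
    FROM Target t
)
INSERT INTO Target (Id, GroupCode, Code, Name, EventDate, Info, DateCreated, DateModified)
SELECT s.Id, s.GroupCode, s.Code, s.Name, s.EventDate, s.Info,
       COALESCE(l.DateCreated, GETDATE()),                 -- keep the original creation date for existing keys
       GETDATE()
FROM Source s
LEFT JOIN Latest l ON l.Id = s.Id AND l.rn = 1
WHERE l.Id IS NULL                                         -- branch 1: new record
   OR s.GroupCode <> l.GroupCode OR s.Info <> l.Info;      -- branch 3: changed record, appended as a new version
-- branch 2 (identical record) matches neither condition, so nothing is written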
Since you don't have a unique identifier, you should try to create your own. In a similar case I used a derived column to concatenate all the values that I am sure will not change. For example:
Level + "|" + Code + "|" + Name + "|" + Additional info
Then I compare the source data and the existing data based on that derived column. You can choose to store the derived column in the destination database, or use a staging table to hold these values.
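The same key can also be computed on the database side; a sketch assuming SQL Server 2017+ (for CONCAT_WS) and an illustrative source table:

SELECT CONCAT_WS('|', [Level], Code, Name, AdditionalInfo) AS BusinessKey,
       *
FROM SourceData;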
Consider a column family named PEOPLE. Suppose the columns are:
1. id text (PRIMARY KEY)
2. first_name text
3. last_name text
4. countries_visited Set<text>
I have created one secondary index on "countries_visited".
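(The index would have been created with something like the following; the index name is illustrative.)
CREATE INDEX people_countries_idx ON PEOPLE (countries_visited);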
Now if I fire up a query like this:
select id, first_name, countries_visited from PEOPLE where countries_visited CONTAINS 'FRANCE';
For a large number of records, it returns many rows where countries_visited contains other strings (not "FRANCE").
I also ran the nodetool rebuild_index utility, but I still get such results.
Is this the expected behavior?
I am using:
CQL 3.2.1
Cassandra 2.1.11.908
I have two tables, customer_name and customer_phone, but a unique customer is identified by the combination of all four columns across the two tables.
Since we have multiple source systems inserting into the tables below at the same time, each of those jobs validates before insert, using a function that checks whether the customer already exists based on the (f_name, l_name, area_code, phone_num) combination. However, we still see duplicates getting inserted, because the validation can run while another job has already inserted but not yet committed. Is there any solution to avoid duplicates?
customer_name
Col: ID, First_name, Last_name
customer_phone
Col: ID, Area_code, Phone_number
Yes. Don't do the checking in the application. Let the database do the checking by using unique indexes/constraints. If I had to guess on the constraints you want:
create unique index idx_customer_name_2 on customer_name(first_name, last_name);
create unique index idx_customer_phone_2 on customer_phone(id, area_code, phone_number);
The database will then do the checking and you don't have to worry about duplicates -- although you should check for errors on the insert.
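For example, with the unique index in place, the insert itself becomes race-safe; a sketch assuming SQL Server, where 2601 and 2627 are the duplicate-key error numbers:

BEGIN TRY
    INSERT INTO customer_name (id, first_name, last_name)
    VALUES (@id, @first_name, @last_name);
END TRY
BEGIN CATCH
    -- 2601: duplicate row in a unique index; 2627: unique constraint violation
    IF ERROR_NUMBER() IN (2601, 2627)
        PRINT 'Customer already exists; another session inserted it first.';
    ELSE
        THROW;
END CATCH;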
I am trying to delete a record in SQLite. I have four records: record1, record2, record3, record4,
with id as the primary key,
so it auto-increments for each record that I insert. Now when I delete record3, the primary key does not decrement. What can I do to decrement the ids based on the records I delete?
I want the ids to be 1, 2, 3 when I delete record3 from the database; right now they are 1, 2, 4. Is there any SQL query to change this? I tried this one:
DELETE FROM TABLE_NAME WHERE name = ?
Note: I am implementing this in Xcode.
I don't know why you want this but I would recommend leaving these IDs as is.
What is wrong with having IDs as 1,2,4?
Also you can potentially break things (referential integrity) if you use these ID values as foreign keys somewhere else.
Also, please refer to this page to get a better understanding of how autoincrement fields work:
http://sqlite.org/autoinc.html
The point of auto increment is always to create a new unique ID, never to fill the gaps created by deleting records.
EDIT
You can achieve this with a special table design: records are never actually deleted; instead, a field "del" marks a record as deleted.
For example, "SELECT ... WHERE del = 0" will find all active records.
If you leave out the WHERE clause, you get all the records and the ids remain unaffected; when looping through the result array, skip deleted rows with "if del > 0 continue". That way the array is always in consecutive order.
It's very flexible. Depending on the SELECT, you get:
all active records
all the deleted records
all records
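A minimal sketch of that design in SQLite (table and column names are illustrative):

CREATE TABLE records (
    id   INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT,
    del  INTEGER NOT NULL DEFAULT 0        -- 0 = active, 1 = deleted
);

-- "delete" a record without disturbing the ids of the others
UPDATE records SET del = 1 WHERE name = ?;

SELECT * FROM records WHERE del = 0;       -- all active records
SELECT * FROM records WHERE del = 1;       -- all the deleted records
SELECT * FROM records;                     -- all records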