Updating text file contents using a VBA statement or code - VBA

I have a question regarding Text file update through VBA code.
Please see the format of my text file.
No FirstName LastName JoinDate
0001 Anil Patel 01011901
0002 David Sam 01121973
0003 Jaya Shah 01011890
My question: I want to update the join dates 01011901 and 01011890 to 01012001. We have many large text files and many dates that we currently have to update manually.
If anyone can help solve this problem, it will greatly reduce our manual work.
Thank you very much in advance
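A minimal VBA sketch of one way to do this: read each file into a string, replace the old join dates, and write it back. The file path is hypothetical, and the blind Replace assumes those date strings only ever occur in the JoinDate column, so adjust it to your data before relying on it.

Sub UpdateJoinDates()
    ' Read the whole file, swap the obsolete join dates, write it back.
    ' Assumes 01011901 / 01011890 appear only in the JoinDate column.
    Dim filePath As String
    Dim fileNum As Integer
    Dim contents As String

    filePath = "C:\data\people.txt"   ' hypothetical path - point this at your file

    ' Read the entire file into one string
    fileNum = FreeFile
    Open filePath For Input As #fileNum
    contents = Input$(LOF(fileNum), fileNum)
    Close #fileNum

    ' Replace the old dates with the new one
    contents = Replace(contents, "01011901", "01012001")
    contents = Replace(contents, "01011890", "01012001")

    ' Overwrite the file with the updated contents
    fileNum = FreeFile
    Open filePath For Output As #fileNum
    Print #fileNum, contents
    Close #fileNum
End Sub

To cover many files at once, the same routine could be wrapped in a Dir() loop over the folder.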

Related

Present URI in database as clickable link

I have a table in SQL Server 2019 that stores URIs like this:
documentID | documentlink
1 | \\server\share\documentid\Contract.pdf
2 | \\server\share\documentid\Salesnumbers.xlsx
3 | \\server\share\documentid\NicePicture.xlsx
These values are stored as nvarchar. Is there a way to make these clickable?
So that when this table is read by, for example, Power Query, users only have to click the link to open the file? It is assumed that only file types for which the users have applications to view them are allowed.
This does not necessarily have to be in SQL Server itself. If someone could tell me how to make the links clickable in, for example, Excel or Power BI, I would be grateful as well.
Adding file:\ in front of documentlink makes Excel turn it into a clickable link. My Google-fu abandoned me.
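Not an authoritative fix, but one common approach is to build that file: prefix on the SQL side so Excel, Power Query, or Power BI can treat the column as a hyperlink. A sketch, assuming the table is called dbo.Documents (the real table name isn't given above):

-- dbo.Documents is a placeholder name; substitute your own table.
-- Prefixing the UNC path with file: gives Excel / Power BI something it can treat as a link.
SELECT documentID,
       documentlink,
       N'file:' + documentlink AS clickable_link
FROM dbo.Documents;

In Power BI the resulting column can then be assigned the Web URL data category; in Excel, wrapping the same value in a HYPERLINK formula has a similar effect.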

An alternative to creating tables that only store default data?

I have a small database where most of my tables have a duplicate table whose sole purpose is to hold default data. For example, I have "tableA" and "default_TableA", and default_TableA may have the following values:
default_TableA
id | name
----------
1 | John
2 | Mark
The purpose of this is so that values can be added to tableA and in the event that I need to reset tableA to the default values, I can clear it then copy the values from default_TableA.
My question is whether there is a better way to do this, or whether it's better for the application to store the default values somewhere and insert them into the table when needed.
My apologies if my question isn't clear or has already been asked. I wasn't sure how to google this question, and what I've found doesn't really answer it. If you have a link to a similar question that's already been answered, I would greatly appreciate it.
Thanks
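One way to keep your current layout and make the reset a single script is sketched below, under the assumption of a SQL Server-style dialect and the example columns above; whether this beats storing the defaults in the application is mostly a question of where you want them maintained.

-- Reset tableA to its default contents.
-- Assumes tableA and default_TableA share the same columns (id, name).
BEGIN TRANSACTION;

DELETE FROM tableA;

INSERT INTO tableA (id, name)
SELECT id, name
FROM default_TableA;

COMMIT;
-- If id is an IDENTITY column, wrap the INSERT in
-- SET IDENTITY_INSERT tableA ON / OFF.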

Merge multiple rows in fixed width file source into one row

I'm working with the craziest file format I've seen. It is fixed width and contains multiple record types (in the sense that each row may have different columns and widths). There's a file header, a trailer, and then a fixed number of rows that, when put together, make up one record. The problem I'm having is that nothing in the rows tells you they belong to the same record other than their sort order and a row-number attribute.
Example:
001 David Wellingsworth Mr.
002 312-555-5555 3060 W Maple St. Chicago
001 Jimothy Bogendath Dr.
002 563-555-5432 123 Main St. Davenport
My question is therefore: is it possible, without using a Script Component, to process a file like this? I understand the basic concept of how to handle disparate record types in a fixed width file (making use of conditional splits and substrings), but I can't get past how to join up all this data after the splits if the rows don't have identifiers.
If it helps, my question is basically this previous question but in reverse.
Possible, but with some work. I've worked with data like this, and this is the approach we used to solve it:
Build a table that gives each record its own unique RecordID
Create another table for your files that logs the filename and a unique FileID
Link the FileID to the RecordID so you know which file each record came from
Build all your sub tables linking to each unique RecordID
Building your tables this way will give you:
A unique RecordID for each row (though there may be duplicates in the file, in your tables they are unique).
Knowing which file each record comes from.
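A rough T-SQL sketch of that layout; every table and column name here is illustrative:

-- One row per imported file.
CREATE TABLE dbo.ImportFile (
    FileID   INT IDENTITY(1,1) PRIMARY KEY,
    FileName NVARCHAR(260) NOT NULL
);

-- One row per logical record; RecordID stays unique even if files repeat row numbers.
CREATE TABLE dbo.ImportRecord (
    RecordID INT IDENTITY(1,1) PRIMARY KEY,
    FileID   INT NOT NULL REFERENCES dbo.ImportFile (FileID)
);

-- Sub tables, one per record type (001, 002, ...), each linked to its RecordID.
CREATE TABLE dbo.RecordPerson (        -- the 001 rows
    RecordID  INT NOT NULL REFERENCES dbo.ImportRecord (RecordID),
    FirstName NVARCHAR(50),
    LastName  NVARCHAR(50),
    Title     NVARCHAR(10)
);

CREATE TABLE dbo.RecordContact (       -- the 002 rows
    RecordID INT NOT NULL REFERENCES dbo.ImportRecord (RecordID),
    Phone    NVARCHAR(20),
    Street   NVARCHAR(100),
    City     NVARCHAR(50)
);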

How do I normalize data when an external program writes several records into one record?

I am using an external program that writes to Access. It collects data from an electronic form and writes everything from the submitted form to my Access database. The problem, which has caused a lot of trouble and slowed down our database, is that the data it writes is not normalized.
The form looks something like this
Name: John Doe
DOB: April 1 1950
SIN: 123456789
Marital Status: Married
Phone: 123456789
Email: john@email.com
Then it writes everything on the form as one record using the Question as the field name and the entered data as the data. Something like this:
Name | DOB | SIN | Marital_Status | Phone | Email
John Doe | April 1 1950 | 123456789 | Married | 123456789 | john@email.com
See, this isn't much of an issue with the example form here; however, we have forms with about 100 questions, with which we end up with a table with fields like:
Name|Date|Weather|Question1|Question2|Question3|Question4|...|Question100
.... and so forth.
As a noob, what I have done thus far is use a UNION SQL query to manipulate the data so that it reads:
Name|Date|Weather|Question1
Name|Date|Weather|Question2
Name|Date|Weather|Question3
Name|Date|Weather|QuestionN
I have been able to get by with this but it is seriously slowing down my database and now I am having other issues.
How can I normalize this data when the external program writes data like this? I don't get to manipulate how the program writes to my Access Database.
Access 2010 has a feature called event-driven data macros, which are similar to triggers in other database systems. I don't personally have any experience using them, but it looks like you should be able to create an After Insert macro that will run when a new row is inserted. Within that macro you could split your questions up and insert them into a more normalized table (which you would then use to report off of).
You're doing it correctly; a union query is indeed the correct way to normalize a denormalized table. However, consider storing the data normalized in addition to denormalized, so you can actually work with it without having Access execute 100 queries every time you want to access your data. And consider splitting Name|Date|Weather into a separate table, since you are currently repeating those values once for every one of the 100 questions.
You can store the union query result in a table by simply doing SELECT * INTO MyTable FROM UnionQuery. Combine the import from the other program with this query in a macro.
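For what it's worth, a sketch of that pattern in Access SQL, using the example field names above. WideFormTable and NormalizedAnswers are placeholder names, and UnionQuery stands for a saved query that unpivots the wide table with one SELECT per question column, like this:

SELECT [Name], [Date], [Weather], "Question1" AS QuestionName, [Question1] AS Answer
FROM WideFormTable
UNION ALL
SELECT [Name], [Date], [Weather], "Question2", [Question2]
FROM WideFormTable
UNION ALL
SELECT [Name], [Date], [Weather], "Question3", [Question3]
FROM WideFormTable;

and the make-table step, run after each import (dropping or emptying NormalizedAnswers first if it already exists):

SELECT * INTO NormalizedAnswers
FROM UnionQuery;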
Obviously, this is not ideal. The ideal fix would be to change the external program so it does not denormalize the data in the first place.

Syntax for adding to this T-SQL Query

I built a simple UI for our users to query our SQL Server DB. The UI started off as just one input field for a person's name. This field's input is used to search 3 fields in our database. The query up until now looks like this:
SELECT [Id], [Url], [PersonName], [BusinessName], [DOB], [POB], [Text]
FROM dbo.DataAggregate
WHERE CONTAINS([PersonName], 'NEAR((john, doe), 2, FALSE)')
OR CONTAINS([BusinessName], 'NEAR((john, doe), 2, FALSE)')
OR CONTAINS([Text], 'NEAR((john, doe), 2, FALSE)')
The above assumes the user queried on John Doe. The requirement for NEAR has to do with the format inconsistencies across data in our fields, but that's not relevant to this question, just an FYI.
Now, I've been instructed to add 4 more input fields in the UI to allow users to further tailor their query. These fields already exist in the DB records. My question is how do I add on to the above query for when the additional fields in the UI are used? Am I simply just adding several AND statements to it or OR statements to it?
Let me give an example to help you help me:
User Query:
Person Name: John Doe
DOB: 01/01/1900
Address: 123 Main St
POB: USA
Occupation: Worker
How would I add to my query to include the data for the other 4 input fields? Initially, to handle which input fields are populated and which are not, do I need IF statements in the query?
Each value in the other 4 input fields would need to be searched for in its own field, plus the Text field - i.e.
- The DOB would need to be searched for in the DOB field and the Text field
- The Address would need to be searched for in the Address field and the Text field
etc.
It just seems there has to be a more efficient way to structure a query like this than having basically 5 sections similar to my above query separated by IF/AND/OR.
Thank you.
If you use parameters, you can get around unpopulated inputs with this trick:
Where
((@input1 is null) or (somefield = @input1))
Or / And
...
This is going to get messy real quick though; they'll come up with more inputs next week.
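A sketch of how that could look for the query above; @DOB, @Address, @POB and @Occupation are illustrative parameter names, and the Address/Occupation column names are assumed since they don't appear in the original SELECT. Each extra input's full-text search against the Text field would be built the same way the name term is built today, so this sketch sticks to plain comparisons:

SELECT [Id], [Url], [PersonName], [BusinessName], [DOB], [POB], [Text]
FROM dbo.DataAggregate
WHERE ( CONTAINS([PersonName], 'NEAR((john, doe), 2, FALSE)')
        OR CONTAINS([BusinessName], 'NEAR((john, doe), 2, FALSE)')
        OR CONTAINS([Text], 'NEAR((john, doe), 2, FALSE)') )
  -- each optional input is ignored when its parameter comes in as NULL
  AND ( @DOB IS NULL OR [DOB] = @DOB )
  AND ( @Address IS NULL OR [Address] LIKE '%' + @Address + '%' )
  AND ( @POB IS NULL OR [POB] = @POB )
  AND ( @Occupation IS NULL OR [Occupation] = @Occupation )
OPTION (RECOMPILE);  -- helps SQL Server pick a sensible plan per set of inputs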
Other options:
Grid with filter capability.
Data dump to, say, Excel.
Building the query programmatically, with parameters (see the sketch below).
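And a short sketch of that last option, building the statement programmatically and running it with sp_executesql; again, the parameter names and the DOB example are illustrative:

DECLARE @NameTerm NVARCHAR(200) = N'NEAR((john, doe), 2, FALSE)';
DECLARE @DOB DATE = NULL;  -- NULL when the user left that input empty

DECLARE @sql NVARCHAR(MAX) = N'
SELECT [Id], [Url], [PersonName], [BusinessName], [DOB], [POB], [Text]
FROM dbo.DataAggregate
WHERE ( CONTAINS([PersonName], @NameTerm)
        OR CONTAINS([BusinessName], @NameTerm)
        OR CONTAINS([Text], @NameTerm) )';

-- Append a predicate only when its input was actually supplied
IF @DOB IS NOT NULL
    SET @sql += N' AND [DOB] = @DOB';

EXEC sp_executesql @sql,
     N'@NameTerm NVARCHAR(200), @DOB DATE',
     @NameTerm = @NameTerm, @DOB = @DOB;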