I have a table in SQL Server 2019 that stores URIs like this:
documentID | documentlink
-----------|---------------------------------------------
1          | \\server\share\documentid\Contract.pdf
2          | \\server\share\documentid\Salesnumbers.xlsx
3          | \\server\share\documentid\NicePicture.xlsx
These values are stored as nvarchar. Is there a way to make these clickable?
So that, when this table is read by, for example, Power Query, users only have to click the link to open the file? It is assumed that users have applications installed to view all of the allowed file types.
This does not necessarily have to happen in SQL Server itself. If someone could tell me how to make the links clickable in, for example, Excel or Power BI, I would be grateful as well.
Adding file:\ in front of documentlink makes Excel turn it into a clickable link. Beyond that, my Google-fu abandoned me.
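For illustration, the prepending could also be done on the SQL side, so that every client sees the prefixed value. A minimal sketch (the view and table names here are assumptions):

-- Expose a computed column that prepends the file: scheme, so clients
-- such as Excel or Power BI can recognize the value as a hyperlink.
CREATE VIEW dbo.vw_ClickableDocuments AS
SELECT documentID,
       N'file:' + documentlink AS clickablelink  -- yields file:\\server\share\...
FROM dbo.Documents;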
In front-end applications, misspelled text gets a red underline. Similarly, is there any function or method with which we can identify and correct wrong spellings in a database? Databases like MSSQL, Oracle, or SAS.
HAVE | WANT
----------------------------------|---------------------------------
dis is windows based app | this is window based application
thsi phn has some many aplication | this phone has some many applications
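There is no built-in spell checker in these databases. As a rough sketch of one possible approach in T-SQL, you could fuzzy-match each word against a dictionary table using the built-in SOUNDEX/DIFFERENCE functions. The dbo.Dictionary table and its column are assumptions here, and this only suggests candidates rather than truly correcting spelling:

-- For each misspelled word, pick the dictionary entry whose SOUNDEX
-- code is most similar (DIFFERENCE returns 0-4, where 4 is the best match)
SELECT w.word,
       (SELECT TOP (1) d.correct_word
        FROM dbo.Dictionary AS d
        ORDER BY DIFFERENCE(d.correct_word, w.word) DESC) AS suggestion
FROM (VALUES ('dis'), ('thsi'), ('aplication')) AS w(word);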
I'm new to the AS400, and I got a job where I'm using the AS400 and Powerlink (XA) to access and manage big ERP data. I found a way to access the data through Excel VBA and SQL using the System i Navigator tables.
My problem is that I can't find the correct Schemas > Tables in Navigator, to feed the Excel VBA, that match the data I want in AS400.
Question: let's say I want to find the price for an item, and I want to find the price table in Navigator. Is there a way in AS400 to get the price table name that matches the same table in Navigator?
This is my first question please let me know if more information is needed.
Please help, thank you!
First, a little terminology: AS/400 is an old term; the current name for the platform and OS that used to be called AS/400 is IBM i on Power Systems. IBM i is the OS. (That is, until IBM changes the name again.)
If You Know the Table Name but not the IBM i Object Name
On IBM i, the database is built into the OS and many of the OS objects are in fact database objects. Here is how some of the SQL concepts map to IBM i terms.
SQL             IBM i
--------------  ------------------
Schema          Library
Table           Physical file
Index           Logical file
View            Logical file
Row             Record
Column          Field
Unfortunately, object names in IBM i are limited to 10 characters, while SQL names can be up to 128 characters. You won't find a physical file named CustomerMaster; DB2 maps that long name to a system name. You can find the system name by querying the catalog like this:
-- Returns the 10-character system names for a given long SQL table name
select system_table_schema, system_table_name
from qsys2.systables
where table_name = 'Navigator name'
The column TABLE_NAME holds the long SQL name of the table, while SYSTEM_TABLE_NAME holds the IBM i object name. Note that long schema names can be mapped to system names as well: the column TABLE_SCHEMA holds the long SQL name of the schema, while SYSTEM_TABLE_SCHEMA holds the IBM i library name. It is uncommon for schema names to be longer than 10 characters, so the two schema name columns are typically the same.
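The lookup works in the other direction too. If, for instance, Navigator only shows you a 10-character object name (ITMPRC is a made-up example here), you can retrieve the long SQL names like this:

-- Given an IBM i object name, find the long SQL schema and table names
select table_schema, table_name
from qsys2.systables
where system_table_name = 'ITMPRC'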
If You Know the Program Name, and Have Access to the Source
This may be obvious to you, but I am putting it here just for completeness. You can look in the source for the files being used, and backtrack from the screen field to the file.
If You Only Have A Green Screen
You can retrieve the open files for the current job if you have the appropriate authority. If this doesn't work for you, you will have to get help from your system administrator or from someone who does have authority. This will only get you candidate files, though, and they are likely logical files. To do this, you need the authority to view your job, and you will have to know how the System Request key is mapped to your keyboard (that is implementation specific and may be customized, so you will have to check with someone inside your company, or with your emulator's documentation, to determine that).
With that behind us, start the green-screen program that shows the price field you are looking for. Then press the System Request key. If you are configured to allow this, you will get an input line at the bottom of your screen, and the cursor will be positioned on it.
Press Enter.
You should now be in the System Request menu.
Select option 3 and press Enter again. You should now be in the Display Job screen for your current job.
If this all worked correctly, option 12 will show you the files your job currently holds a lock on; that is, the files that are open for your job. The price field should be in one of them.
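As an aside, if you also have SQL access, newer IBM i releases ship an SQL service that lists a job's open files. Whether your release includes this service is an assumption you would need to verify:

-- QSYS2.OPEN_FILES is an IBM i Service on newer releases
select *
from table(qsys2.open_files(job_name => '*'))  -- '*' means the current job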
My apologies if Stack Overflow is not the best place for this question.
I'm still relatively new to SQL, so I have a question about the best way to handle certain information. My job is to populate products on an ecommerce site with their relevant PDF files. These can range from product manuals to CAD drawings, brochures, data sheets, and so on.
At first it seems like I'd want to give each category of downloadable file its own column in the database. But that is going to get bloated: we sell a very large range of products, so the total number of downloadable file categories would be ridiculous.
My second thought was to load all the data into one column, but use something like JSON. When the data is pulled and read on the website, I could parse the JSON server-side to create the listing of file titles and URLs as a nice HTML list.
Is there a third option that I am overlooking? What's the best practice here?
What actual information are you storing regarding the files, just the filename and URL?
I would probably go for a single table, with a column specifying the document type. Assuming you have a central table with a unique key for the ProductId, it might look like this:
ProductId | DocType       | Filename  | URL
==========================================================
1         | CAD           | name1.pdf | http://...
1         | Manual        | name2.pdf | http://...
2         | Marketing Img | name3.jpg | http://...
...
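As a minimal sketch in T-SQL (all names here are assumptions):

CREATE TABLE ProductDocument (
    ProductId INT            NOT NULL,  -- references the central product table
    DocType   NVARCHAR(50)   NOT NULL,  -- e.g. 'CAD', 'Manual', 'Marketing Img'
    Filename  NVARCHAR(255)  NOT NULL,
    URL       NVARCHAR(2000) NOT NULL,
    CONSTRAINT PK_ProductDocument PRIMARY KEY (ProductId, DocType, Filename)
);

New document categories then become new rows instead of new columns.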
I am experiencing a problem with mirrored datasets. This situation occurred because the data model was switched a few months ago, and I was only recently assigned to this project, which already had a new application and data model in place.
I was tasked with importing all the data from the old MS Access application into the new one, and this is where the error has its source. The old data model was written in such a way that every dataset was also stored as its mirrored counterpart. Imagine a database table like this:
pk | A     | B
---|-------|------
1  | hello | world
2  | world | hello
I imported the data via a self-made staging process using Excel and VBA, and that worked fine. The staging was necessary because I wanted to create insert statements and therefore had to map all the old IDs, names, and so on to the new ones.
While testing the application after the import was done, I realized that the GUI showed all datasets twice. (The reason each dataset is shown twice, rather than once normally and once in mirrored form, is the way we fill the ListBox that shows the results.)
I found the cause of that error in the mirrored data and now would like to get rid of it. The first idea I had is rather long and probably over-complicated; that's why I am posting here, in hope of finding a shorter solution.
So, my idea is as follows and would use solely VBA coding:
1. Fill a recordSet with a SELECT * FROM mirroredDataTable.
2. For each record in the recordSet from 1.), write a SQL statement and check whether the recordCount of that statement's result is > 1.
3. If the resultCount is > 1, write one of the IDs in that result into a new recordSet or array.
4. Parse the recordSet/array from 3.) again and create a DELETE statement for each ID in there.
5. ???
6. Profit.
Now, I already have an idea for the SQL statement in 2.), but before I begin I'd just like to make sure that there is no "easy" way that I haven't considered yet or have simply overlooked.
Would greatly appreciate any help/info/tips you can provide.
PS: It is NOT an option to redesign the whole data model or anything along those lines (not my decision).
Thanks to @Gord Thompson I was able to solve this issue on a purely SQL basis. See the answer to this subthread for the detailed solution: How to INTERSECT in MS Access?
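For reference, one way such a purely SQL cleanup can look in Access SQL. This is a sketch, not necessarily the linked solution; the table and column names follow the example above, and keeping the row with the lower pk from each mirrored pair is an arbitrary choice:

DELETE FROM mirroredDataTable
WHERE pk IN (
    SELECT t1.pk
    FROM mirroredDataTable AS t1
    INNER JOIN mirroredDataTable AS t2
        ON t1.A = t2.B AND t1.B = t2.A
    WHERE t1.pk > t2.pk
);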
I am having trouble coming up with a good way to store a dataset that continually changes.
I want to track and periodically report on the contents of specific websites. For example, for a certain website I want to keep track of all the PDF documents that are available. Then I want to report periodically (say, quarterly) on the number of documents, PDF version number and various other statistics. In addition, I want to track the change of these metrics over time. E.g. I want to graph the increase in PDF documents offered on the website over time.
My input is basically a long list of URLs that point to all the PDF documents on the website. These inputs arrive intermittently, but they may not coincide with the dates I want to run the reports on. For example, in Q4 2010 I may get two lists of URLs, several weeks apart. In Q1 2011 I may get just one.
I am having trouble figuring out how to efficiently store this input data in a database of some sorts so that I can easily generate the correct reports.
On the one hand, I could simply insert the complete list into a table each time I receive a new one, along with the date of import. But I fear that the table would grow quite big in a short time, and most of it would be duplicate URLs.
On the other hand, I fear that it may get quite complicated to maintain a list of unique URLs or documents, especially when documents are added, removed, and then re-added over time. I fear I might get into the complexities of creating a temporal database. And I shudder to think what happens when the document itself is updated but the URL stays the same (in that case the metadata might change, such as the PDF version, file size, etcetera).
Can anyone recommend a good way to store this data so I can generate reports from it? I would especially like the ability to retroactively generate reports. E.g. when I start tracking a new website in Q1 2011, I would like to be able to generate a report from the Q4 2010 data as well, even though the Q1 2011 data has already been imported.
Thanks in advance!
Why not just use a single table, called something like URL_HISTORY:
URL         VARCHAR  (PK)
START_DATE  DATE     (PK)
END_DATE    DATE
VERSION     VARCHAR
Have END_DATE be either NULL or a suitable dummy date (e.g. 31-Dec-9999) where the version has not been superseded; set END_DATE to the last valid date where the version has been superseded, and create a new record for the new version - e.g.
+------------------+-------------+--------------+---------+
|URL | START_DATE | END_DATE | VERSION |
|..\Harry.pdf | 01-OCT-2009 | 31-DEC-9999 | 1.1.0 |
|..\SarahJane.pdf | 01-OCT-2009 | 31-DEC-2009 | 1.1.0 |
|..\SarahJane.pdf | 01-JAN-2010 | 31-DEC-9999 | 1.1.1 |
+------------------+-------------+--------------+---------+
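A quarterly report then becomes a simple range query over the validity interval. A sketch, with the reporting date as an example (date literal syntax depends on your DBMS):

-- Count the documents that were live at the end of Q4 2010
SELECT COUNT(*) AS document_count
FROM URL_HISTORY
WHERE START_DATE <= '2010-12-31'
  AND END_DATE   >= '2010-12-31'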
What about using a document database? Instead of saving each URL individually, you save a document that has a collection of URLs. Then, whenever you execute whatever process iterates over all the URLs, you get all of the documents that exist in a given time frame (or whatever qualifications you have), and then run over all of the URLs in each of those documents.
This could also be emulated in SQL Server by serializing your object to JSON or XML and storing the output in a fitting column.
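A sketch of that emulation in T-SQL (SQL Server 2016 or later for the JSON functions; all object names here are assumptions):

-- One row per imported list; the whole URL collection is one JSON array
CREATE TABLE UrlSnapshot (
    SnapshotId INT IDENTITY PRIMARY KEY,
    ImportedAt DATE NOT NULL,
    UrlsJson   NVARCHAR(MAX) NOT NULL CHECK (ISJSON(UrlsJson) = 1)
);

-- Expand every snapshot in a date range back into individual URLs
SELECT s.ImportedAt, j.[value] AS url
FROM UrlSnapshot AS s
CROSS APPLY OPENJSON(s.UrlsJson) AS j
WHERE s.ImportedAt BETWEEN '2010-10-01' AND '2010-12-31';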