Import database table (specific row) based on matching validator - sql

I have a database running now in which all of the data in the phone number column of the "leads" table has been removed. I have created an updated CSV file that has most of the phone numbers present, in addition to the client name, email, and address.
How can I import the phone numbers into the phone number column by matching on the client name, email, or address data, without affecting any columns or rows other than the phone number column?

This sounds like the perfect fit for an SSIS package! (This is assuming you are referring to SQL Server; since you didn't list an RDBMS, it is just a guess.)
Some materials on SSIS package basics:
http://www.codeproject.com/Articles/155829/SQL-Server-Integration-Services-SSIS-Part-Basics
https://technet.microsoft.com/en-us/library/ms169917(v=sql.110).aspx
http://ssistutorial.blogspot.com/
SSIS is basically an ETL package development tool used with SQL Server that has countless options for moving data around. You would only need one data flow task inside of SSIS to accomplish what you are looking for. I highly recommend reading up on some of the content above and giving it a shot!
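If a one-off T-SQL approach is acceptable instead of a full SSIS package, here is a minimal sketch of the same idea, assuming SQL Server: load the CSV into a staging table, then update the real table by matching on email. Every name here (dbo.Leads, dbo.LeadsStaging, the columns, and the file path) is a hypothetical placeholder.

-- Staging table shaped like the CSV (all names are assumptions)
CREATE TABLE dbo.LeadsStaging (
    ClientName  nvarchar(200),
    Email       nvarchar(200),
    Address     nvarchar(400),
    PhoneNumber varchar(30)
);

-- Load the CSV; adjust the path and terminators to match your file
BULK INSERT dbo.LeadsStaging
FROM 'C:\import\leads.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- Fill in only the missing phone numbers, matching rows by email
UPDATE l
SET l.PhoneNumber = s.PhoneNumber
FROM dbo.Leads AS l
JOIN dbo.LeadsStaging AS s ON s.Email = l.Email
WHERE l.PhoneNumber IS NULL;

Matching on client name or address instead is just a different join condition, but email is usually the safest key of the three, since names and addresses are prone to formatting differences.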

AS400 and System I Navigator

I'm new to AS400, and I got a job where I'm using AS400 and Powerlink (XA) to access and manage big ERP data. I found a way to access the data through Excel VBA and SQL using the System i Navigator tables.
My problem is that I can't find the correct schemas and tables in Navigator to feed the Excel VBA that match the data I want in AS400.
Question: let's say I want to find the price for an item, and I want to find the price table in Navigator. Is there a way in AS400 to get the price table name that matches the same table in Navigator?
This is my first question; please let me know if more information is needed.
Please help, thank you!
First, a little terminology: AS/400 is an old term. The current name for the platform and OS that used to be called AS/400 is IBM i on Power Systems; IBM i is the OS. (That is, until IBM changes the name again.)
If You Know the Table Name but not the IBM i Object Name
On IBM i, the database is built into the OS and many of the OS objects are in fact database objects. Here is how some of the SQL concepts map to IBM i terms.
SQL concept      IBM i term
-----------      --------------
Schema           Library
Table            Physical file
Index            Logical file
View             Logical file
Row              Record
Column           Field
Unfortunately, IBM i object names are limited to 10 characters. SQL names, on the other hand, can be up to 128 characters. You won't find a physical file named CustomerMaster; DB2 maps that long name to a system name. You can find the system name by querying the catalog like this:
select system_table_schema, system_table_name
from qsys2.systables
where table_name = 'Navigator name'
The column TABLE_NAME holds the long SQL name of the table, while SYSTEM_TABLE_NAME holds the IBM i object name. Note that long schema names are mapped to system names as well: in SYSTABLES, the column TABLE_SCHEMA holds the long SQL name of the schema, while SYSTEM_TABLE_SCHEMA holds the IBM i library name. It is uncommon for schema names to be longer than 10 characters, so the two schema name columns are usually identical.
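If you only need to map a long schema name to its 10-character library name, the QSYS2.SYSSCHEMAS catalog view exposes the same pairing; the schema name below is just an example:

select schema_name, system_schema_name
from qsys2.sysschemas
where schema_name = 'MYLONGSCHEMANAME'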
If You Know the Program Name, and Have Access to the Source
This may be obvious to you, but I am including it for completeness. You can look in the source for the files being used, and backtrack from the screen field to the file.
If You Only Have A Green Screen
You can retrieve the open files for the current job if you have the appropriate authority. If this doesn't work for you, you will have to get help from your system administrator or someone who does have the authority. This will only get you candidate files, though, and they are likely logical files. To do this, you will need authority to view your job, and you will have to know how the System Request key is mapped on your keyboard (that is implementation specific and may be customized, so you will have to check with someone inside your company, or your emulator's documentation, to find out).
With that behind us, start the green-screen program that shows the price field you are looking for, then press the System Request key. If you are configured to allow this, you will get an input line at the bottom of your screen, and the cursor will be positioned on it.
Press Enter.
You should now be in the System Request menu.
Select option 3 and press Enter again. You should now be in the Display Job screen for your current job.
If this all worked correctly, option 12 will show you the files your job currently has a lock on, that is, the files that are open for your job. The price field should be in one of them.

Dynamic number of columns exceeds max column limitation SQL Server

I have what I consider a real need to create a query with several hundred columns.
We are working on a mailing for our client. In this mailing, they are listing out several locations where their customers can go to get information. As our designers create the template for this mailing, they are setting up "Slots" for each address. The number of slots on the mailing varies from one mailing to the other, from 6 to possibly 50.
My need for the query is to set up the merge of data into the mailing. I need to provide a query where each mailing is one record containing all the information needed for that mailing. I am dynamically creating the SQL statement with the max number of slots on that mailing. With up to 50 slots, my query needs to look like this:
MailingID,
LogoLocation,
APNCode,
TFN,
CopyVersion,
Slot1_Name,
Slot1_Address,
Slot1_City,
Slot1_State,
Slot1_DateTime,
...
Slot50_Name,
Slot50_Address,
Slot50_City,
Slot50_State,
Slot50_DateTime
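For reference, here is a minimal sketch of the kind of statement that the dynamic SQL generation produces, using conditional aggregation over a normalized slot table. The table and column names (dbo.Mailing, dbo.MailingSlot, SlotNo, and so on) are assumptions, and only the first two slots are spelled out:

SELECT m.MailingID, m.LogoLocation, m.APNCode, m.TFN, m.CopyVersion,
       MAX(CASE WHEN s.SlotNo = 1 THEN s.Name END)         AS Slot1_Name,
       MAX(CASE WHEN s.SlotNo = 1 THEN s.Address END)      AS Slot1_Address,
       MAX(CASE WHEN s.SlotNo = 1 THEN s.City END)         AS Slot1_City,
       MAX(CASE WHEN s.SlotNo = 1 THEN s.State END)        AS Slot1_State,
       MAX(CASE WHEN s.SlotNo = 1 THEN s.SlotDateTime END) AS Slot1_DateTime,
       MAX(CASE WHEN s.SlotNo = 2 THEN s.Name END)         AS Slot2_Name
       -- ...and so on, up to Slot50_DateTime
FROM dbo.Mailing AS m
JOIN dbo.MailingSlot AS s ON s.MailingID = m.MailingID
GROUP BY m.MailingID, m.LogoLocation, m.APNCode, m.TFN, m.CopyVersion;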
My first attempt was to create a table with all these fields, but I got this error:
The table has been created, but its maximum row size exceeds the allowed maximum of 8060 bytes. INSERT or UPDATE to this table will fail if the resulting row exceeds the size limit.
They only want the data in a CSV file, so I don't need to create a temp table for it.
My problem is that I'm trying to create a standard process, and with the number of fields varying like that, I want to set this up in a way that won't blow up the system every time we run it.
I've looked at a few pages and found details on the size limitations of SQL Server, and several comments saying that a table like this indicates a bad database design.
http://msdn.microsoft.com/en-us/library/ms143432(v=sql.105).aspx
http://social.msdn.microsoft.com/Forums/en-US/fec1efbb-94ff-4fe9-8d69-12e95c48587d/its-maximum-row-size-exceeds-the-allowed-maximum-of-8060-bytes-insert-or-update-to-this-table-will?forum=transactsql
Work around SQL Server maximum columns limit 1024 and 8kb record size
I'm hoping that someone out there has some experience doing this and can share some insights on how to make this efficient. Is there another way to accomplish this that I don't know about?
UPDATE:
Thanks for all the quick replies.
More detail on my scenario: you get a flyer in the mail, and when you turn the flyer over, it lists 50 locations in your county where you could go take a class or attend a meeting. All the details for that flyer need to be in one record so they can map the fields onto the one page. If that county has 50 address/date/time combinations, they need to be included in that one record so they can properly slot the flyer. Think of a giant mail merge where there might only be 100 counties (100 flyers), but each flyer has tons of information.
When the data is actually stored in the database, I'm storing an ID for the specific flyer (MailingID), and each address/date/time combo is its own record. It's only the file they use to merge the details onto the creative piece that has to be denormalized like this.
I haven't been able to find any details on limitations on views. Does a View have the same limitations as a table? Would it work to create a view for them that they can download when they need the data?
"All the details for that flyer need to be in one record so they can map the fields on the one page." That is a questionable assumption. Why can't the data be stored as 50 rows in a second table?
Anyway, if you insist on storing everything in one row, you should probably use XML or JSON. That makes all of these problems go away. SQL Server has great support for XML; you can even generate XML on the fly. So you could properly store the 50 items in a second table and only combine them into one XML value for query purposes.
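As a minimal sketch of that last idea, assuming the normalized tables described in the question (the names dbo.Mailing and dbo.MailingSlot are made up), a correlated subquery with FOR XML PATH folds each mailing's slots into a single XML value per row:

SELECT m.MailingID,
       (SELECT s.Name, s.Address, s.City, s.State, s.SlotDateTime
        FROM dbo.MailingSlot AS s
        WHERE s.MailingID = m.MailingID
        ORDER BY s.SlotNo
        FOR XML PATH('Slot'), ROOT('Slots'), TYPE) AS SlotsXml
FROM dbo.Mailing AS m;

Each result row is still one mailing, but the slot count can vary from 6 to 50 without the column list ever changing, which sidesteps both the 1024-column and 8060-byte limits.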

Concatenating Numbers Together in Oracle APEX using SQL

OK, here is some background: I have created a simple APEX application that is going to replace a handful of static HTML pages, an Access database, and LOTS of manual work. Users use the application to submit work requests to my team, and upon submission of the form, the information is presented back to them in a 'receipt' with a new 'Request #', which they can use like a UPS tracking number to make inquiries about their project. This number is the primary key of the submitted table and is auto-incremented by a sequence. This all works perfectly so far.
My problem is that for auto-incrementing to work, my PK obviously has to be a NUMBER. Again, not really a problem. The issue is that prior to migrating to the APEX tool, our Request #s were formatted as strings of digits eight characters long, with the necessary number of zeros to the left of the actual number. So Request # 789 is actually stored as 00000789 in our Access database. My boss has indicated that this same format needs to be mimicked when the # is displayed in the APEX tool, since that is what our clients are used to seeing.
I need the Request # to continue to be stored as a number so that I can continue to auto-increment, but I need to find some way to prepend the appropriate number of 0's to the number when it is displayed. This will likely need to be done with SQL. I am currently using this simple SQL statement to display the #:
SELECT req_num
FROM proj_apex
WHERE req_num = (SELECT MAX(req_num) FROM proj_apex)
Thoughts? Any APEX or SQL developers have ideas?
select to_char(req_num, 'FM00000000') ...
(The FM modifier suppresses the leading space that Oracle reserves for the number's sign; without it, a format like '00000009' returns ' 00000789' rather than '00000789'.)
http://www.techonthenet.com/oracle/functions/to_char.php
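Applied to the query from the question, a minimal sketch using the same table and column names would be:

SELECT to_char(req_num, 'FM00000000') AS request_no
FROM proj_apex
WHERE req_num = (SELECT MAX(req_num) FROM proj_apex);

An equivalent alternative is LPAD(TO_CHAR(req_num), 8, '0'), which also returns the zero-padded 8-character string.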

Use a Query from the Destination db to limit OLE DB Source task in SSIS 2008

All,
I have a package that I'm building as a data importer so I can copy sets of data from my production environment and develop on another instance.
I have two tables that contain header and detail rows for service tickets. Those service tickets are tied back to orders.
I am pulling the service tickets from a certain time window; however, the originating orders fall outside of the date range that I'm pulling for the tickets.
I want to be able to take the following steps in an SSIS package:
1. Import the header and detail rows within the given date range from prod to dev.
2. Select the relevant order numbers from the dev tables.
3. Use the list of order numbers to import only the relevant orders from prod.
I poked through other answers and couldn't find any that addressed this directly, so I apologize if there is an answer out there and I missed it; I may not have been asking the question correctly. I'm assuming that I would need to pull those order numbers into a temp table or variable in order to apply them as a filter.
As I write this, it crossed my mind to use a join on the source system between the ticket and order tables and still use the date range to limit, but I'm still posting the question to see if anyone has dealt with this before.
Your steps are already fairly clear; are you asking how to actually implement them? It looks like you can do all three steps by using SELECT statements in your data sources:
1. Build a SELECT statement dynamically with the correct dates to use in your data source. The dates could be programmatically generated in a script task, or saved in a database table and populated into variables. Then you copy the data across to the dev system.
2. Run a SELECT statement in the dev system that returns the order numbers, and copy the results to a table in the prod database.
3. Run a SELECT statement in the prod database that joins on the table from step 2, and copy the results back to dev.
An alternative to the table in steps 2 and 3 would be a lookup transformation, but if you have a large number of rows then using a table will probably be faster.
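For illustration, a minimal sketch of the source query in step 3, under assumed names (dbo.Orders for the prod order table, dbo.DevOrderNumbers for the table copied over in step 2):

SELECT o.*
FROM dbo.Orders AS o
JOIN dbo.DevOrderNumbers AS n
    ON n.OrderNumber = o.OrderNumber;

The join against the copied table keeps the filtering on the prod side, so only the relevant orders travel across.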

Storing Data as XML BLOB

At the moment, the team I am working with is looking into the possibility of storing data entered by users from a series of input wizard screens as an XML blob in the database. The main reason for this is that I would like to write the input wizard as a component which can be brought into a number of systems without having to bring a large table structure along with it.
To clarify: if the wizard has 100 input fields (for example), then with a normal relational DB structure there will be a 1-to-1 relationship, so there will be 100 columns in the database. To get this working in another system, I would have to bring the tables, stored procedures, etc. into the new system.
I have a number of reservations about this, but I would like people's opinions.
Thanks.
If those input fields don't need to be updated, or used later to calculate or compute other values, then storing them as XML or JSON is a smart choice.
So for your scenario, it seems like a perfect solution.
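As a minimal sketch of the approach (the question doesn't name an RDBMS, so this assumes SQL Server; the table and element names are made up):

-- One generic table holds every wizard submission, regardless of field count
CREATE TABLE dbo.WizardSubmission (
    SubmissionId int IDENTITY(1,1) PRIMARY KEY,
    SubmittedAt  datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Answers      xml NOT NULL
);

-- The wizard component just serializes its fields into one XML value
INSERT INTO dbo.WizardSubmission (Answers)
VALUES ('<answers><field name="firstName">Ada</field><field name="city">London</field></answers>');

-- Individual fields can still be pulled back out when needed
SELECT SubmissionId,
       Answers.value('(/answers/field[@name="firstName"])[1]', 'nvarchar(100)') AS FirstName
FROM dbo.WizardSubmission;

Dropping this into another system only requires the one table, which is exactly the portability the question is after.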