AutoCAD Block comparison from file - vb.net

I am not asking for the answer, as that seems to upset some people, but does anyone know if the AutoCAD block table keeps a record of the block creation data (such as the creation date, etc.) that can be accessed?
My scenario:
I have a drawing of whatever. I insert a block called BlockA.dwg that was created on Monday. BlockA.dwg is then updated on Wednesday.
I want to be able to run a command that checks if the block in my drawing is the latest version on file.
Can this be done? Has anyone else ever come across this need?

AutoCAD does not store this type of data at the entity level.
You could try adding an xrecord with a date stamp or other data you require. Please note that this will be attached to the block reference in the file, not the original BlockA.dwg. You could check the xrecord date stamp and then compare it to the modified date of the original file.
It's not a bulletproof solution but could get you going in the right direction.
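Building on that idea, here is a rough sketch of the check (in C# against the AutoCAD .NET API, though the same calls work from VB.NET). It assumes the date stamp was written as a single text value into an Xrecord keyed "BLOCK_STAMP" on the block definition's extension dictionary at insert time; the key name and storage format are illustrative, not an AutoCAD convention.

```csharp
// Rough sketch: compare a date stamp stored in an Xrecord on the block
// definition against the source DWG's last-modified time on disk.
using System;
using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;

public static class BlockStamp
{
    public static bool IsBlockCurrent(string blockName, string sourceDwgPath)
    {
        Database db = Application.DocumentManager.MdiActiveDocument.Database;
        using (Transaction tr = db.TransactionManager.StartTransaction())
        {
            var bt = (BlockTable)tr.GetObject(db.BlockTableId, OpenMode.ForRead);
            if (!bt.Has(blockName))
                return false;

            var btr = (BlockTableRecord)tr.GetObject(bt[blockName], OpenMode.ForRead);
            if (btr.ExtensionDictionary.IsNull)
                return false;   // no stamp was ever attached

            var dict = (DBDictionary)tr.GetObject(btr.ExtensionDictionary, OpenMode.ForRead);
            if (!dict.Contains("BLOCK_STAMP"))
                return false;

            var xrec = (Xrecord)tr.GetObject(dict.GetAt("BLOCK_STAMP"), OpenMode.ForRead);
            // Assumes the stamp was stored as a single text value at insert time.
            DateTime stamp = DateTime.Parse((string)xrec.Data.AsArray()[0].Value);

            // Up to date if the stamp is no older than the file on disk.
            return stamp >= System.IO.File.GetLastWriteTime(sourceDwgPath);
        }
    }
}
```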

Related

Passing data from one Pentaho transformation to another in a job?

Fairly straightforward question I think, I just haven't been able to find a clear example. I have a very complex transformation that I'm breaking down into a job. Having never created a job before, I'm struggling to send the data from one transformation to another. I used Copy Rows to Result in the first one and Get Rows From Result in the second one, but I feel like I'm still missing something. When I used Get Rows, I had to specify the row names - there was no sort of Get Fields button. I also can't preview the data in the transformation without running the job and having it save to an Excel file. When I did that, ALL of the fields were in the output file -- instead of just the ones I'd specified in the second transformation.
I've searched through the documentation and tried Googling but I can't find a clear walkthrough just on how to smoothly move data from one transformation to another. Any responses would be appreciated even if it's just pointing me towards something I've overlooked.
Thanks!
The most common way is to use Copy Rows to Result at the end of one KTR and Get Rows From Result as the starting point for the next one. Though you really can't "see" the result while working in the next KTR, what you can do to ease the reading is set up a preview window and leave it open to see all the column names and data.
However, if you only want to pass a few values through to the next KTR, you can use Set Variables as the ending step of the first KTR and capture those variables at any time in the second using Get Variables steps. Don't forget that if you do so, you need to declare the variables in the parent KJB (the job that called the first KTR) with no default value, and the variable scope of the Set Variables step has to be set to "Valid in the parent job".
The best way is to create the KTRs and run/test each one individually. That way you can examine the resulting data and then integrate the individual transformations into the final job.

Comparing text files using Excel

This is not a question about Excel VBA in particular. The question is what approach would be best.
Here is the problem I have. I have 2 text files (one for the current month and one for a prior month) that have account information as the first piece of information, followed by other information that I am not concerned about. Here is an example. The information in bold is the account number, and unfortunately for me, the records are not sorted, i.e. the account numbers could be in any order:
1030887-7 JAMES SMITH 12/15/13 03/05/13 212.50 180+
This kind of information exists in both files. I need to create a report of what is new in the current month file and what was carried over from the prior month. I am not concerned about any that was present in the prior month and not in the current.
I was thinking of reading one set of information into an array, sorting it, and then reading the second one to start the comparison. Can anyone suggest a better method? The text files have almost 20,000 lines in them.
I should mention that the text file I am trying to compare is a report and so has multiple headers, trailers, etc., and that is complicating the comparison. Also, these accounts are by branch, and I have to ensure that I don't mix two of them up. It seems doable but a little complex.
Instead of using Excel, I might suggest using a tool like diff. Please see Modern version of WinDiff? for a discussion.
(Win)Diff will perform a line-by-line comparison and tell you what lines are changed, deleted, or inserted.
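If you do end up scripting the comparison yourself (along the lines of the array idea in the question), a minimal C# sketch could look like the following. It assumes the account number is the first whitespace-delimited token on each detail line and that header/trailer lines don't match the account-number pattern; using a hash set also avoids the sorting step entirely.

```csharp
// Minimal sketch: report which accounts in the current-month file are new
// versus carried over from the prior month. The file names and the account
// number pattern are assumptions; adjust them to the real report format.
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.RegularExpressions;

class AccountCompare
{
    // Matches account numbers like "1030887-7".
    static readonly Regex AccountPattern = new Regex(@"^\d+-\d+$");

    static HashSet<string> LoadAccounts(string path)
    {
        var accounts = new HashSet<string>();
        foreach (var line in File.ReadLines(path))
        {
            var tokens = line.Split(new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries);
            // Skip blank lines and header/trailer lines that don't start
            // with something shaped like an account number.
            if (tokens.Length > 0 && AccountPattern.IsMatch(tokens[0]))
                accounts.Add(tokens[0]);
        }
        return accounts;
    }

    static void Main()
    {
        var prior = LoadAccounts("prior_month.txt");
        var current = LoadAccounts("current_month.txt");

        // Anything in the current month but not the prior month is new.
        foreach (var acct in current)
            Console.WriteLine($"{acct}\t{(prior.Contains(acct) ? "carried over" : "new")}");
    }
}
```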

DDay iCal RecurrenceId

I am actually using the Telerik Kendo scheduler instead of an iCal file, but I need to display an occurrence list in a manner other than the scheduler's agenda list, and the mappings between the schedule object and an iCal object are similar, so I figured I would use DDay.
Creating an iCalendar object and loading in the schedule table, I have the code working to where I can build the occurrence list and filter out events deleted from the series. Where I am having problems is where an event was modified in the series.
With the way the data is stored in the db, when a modified event in a series is created, a new record is created with the RecurrenceId field populated with the original record's event Id.
Seems simple enough: just match against the id fields, right? The problem is that the RecurrenceId in DDay iCal is an IDateTime and not an int. I am not sure how to process it, because we have a lot of events that start/end at the same time, so grabbing the original start/end does not seem viable. Unless I am misunderstanding what is actually happening.
I did figure out a convoluted way to handle a modified event if it is the only modified event for the master event in the occurrence. But this method does not account for an occurrence series that might include the master event and multiple modified events off of the master.
I guess one possible solution would be to build two lists from the scheduler table: one with only the modified recurring events and the other with the master events.
Then, as I am processing the occurrences, I check each occurrence against the modified events list and, if it exists there, update the occurrence accordingly. That seems like a very kludgy solution, however, even if it would address all the modified scenarios. I think I am missing something in the library that would handle this.
As always any help that can be provided on this issue would be appreciated.
Thanks,
Chris
It seems that my test data set was screwed up. The way it appears to work is that the master event will have a recurrence exception, so when you make a call to get occurrences, if an exception is in that range, the master record will be skipped/filtered and the modified event for the series will remain. Additionally, if the event in the occurrence range was deleted, then it will not appear in the occurrence range.
There were some errors in my test data set which made it look like there needed to be an additional filter step.
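For reference, a minimal sketch of an occurrence loop in DDay.iCal that relies on the behavior described above (the library substitutes the modified event for the master occurrence it overrides via RECURRENCE-ID and omits deleted occurrences); the method and output are illustrative only:

```csharp
// Minimal sketch: list occurrences in a range, flagging which came from a
// modified instance (non-null RecurrenceID) versus the master event.
using System;
using DDay.iCal;

class OccurrenceListing
{
    static void ListOccurrences(IICalendar calendar, DateTime start, DateTime end)
    {
        foreach (Occurrence occ in calendar.GetOccurrences(start, end))
        {
            var evt = occ.Source as IEvent;
            if (evt == null)
                continue;

            // A non-null RecurrenceID marks a modified instance that replaced
            // the master event's occurrence at that date/time.
            string kind = evt.RecurrenceID != null ? "modified" : "master";
            Console.WriteLine("{0} {1} ({2})", occ.Period.StartTime, evt.Summary, kind);
        }
    }
}
```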

Get list of columns of source flat file in SSIS

We get weekly data files (flat files) from our vendor to import into SQL, and at times the column names change or new columns are added.
What we have currently is an SSIS package to import the columns that have been defined. Since we've assigned the mapping, SSIS only throws an error when a column is absent. However, when a new column is added (apart from the existing ones), it doesn't get imported at all, as it is not mapped. This is a concern for us.
What we'd like is to get the list of all the columns present in the flat file so that we can check whether any new columns are present before we import the file.
I am relatively new to SSIS, so a detailed help would be much appreciated.
Thanks!
Exactly how to code this will depend on the rules for the flat file layout, but I would approach this by writing a Script Task that reads the flat file with a StreamReader and looks at the columns, which are hopefully named in the first line of the file.
However, about all you can do if the columns have changed is send an alert. I know of no way to dynamically change your data transformation task to accommodate new columns; it will have to be edited to handle them. And frankly, if all you're going to do is send an alert, you might as well just use the error handler to do it and save yourself the trouble of pre-reading the column list.
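For what it's worth, a minimal sketch of that Script Task in C#, assuming the column names sit on the first line of the file and that User::FilePath, User::ExpectedHeader, and User::HeaderMatches are package variables you've created and listed in the task's ReadOnlyVariables/ReadWriteVariables (the names are illustrative):

```csharp
// Goes in the Main() of an SSIS Script Task. Reads the header line and sets
// a boolean variable that a precedence constraint can use to route to an alert.
using System;
using System.IO;

public void Main()
{
    string filePath = Dts.Variables["User::FilePath"].Value.ToString();
    string expected = Dts.Variables["User::ExpectedHeader"].Value.ToString();

    string header;
    using (var reader = new StreamReader(filePath))
    {
        // Column names are assumed to be on the first line of the file.
        header = reader.ReadLine() ?? string.Empty;
    }

    Dts.Variables["User::HeaderMatches"].Value =
        string.Equals(header.Trim(), expected.Trim(), StringComparison.OrdinalIgnoreCase);

    Dts.TaskResult = (int)ScriptResults.Success;
}
```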
I agree with the answer provided by @TabAlleman. SSIS can't natively handle dynamic columns (and neither can your SQL destination).
May I propose an alternative? You can detect a change in headers without using a C# Script Task. One way to do this would be to create a flat file connection that reads the entire row as a single column. Use a Conditional Split to discard anything other than the header row, and save that row to a Recordset object. Any change? Send email.
The "Get Header Row" DataFlow would look like this. Row Number if needed.
The Control Flow level would look like this. Use a ForEach ADO RecordSet object to assign the header row value to an SSIS variable CurrentHeader..
The precedence constraints (the fx icons) with the expressions
@[User::ExpectedHeader] == @[User::CurrentHeader]
@[User::ExpectedHeader] != @[User::CurrentHeader]
determine whether you load data or send email.
Hope this helps!
I have worked for banking clients, and for banks, randomly adding columns to a DB is not possible due to Fed requirements and rules. That said, I get that you're not a Fed-regulated business, so here are some steps.
This is not a code issue but more one of soft skills and working with other teams (yours and your vendor's).
Steps you can take are:
(1) Agree on a solid column structure that you always require, because for newer columns, older data rows will carry NULL.
(2) If a new column is going to be sent by the vendor, you or your team needs to make the DDL/DML changes to the table where the data will be inserted, of course with the correct data type.
(3) Document this change in the data dictionary, as over time you or another team member will do analysis on this data and will want to know the use of each attribute or column.
(4) Long-term, you do not want to keep changing the table structure monthly because one of your many vendors decided to change the way they send you data. Some clients push back very aggressively, others not so much.
If a third-party tool is an option for you, check out CozyRoc's Data Flow Task Plus. It handles variable columns in sources.
SSIS cannot make the columns dynamic. One thing I always do is use a Script Task to read the first and last lines of a file. If a line is not the expected list of CSV columns, I mark the file as errored and continue/fail as required.
Headers are obviously important, but so are footers: files can, through any unknown issue, be partially built, and requesting that the header also be placed at the rear of the file gives you a double check. I also do not know if SSIS can do this dynamically, but it never ceases to amaze me how people add or change the order of columns and assume things will still work.
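A minimal sketch of that first-and-last-line check, assuming the footer is simply a repeated copy of the header as suggested; the streaming read keeps only two lines in memory regardless of file size:

```csharp
// Minimal sketch: validate both the header and the trailing copy of it.
using System;
using System.IO;

static class HeaderFooterCheck
{
    static bool Validate(string path, string expectedHeader)
    {
        string first = null, last = null;
        foreach (var line in File.ReadLines(path))
        {
            if (line.Length == 0) continue;   // ignore blank lines
            if (first == null) first = line;  // remember the first non-empty line
            last = line;                      // always track the latest line
        }

        // Both ends must carry the expected column list; otherwise the file
        // should be marked as errored (e.g. partially built).
        return first == expectedHeader && last == expectedHeader;
    }
}
```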
1- SSIS does not provide dynamic source and destination mapping, but some third-party components, such as Data Flow Task Plus, support this feature.
2- We can achieve the header check using an SSIS Script Task.
3- If the header is correct, proceed with the migration; otherwise fail the package before the DFT executes.
4- Read the header line using the Script Task and store it in an array or list object.
5- Then compare those values to user-defined variables, declared earlier, whose default values hold the expected column names.
6- If the values match exactly, proceed; otherwise fail it.

Create a report that could be one page or two, depending on what field was modified

In an alternate application, the user has the ability to update their address and phone number. When these are changed, three fields will update: Old Value, New Value, and Field Changed. If the Field Changed was the address, I need to create two report pages - one with the old address and one with the new. However, if the Field Changed was the phone number, I only need to create one report page for the current address.
My initial plan was to do a Union that would have one record with the Old Value and another with the New Value. This should work when only the Address has changed; however, it won't when the Phone Number has changed. I assume I need some sort of case statement, but I'm not really sure if this is the right approach. Sorry if the data is a little confusing (I didn't design the data structure; it was provided by our professor's assistant). If you need more information, I'll try to provide it.
I'm not looking for exact SQL, but I am wondering if I'm approaching this the correct way.
What do you mean by a one- or two-page report? Are you outputting to CSV, PDF, XLSX, or something else?
If you need to do this through "pure" SQL, I would recommend a stored procedure that is given a value stating whether it's the address or the phone number being updated. It can then do the update, and you can simply use an IF statement to determine which report to run and return.
I'd recommend handling it programmatically if possible: have your code run the SQL update and then call the appropriate function within your code to get the report you need. You can then easily reuse the code for that report in other ways.
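As a minimal sketch of that programmatic approach (the method names are hypothetical, since the actual update and report code isn't shown here):

```csharp
// Minimal sketch: apply the update, then branch on which field changed to
// decide between the two-page and one-page report.
using System;

class ReportRunner
{
    static void Run(string fieldChanged, string oldValue, string newValue)
    {
        // Persist the change first (e.g. via a stored procedure call).
        UpdateContact(fieldChanged, oldValue, newValue);

        if (fieldChanged == "Address")
        {
            // Two pages: one with the old address, one with the new.
            BuildAddressChangeReport(oldValue, newValue);
        }
        else
        {
            // One page with the current (unchanged) address.
            BuildSingleReport();
        }
    }

    // Hypothetical helpers standing in for your data access and report code.
    static void UpdateContact(string field, string oldVal, string newVal) { }
    static void BuildAddressChangeReport(string oldVal, string newVal) { }
    static void BuildSingleReport() { }
}
```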