LinqPad - Share data between queries

I have two connections to two distinct servers.
I'd like to access data from databases on both servers. It seems impossible (or at least complicated).
I was thinking it may be easier to do some requests on one server, store the results in memory in some variable and then access that variable in another query on the other server.
I tried static variables in MyExtensions as well as
AppDomain.CurrentDomain.SetData("myVariable", results) and AppDomain.CurrentDomain.GetData("myVariable"), but neither works.

I had this same problem, but the solution ended up being much simpler than I had expected.
Since I had a project containing both of the data contexts (on different servers) I wished to query, I added a reference to my project's .dll (press F4 in a query, then Additional References -> browse to your bin folder or wherever it lives). I then added a configuration/connectionStrings section to the query's app.config containing the connection string names my project's contexts were looking for, with the correct connection info.
This gave me access not only to my data contexts, but also to much of the business logic from my project (for instance, repos/DTOs/view models/other transformations). From there, I could grab whatever I needed from DB/server A, put it in my preferred data type (usually a List), and then have it interact with data from DB/server B as needed.
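To make that concrete, here is a minimal sketch of how it might look inside a LINQPad "C# Program" query once the project .dll is referenced. The context classes, entity names and connection strings are placeholders for whatever your project actually exposes (LINQPad's default namespaces and its Dump extension are assumed):
void Main()
{
    // Typed LINQ to SQL contexts from the referenced project; names and connection strings are made up.
    var serverA = new ProjectsDataContext("Data Source=ServerA;Initial Catalog=Projects;Integrated Security=True");
    var serverB = new ArchiveDataContext("Data Source=ServerB;Initial Catalog=Archive;Integrated Security=True");

    // Pull what you need from DB/server A into memory first...
    List<Customer> customers = serverA.Customers.Where(c => c.IsActive).ToList();
    List<int> customerIds = customers.Select(c => c.Id).ToList();

    // ...then let it interact with data from DB/server B (Contains translates to an IN clause).
    List<Order> orders = serverB.Orders
        .Where(o => customerIds.Contains(o.CustomerId))
        .ToList();

    var report = from c in customers
                 join o in orders on c.Id equals o.CustomerId
                 select new { c.Name, o.Total };

    report.Dump();
}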
Hope this helps!

Related

get a list of listItem.fieldValues Client object model Sharepoint 2010

I'm building a SharePoint 2010 export tool for backup purposes (a bit like the file manager from Metavis).
When downloading a file to local disk I need to back up the metadata associated with the document, which I will store in a CSV file. My first approach was to iterate over all listItem.FieldValues, but that doesn't really work, because some field values are complex types, which would needlessly complicate the backup file. Some values even contain line endings, for example "MetaInfo". Furthermore, not all values are needed to restore the content when that might be necessary.
So my idea is to only get the values from the FieldValues collection which are needed to do a functional restore, supplemented with all the user-added metadata.
To do this I want to check each field value against an exclusion list. If it is present, don't back it up. If it is not, it is either user-generated metadata or a value I need, for instance "Author" or "Created".
So my question is, does anyone know of a list of all fieldvalues keys?
Or is there a better approach to my problem?
Thanks
Update: Well, as I was iterating through the FieldValues collection anyway, it was easy to do a dump of all the values to a CSV. Running it once was enough to get all the values. Now all I need to write is an XML file for configuration. This leaves the question: is there a better way of doing this?
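For what it's worth, the exclusion-list filtering described above can be sketched roughly like this (the field names in the exclusion set and the CSV layout are only illustrative):
using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.SharePoint.Client;

class MetadataExporter
{
    // Internal field names we never want in the backup file (illustrative, not a complete list).
    static readonly HashSet<string> Excluded = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
    {
        "MetaInfo", "owshiddenversion", "FileRef", "ScopeId"
    };

    static void Export(ListItem item, TextWriter csv)
    {
        foreach (KeyValuePair<string, object> field in item.FieldValues)
        {
            if (Excluded.Contains(field.Key))
                continue;

            string value = field.Value == null ? "" : field.Value.ToString();
            // Strip line endings so multi-line values (like MetaInfo) don't break the CSV layout.
            value = value.Replace("\r", " ").Replace("\n", " ");
            csv.WriteLine("{0};{1}", field.Key, value);
        }
    }
}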
Filter the list fields by writing the following code:
using System;
using Microsoft.SharePoint.Client;

// Load only the fields you need instead of every value in FieldValues.
// The site URL, list title and field names below are placeholders.
var clientContext = new ClientContext("http://yourserver/sites/yoursite");
List list = clientContext.Web.Lists.GetByTitle("Tasks");
ListItemCollection listItems = list.GetItems(CamlQuery.CreateAllItemsQuery());

clientContext.Load(
    listItems,
    items => items.Include(
        item => item["Title"],
        item => item["Category"],
        item => item["Estimate"]));
clientContext.ExecuteQuery();
Source: http://msdn.microsoft.com/en-us/library/ee857094.aspx#SP2010ClientOM_Creating_Windows_Console_Application
You can create a view with all fields, get the view using the SharePoint object model, get its column names from the collection and filter them as per your requirement.
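Something along these lines should list the columns of such a view using the client object model (the site URL and the list/view titles are placeholders):
using System;
using Microsoft.SharePoint.Client;

class ViewFieldLister
{
    static void Main()
    {
        var clientContext = new ClientContext("http://yourserver/sites/yoursite");
        List list = clientContext.Web.Lists.GetByTitle("Documents");
        View view = list.Views.GetByTitle("All Documents");

        clientContext.Load(view.ViewFields);
        clientContext.ExecuteQuery();

        foreach (string fieldName in view.ViewFields)
            Console.WriteLine(fieldName);   // internal names of the fields shown in the view
    }
}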
I have finished the application. As I wrote in my update I have made a list of all fieldValues by exporting them to a CSV file. After that I made a configuration-file with a boolean 'Backup'. This makes it possible to control which values are to be used when a backup is made.
In retrospect I think a configuration file was not needed. The values used when backing up are so much a part of the whole workings of the program that a configuration file gives an administrator or casual future developer the impression that a simple reconfiguration will fulfill their needs.
I can now see that if the program needs to change due to new requirements, the code has to be changed anyway. So even though setting a value to 'True' will change the output, some other code has to be written as well. If I were to write it again I would probably use constants. This makes it all less dynamic, but still fulfills the needs of the program.
(BTW, a list of all the names of the standard field values would have been nice to start with. I would publish it here, but I don't have access to the file anymore, because I switched jobs recently.)

Accessing tables from different .mdb files

I need to show a grid of saved projects (compare "orders") in a datagrid, where the projects are saved in an Access 2000 database with a similar schema as follows:
ID  Name      Country_ID  Plant_Type
1   'Test'    1           1
2   'Second'  2           2
Let's call the file "Projects.mdb". This is then showed in the datagrid as:
ID  Name      Country    Plant Type
1   'Test'    'Germany'  'Free Range'
2   'Second'  'France'   'Inclined Roof'
where the countries and "Plant Types" are fetched from a different table in a different .mdb file (also Access 2000, call it "Language.mdb", although there is a lot of different background data in it), depending on the current user's language preference. It is unfortunately not an alternative to merge these .mdb's into one file.
To be able to show the datagrid I have so far linked the tables from "Language.mdb" into "Projects.mdb", but this breaks when the project is installed on another computer with the .msi file I created (we'd like to have this easily packaged and installed), because "Language.mdb" doesn't exist at the linked path on the target computer (basically the problem here).
I can come up with the following solutions:
Force all users to install on the same path, so that the links will work (undesirable)
Use connection strings in the query as shown here on MSDN (still trying this out, but I need to work on the details)
Make a post-install script that relinks the tables according to the correct path.
But I think I'm doing something wrong here. As stated above, it is not an option to merge the .mdb files, but other suggestions, such as changing the database schema or anything else (I'm not very experienced with databases), would be very much appreciated.
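Regarding the second option (a connection string inside the query), here is a rough sketch of how it might look from C# with the Jet OLE DB provider; the paths, table and column names are invented for illustration:
using System;
using System.Data.OleDb;

class CrossMdbReport
{
    static void Main()
    {
        // Paths are placeholders; in practice resolve them at runtime, e.g. relative to the install folder.
        string projectsPath = @"C:\MyApp\Projects.mdb";
        string languagePath = @"C:\MyApp\Language.mdb";

        // Jet 4.0 is the provider for Access 2000 format files.
        string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + projectsPath;

        // The [;database=...] prefix lets a Jet query join to a table in another .mdb
        // without a permanent linked table.
        string sql =
            "SELECT p.ID, p.Name, c.CountryName " +
            "FROM Projects AS p INNER JOIN [;database=" + languagePath + "].Countries AS c " +
            "ON c.ID = p.Country_ID";

        using (var conn = new OleDbConnection(connStr))
        using (var cmd = new OleDbCommand(sql, conn))
        {
            conn.Open();
            using (OleDbDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}\t{1}\t{2}", reader[0], reader[1], reader[2]);
            }
        }
    }
}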
To get around the 'different install paths' problem I use code (on every database load) which first looks for any back end databases in the current db folder; if not found, it asks the user to locate the missing .mdb file. Then the code relinks the database(s). Once the dbs have been successfully linked, the database saves the path and checks this path first on subsequent loads.
Well, based on the constraints that you have put on the solution, I would go with either option 2 or 3. There is no elegant solution to this at all.
I would, however, lean towards your third option as a "one time" fix to get the files linked, so that the path between them is known and you are not dynamically adding path information into every query.
Note
I'll just mention (though I'm sure you already know this) that if you are looking at doing something like this, it feels wrong to be doing it with Access, let alone Access 2000, for client deployments at this point. I would strongly recommend also truly evaluating the solution to see if you can either merge to one database, or possibly move to SQL Server Express or something else that you could send off to the user as an installer.
Is the project split, as it should be, to allow a front end on each user's computer? If so, can you not store the path in the front end and only re-link if it changes? Code to re-link tables is quite simple, for the most part. The user can be allowed to browse for the location, and the Connect property can be updated accordingly.

Do you put your database static data into source control? How?

I'm using SQL Server 2008 with Visual Studio Database Edition.
With this setup, keeping your schema in sync is very easy. Basically, there's a 'compare schema' tool that allows me to sync the schema of two databases and/or a database schema with a source-controlled creation script folder.
However, the situation is less clear when it comes to data, which can be of three different kinds:
Static data referenced in the code. Typical example: my users can change their settings, and their configuration is stored on the server. However, there's a system-wide default value for each setting that is used in case the user didn't override it. The table containing those default settings grows as more options are added to the program. This means that when a new feature/option is checked in, the system-wide default setting is usually created in the database as well.
Static data. E.g. a product list populating a dropdown list. The program doesn't rely on the existence of a specific product in the list to work. This can be, for example, a list of unicode-encoded products that should be deployed in production when the new "unicode version" of the program is deployed.
Other data, i.e. everything else (logs, user accounts, user data, etc.)
It seems obvious to me that my third item shouldn't be source-controlled (of course, it should be backed up on a regular basis).
But regarding the static data, I'm wondering what to do.
Should I append the insert scripts to the creation scripts? Or maybe use separate scripts?
How do I (as a developer) warn the people doing the deployment that they should execute an insert statement?
Should I differentiate my two kinds of data? (The first is usually created by a dev, while the second is usually created by a non-dev.)
How do you manage your DB static data ?
I have explained the technique I used in my blog Version Control and Your Database. I use database metadata (in this case SQL Server extended properties) to store the deployed application version. I only have scripts that upgrade from version to version. At startup the application reads the deployed version from the database metadata (lack of metadata is interpreted as version 0, ie. nothing is yet deployed). For each version there is an application function that upgrades to the next version. Usually this function runs an internal resource T-SQL script that does the upgrade, but it can be something else, like deploying a CLR assembly in the database.
There is no script to deploy the 'current' database schema. New installations iterate through all intermediate versions, from version 1 to the current version.
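To make the idea concrete, here is a rough sketch of what such an upgrade loop can look like from application code. The scripts, connection string and the 'SchemaVersion' extended property name are illustrative only, not taken from the blog post:
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

class SchemaUpgrader
{
    // Hypothetical upgrade steps: index i upgrades the database from version i to i + 1.
    static readonly List<string> UpgradeScripts = new List<string>
    {
        /* 0 -> 1 */ "CREATE TABLE Settings (Name sysname PRIMARY KEY, Value nvarchar(256));",
        /* 1 -> 2 */ "UPDATE Settings SET Value = N'bar' WHERE Name = N'DefaultTheme';"
    };

    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=MyApp;Integrated Security=true"))
        {
            conn.Open();
            int deployed = GetDeployedVersion(conn);   // 0 means nothing is deployed yet

            for (int v = deployed; v < UpgradeScripts.Count; v++)
            {
                using (var tx = conn.BeginTransaction())
                {
                    new SqlCommand(UpgradeScripts[v], conn, tx).ExecuteNonQuery();
                    SetDeployedVersion(conn, tx, v + 1);
                    tx.Commit();
                }
            }
        }
    }

    static int GetDeployedVersion(SqlConnection conn)
    {
        const string sql =
            "SELECT CAST(value AS int) FROM sys.extended_properties " +
            "WHERE class = 0 AND name = N'SchemaVersion'";
        object result = new SqlCommand(sql, conn).ExecuteScalar();
        return result == null ? 0 : (int)result;
    }

    static void SetDeployedVersion(SqlConnection conn, SqlTransaction tx, int version)
    {
        // Add the database-level property on first deployment, update it afterwards.
        string proc = version == 1 ? "sp_addextendedproperty" : "sp_updateextendedproperty";
        var cmd = new SqlCommand("EXEC " + proc + " @name = N'SchemaVersion', @value = @v", conn, tx);
        cmd.Parameters.AddWithValue("@v", version);
        cmd.ExecuteNonQuery();
    }
}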
There are several advantages I enjoy with this technique:
It is easy for me to test a new version. I have a backup of the previous version, I apply the upgrade script, then I can revert to the previous version, change the script and try again, until I'm happy with the result.
My application can be deployed on top of any previous version. Various clients have various deployed versions. When they upgrade, my application supports upgrading from any previous version.
There is no difference between a fresh install and an upgrade, it runs the same code, so I have fewer code paths to maintain and test.
There is no difference between DML and DDL changes (your original question): they are all treated the same way, as scripts run to change from one version to the next. When I need to make a change like you describe (change a default), I actually increase the schema version even if no other DDL change occurs. So at version 5.1 the default was 'foo', in 5.2 the default is 'bar', and that is the only difference between the two versions; the 'upgrade' step is simply an UPDATE statement (followed of course by the version metadata change, i.e. sp_updateextendedproperty).
All changes are in source control, part of the application sources (T-SQL scripts mostly).
I can easily get to any previous schema version, e.g. to repro a customer complaint, simply by running the upgrade sequence and stopping at the version I'm interested in.
This approach has saved my skin a number of times and I'm a true believer now. There is only one disadvantage: there is no obvious place to look in source to find 'what is the current form of procedure foo?'. Because the latest version of foo might have been written 2 or 3 versions ago and hasn't changed since, I need to look at the upgrade script for that version. I usually resort to just looking into the database to see what's in there, rather than searching through the upgrade scripts.
One final note: this is actually not my invention. This is modeled exactly after how SQL Server itself upgrades the database metadata (mssqlsystemresource).
If you are changing the static data (adding a new item to the table that is used to generate a drop-down list) then the insert should be in source control and deployed with the rest of the code. This is especially true if the insert is needed for the rest of the code to work. Otherwise, this step may be forgotten when the code is deployed and not so nice things happen.
If static data comes from another source (such as an import of the current airport codes in the US), then you may simply need to run an already documented import process. The import process itself should be in source control (we do this with all our SSIS packages), but the data need not be.
Here at Red Gate we recently added a feature to SQL Data Compare allowing static data to be stored as DML (one .sql file for each table) alongside the schema DDL that is currently supported by SQL Compare.
The idea is that when you want to push changes to your target server, you do a comparison using the scripts as the source data source, which generates the necessary DML synchronization script to update the target. This means you don't have to assume that the target is being recreated from scratch each time. In time we hope to support static data in our upcoming SQL Source Control tool.
David Atkinson, Product Manager, Red Gate Software
I have come across this when developing CMS systems.
I went with appending the static data (the stuff referenced in the code) to the database creation scripts, then a separate script to add in any 'initialisation data' (like countries, initial product population etc).
For the first two steps, you could consider using an intermediate format (e.g. XML) for the data, then using a home-grown tool, or something like CodeSmith, to generate the SQL, and possibly source files as well if (for example) you have lookup tables which relate to enumerations used in the code - this helps enforce consistency.
This has another benefit: if the schema changes, in many cases you don't have to regenerate all your INSERT statements - you just change the tool.
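As a small sketch of that kind of home-grown tool (the XML shape, file name and table layout are invented), generating INSERT statements and a matching enum from one XML definition might look like this:
using System;
using System.Text;
using System.Xml.Linq;

class LookupGenerator
{
    static void Main()
    {
        // Expected shape (invented): <table name="ProductType"><row id="1" name="Free Range"/> ... </table>
        XDocument doc = XDocument.Load("ProductTypes.xml");
        XElement table = doc.Root;
        string tableName = (string)table.Attribute("name");

        var sql = new StringBuilder();
        var enumSrc = new StringBuilder("public enum " + tableName + "\n{\n");

        foreach (XElement row in table.Elements("row"))
        {
            int id = (int)row.Attribute("id");
            string name = (string)row.Attribute("name");

            // One INSERT per row; single quotes are doubled for the SQL literal.
            sql.AppendFormat("INSERT INTO {0} (Id, Name) VALUES ({1}, N'{2}');\n",
                             tableName, id, name.Replace("'", "''"));

            // Matching enum member keeps the code and the lookup table consistent.
            enumSrc.AppendFormat("    {0} = {1},\n", name.Replace(" ", ""), id);
        }
        enumSrc.Append("}\n");

        Console.WriteLine(sql);
        Console.WriteLine(enumSrc);
    }
}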
I really like your distinction of the three types of data.
I agree for the third.
In our application, we try to avoid putting the first kind in the database, because it is duplicated (it has to be in the code, so the database copy is a duplicate). A secondary benefit is that we need no join or query to get access to that value from the code, which speeds things up.
If there is additional information that we would like to have in the database, for example if it can be changed per customer site, we separate the two. Other tables can still reference that data (either by index, e.g. 0, 1, 2, 3, or by code, e.g. EMPTY, SIMPLE, DOUBLE, ALL).
For the second kind, the scripts should be in source control. We separate them from the structure scripts (I think they are typically replaced as time goes by, while the structure keeps accumulating deltas).
How do I (as a developer) warn the people doing the deployment that they should execute an insert statement?
We have a complete procedure for that, and a readme coming with each release, with scripts and so on...
First off, I have never used Visual Studio Database Edition. You are blessed (or cursed) with whatever tools this utility gives you. Hopefully that includes a lot of flexibility.
I don't know that I'd make that big a difference between your type 1 and type 2 static data. Both are sets of data that are defined once and then never updated, barring subsequent releases and updates, right? In which case the main difference is in how or why the data is as it is, and not so much in how it is stored or initialized. (Unless the data is environment-specific, as in "A" for development, "B" for production. This would be "type 4" data, and I shall cheerfully ignore it in this post, because I've solved it using SQLCMD variables and they give me a headache.)
First, I would make a script to create all the tables in the database - preferably only one script, otherwise you can have a LOT of scripts lying about (and find-and-replace when renaming columns becomes very awkward). Then, I would make a script to populate the static data in these tables. This script could be appended to the end of the table script, made its own script, or even made into one script per table, which is a good idea if you have hundreds or thousands of rows to load. (Some folks make a CSV file and then issue a BULK INSERT on it, but I'd avoid that, as it just gives you two files and a complex process [configuring drive mappings on deployment] to manage.)
The key thing to remember is that data (as stored in databases) can and will change over time. Rarely (if ever!) will you have the luxury of deleting your Production database and replacing it with a fresh, shiny, new one devoid of all that crufty data from the past umpteen years. Databases are all about changes over time, and that's where scripts come into their own. You start with the scripts to create the database, and then over time you add scripts that modify the database as changes come along -- and this applies to your static data (of any type) as well.
(Ultimately, my methodology is analogous to accounting: you have accounts, and as changes come in you adjust the accounts with journal entries. If you find you made a mistake, you never go back and modify your entries, you just make subsequent entries to reverse and fix them. It's only an analogy, but the logic is sound.)
The solution I use is to have create and change scripts in source control, coupled with version information stored in the database.
Then, I have an install wizard that can detect whether it needs to create or update the db - the update process is managed by picking appropriate scripts based on the stored version information in the database.
See this thread's answer. Static data from your first two points should be in source control, IMHO.
Edit:
All-in-one or a separate script? It does not really matter, as long as you (the dev team) agree with your deployment team. I prefer separate files, but I can always create an all-in-one.sql from those, in the proper order [Logins, Roles, Users; Tables; Views; Stored Procedures; UDFs; Static Data; (Audit Tables, Audit Triggers)].
How do you make sure they execute it? Well, make it another step in your application/database deployment documentation. If you roll out an application which really needs specific (new) static data in the database, then you might want to perform a DB version check in your application, and you update the DB_VERSION to your new release number as part of that script. Then your application on start-up should check it and report an error if a newer DB version is required (see the sketch after this list).
Dev and non-dev static data: I have never actually seen this case. More often there is real static data, which you might call "dev", which is major configuration, ISO static data, etc. The other type is default lookup data, which is there for users to start with, but they might add more. The mechanism to INSERT this data might be different, because you need to ensure you do not destroy (power-)user-created data.
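To illustrate the start-up check mentioned above, here is a minimal sketch, assuming DB_VERSION is a one-row table maintained by the release scripts (table and column names are placeholders; adjust to however you actually store the version):
using System;
using System.Data.SqlClient;

class DbVersionCheck
{
    const int RequiredDbVersion = 42;   // bumped together with the static-data script for a release

    static void EnsureDatabaseVersion(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // Assumes a single-row table maintained by the deployment scripts.
            var cmd = new SqlCommand("SELECT TOP 1 VersionNumber FROM DB_VERSION", conn);
            int deployed = (int)cmd.ExecuteScalar();

            if (deployed < RequiredDbVersion)
                throw new InvalidOperationException(
                    string.Format("Database is at version {0} but the application requires {1}. " +
                                  "Run the release scripts before starting the application.",
                                  deployed, RequiredDbVersion));
        }
    }
}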

How to store configuration data so that to not copy it during database copy?

There are parameters that I would not want to be transferred from the production environment to the QA system. Stuff like network paths and URLs. The problem is that in ABAP everything is in the database, and when the database is copied to the QA system you have to change those parameters manually. And this is prone to errors.
Is there a way to store configuration information in a way that won't get transferred with the database?
Thanks.
In short: no - at least that would be very unusual in a SAP environment.
If your QA system is set up as a system copy of your production environment (which is the usual path), there are quite a few steps to do to make the system work correctly. This includes some configuration, which can be as simple as file paths such as you mention, but also the addresses and names of "partner systems". For example, one of my customers is a bank, so when copying its production system, it makes triply sure that no activity on the QA side accidentally trickles over to the production side. Some other changes are made as well, for example obscuring people's names and addresses so no mail gets sent accidentally, etc.
There are a few ways to make applying these changes as easy as possible (look for some SAP documentation or books on SAP transport and change management; I had one by Sue McFarland Metzger or so that was quite good). From what I've seen, there is usually a set of transports that changes the configuration, customizing, etc. on the QA system to the appropriate values.
Hope that helps.
You cannot prevent the configuration stored in the database from being copied to the cloned instance. However, you can design the configuration storage in a way that will prevent the copied entries from being used. You should check with your Basis administrators whether they can guarantee that the cloned system will get a new system ID (SID). If this is the case, then you can simply use the SID as a key field in your configuration table. After the system copy, the SID will have changed and the cloned system will no longer access the original entries.
Your question is not clear. Are you talking about standard or custom configuration?
Greetings. Assuming you are storing these paths in a Z table, some shops put sy-sysid (the system ID) in as one of the columns. Maintain entries for all systems in your development system and transport them to production. This becomes painful after a while, so I would only suggest it for information that does not change a lot (file paths might be a good fit).
T.

Ajax autocomplete extender populated from SQL

OK, first let me state that I have never used this control and this is also my first attempt at using a web service.
My dilemma is as follows. I need to query a database to get back a certain column and use that for my autocomplete. Obviously I don't want the query to run every time a user types another word in the textbox, so my best guess is to run the query once, then use that dataset, array, list or whatever to filter for the autocomplete extender...
I am kinda lost, any suggestions?
Why not keep track of the query executed by the user in a session variable, then use that to filter any further results?
The trick to preventing the database from overloading, I think, is really just to limit how frequently the autocompleter is allowed to update; something like once per 2 seconds seems reasonable to me.
What I would do is this: store the current list returned by the query for word A server-side and tie it to a session variable. This should be basically the entire list, I would think. Then, for each new word typed, as long as the original word A still applies, you can filter the session data and spit out the filtered results without having to query again. So basically, only query again when word A changes.
I'm using "session" in a PHP sense, you may be using a different language with different terminology, but the concept should be the same.
This depends upon how transactional your data store is. Obviously if you are looking up US states (a data collection that would realistically not change through the life of the application) then I would cache either a System.Collections.Generic.List<> or, if you wanted, a DataTable.
You could easily set up a cache of the data you wish to query that is dependent upon an XML file or database, so that your extender always queries the data object cast from the cache, and the cache object is only updated when the data source changes.
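As a rough sketch of that idea using the ASP.NET cache with a file dependency (the cache key, file path and XML loader are invented):
using System;
using System.Collections.Generic;
using System.Web;
using System.Web.Caching;

public static class SuggestionCache
{
    // Returns the cached suggestion list, reloading it only when the backing XML file changes.
    public static List<string> GetSuggestions()
    {
        var suggestions = HttpRuntime.Cache["Suggestions"] as List<string>;
        if (suggestions == null)
        {
            string path = HttpContext.Current.Server.MapPath("~/App_Data/Suggestions.xml");
            suggestions = LoadFromXml(path);   // your own loader; stubbed out here

            HttpRuntime.Cache.Insert(
                "Suggestions",
                suggestions,
                new CacheDependency(path));    // entry is evicted when the file changes
        }
        return suggestions;
    }

    static List<string> LoadFromXml(string path)
    {
        // Placeholder: parse the XML however suits your schema.
        return new List<string>();
    }
}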
RAM is cheap and SQL is harder to scale than IIS, so cache everything in memory:
your entire data source, if it is not too large to load in a reasonable time,
precalculated data,
autocomplete web service responses.
Depending on your desired autocomplete behaviour and performance, you may want to precalculate data and create redundant structures optimized for reading. Make use of structures like SortedList (when you need something like select top x ... where z like @query + '%'), Hashtable, ...
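For illustration, a minimal sketch of such a read-optimized prefix lookup (using a sorted List<string> plus binary search; the same idea applies to SortedList):
using System;
using System.Collections.Generic;

class PrefixIndex
{
    readonly List<string> sorted;

    public PrefixIndex(IEnumerable<string> values)
    {
        sorted = new List<string>(values);
        sorted.Sort(StringComparer.OrdinalIgnoreCase);
    }

    // Roughly the in-memory equivalent of: SELECT TOP (max) ... WHERE z LIKE @query + '%'
    public List<string> Lookup(string prefix, int max)
    {
        var results = new List<string>();
        int index = sorted.BinarySearch(prefix, StringComparer.OrdinalIgnoreCase);
        if (index < 0)
            index = ~index;   // first element >= prefix

        while (index < sorted.Count && results.Count < max &&
               sorted[index].StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
        {
            results.Add(sorted[index]);
            index++;
        }
        return results;
    }
}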
While caching everything is certainly a good idea, your question about which data structure to use is an issue that wasn't fully answered here.
The best data structure for an autocomplete extender is a Trie.
You can find a good .NET article and code here.
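For reference (this is not the code from the linked article, just a bare-bones sketch of the idea), a trie supporting prefix lookups might look like this:
using System;
using System.Collections.Generic;

class TrieNode
{
    public readonly Dictionary<char, TrieNode> Children = new Dictionary<char, TrieNode>();
    public bool IsWord;
}

class Trie
{
    readonly TrieNode root = new TrieNode();

    public void Add(string word)
    {
        TrieNode node = root;
        foreach (char c in word)
        {
            TrieNode next;
            if (!node.Children.TryGetValue(c, out next))
                node.Children[c] = next = new TrieNode();
            node = next;
        }
        node.IsWord = true;
    }

    public IEnumerable<string> StartingWith(string prefix)
    {
        TrieNode node = root;
        foreach (char c in prefix)
            if (!node.Children.TryGetValue(c, out node))
                yield break;               // no stored word has this prefix

        foreach (string suffix in Walk(node))
            yield return prefix + suffix;
    }

    // Enumerates all suffixes below a node, depth first.
    static IEnumerable<string> Walk(TrieNode node)
    {
        if (node.IsWord)
            yield return "";
        foreach (KeyValuePair<char, TrieNode> child in node.Children)
            foreach (string rest in Walk(child.Value))
                yield return child.Key + rest;
    }
}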