Microsoft Dynamics CRM 2011 with MSSQL transactional replication and CDC

I want to know if anyone has a suggestion for getting data from another database into Dynamics CRM.
We have suggested using CDC and transactional replication.
Any suggestions?

There are a myriad of ways this might be done. The method I use most often is:
1. Create a custom entity in CRM to store the data
2. Write a console app that
a. Queries the source
b. Cleans inbound data as appropriate
c. Connects to CRM
d. Creates, Updates, Deactivates or Deletes entities as required
3. Log the activity
The most important steps for me are "cleans inbound data" and "log the activity". Because I am writing the code, I have complete control over what is written and where. Using some import utility usually leaves me frustrated sooner or later; in my experience, writing a custom import utility almost always pays off. A rough sketch of the loop follows.
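As a very rough sketch of that shape (not my exact code), here is a minimal Python version. The pyodbc source query is ordinary, but the crm object, the new_contactsource entity, and its field names are hypothetical placeholders invented for this example; with CRM 2011 the real connection would normally be the .NET OrganizationService from the SDK.

    # Sketch of the import loop: query the source, clean, upsert into CRM, log.
    # The crm object, entity name, and field names are hypothetical placeholders.
    import logging
    import pyodbc

    logging.basicConfig(filename="crm_import.log", level=logging.INFO)

    def clean(row):
        """Normalize inbound data before it touches CRM."""
        return {
            "new_name": (row.full_name or "").strip(),
            "new_email": (row.email or "").strip().lower(),
            "new_sourceid": str(row.id),
        }

    def run(crm):  # crm: hypothetical client with upsert(entity, key, attributes)
        src = pyodbc.connect("DSN=SourceDb;Trusted_Connection=yes")
        for row in src.execute("SELECT id, full_name, email FROM dbo.Customers"):
            record = clean(row)
            if not record["new_email"]:
                logging.warning("skipped %s: no email", record["new_sourceid"])
                continue  # dirty rows are logged, never silently dropped
            crm.upsert("new_contactsource", record["new_sourceid"], record)
            logging.info("upserted %s", record["new_sourceid"])

The point is the shape: every row passes through clean(), and every outcome lands in the log.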
Having said that, there is plenty of code and some decent third-party tools available. You might look at Scribe.
Performing Bulk Operations Using Scribe Insight Adapter for Dynamics CRM 2011
http://www.powerobjects.com/blog/2013/08/20/performing-bulk-operations-scribe-insight-adapter-dynamics-crm-2011/


Are there any potential issues caused by enabling change tracking in SQL Server 2012 on a CRM 2013 database?

We are attempting to use a service called Stitch to pull some sections of data out of our CRM DB for reporting purposes. To do so, I would need to enable change tracking on our CRM database and on the tables I want to pull the information from.
Generally, I shy away from doing anything but reads on the CRM database, as I have read of various issues others have encountered by performing tasks directly in SQL on a CRM DB rather than going through CRM.
I would imagine the only consequence should be slightly larger logs while the change tracking retention period is in effect, but I was trying to determine whether anyone knew of a reason not to enable change tracking on a CRM database, or of other potential consequences.
Thanks
Performance hit, first and foremost. Also, I believe the settings applied to enable change tracking are at risk of being overwritten when you update/upgrade your CRM environment.
Since it won't be changing the base schema or performing DML on the base tables, it should be fine from a CRM functionality perspective.
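For reference, enabling it is two plain T-SQL statements; here is a minimal sketch run through pyodbc, with made-up database/table names and retention settings, and with the caveat above that an update/upgrade may not preserve them:

    # Sketch: enable change tracking at the database level, then per table.
    # Database name, table name, and retention period are examples only.
    import pyodbc

    conn = pyodbc.connect("DSN=CrmSql;Trusted_Connection=yes", autocommit=True)
    conn.execute("""
        ALTER DATABASE [Contoso_MSCRM]
        SET CHANGE_TRACKING = ON
        (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)
    """)
    conn.execute("""
        ALTER TABLE dbo.AccountBase
        ENABLE CHANGE_TRACKING
        WITH (TRACK_COLUMNS_UPDATED = ON)
    """)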

Better test reporting

I'm looking for some help designing a better summary report. Right now we publish and send everything (execution % by module, defects, etc.) in an Excel file, and I was hoping we could use that Excel data to generate a live dashboard accessible by a URL.
To add, the execution data comes from QTest and the defects from JIRA. At this point we are even OK with filling in the data in Excel manually and using that as the source for any reporting tool.
If a free tool is available, even better.
Any leads, help, or feedback is appreciated.
Thanks,
MD
Sounds like you need Microsoft's Power BI. We've done a lot of reporting from JIRA using this free tool (Desktop). If you need to share it with others in "real time", you'll prefer the online experience at about $10/user/month. But if you're looking to stay free, you can simply share the Power BI file with your stakeholders.
I recommend AGAINST using the built-in JIRA app. It seems to want to pull back all your issues. Instead, use a REST API call like this:
https://domain/rest/api/2/search?jql=filter=22605&fields=id,key,summary,description
If you get more issues back than your issue search is configured for, the pagination can be a little tricky (a paging sketch follows). Also, multiple values in a custom field need special handling.
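Here is a minimal Python sketch of that paging loop against JIRA's REST v2 search endpoint; the domain, filter id, and credentials are placeholders:

    # Sketch: page through a JIRA filter via the REST search endpoint.
    # Domain, filter id, and credentials are placeholders.
    import requests

    URL = "https://domain/rest/api/2/search"
    PARAMS = {"jql": "filter=22605", "fields": "id,key,summary,description"}
    AUTH = ("user", "api-token")

    def fetch_all_issues():
        issues, start_at = [], 0
        while True:
            resp = requests.get(
                URL, params={**PARAMS, "startAt": start_at, "maxResults": 100},
                auth=AUTH)
            resp.raise_for_status()
            page = resp.json()
            issues.extend(page["issues"])
            start_at += len(page["issues"])
            # 'total' in the response tells you when you have seen everything
            if not page["issues"] or start_at >= page["total"]:
                return issues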
Or, if you're on-premise and know your JIRA DB, direct SQL is an efficient way to go.
We use both mechanisms (REST and SQL). SQL lets us add logic in the view of the data that JIRA itself doesn't report on easily (parent-child-subchild relationships and roll-ups of effort, story points, etc.). A SQL sketch follows.
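For flavor, here is the direct-SQL route, assuming a JIRA instance hosted on SQL Server; jiraissue and project are real JIRA tables, but column details shift between versions, so verify against your own schema first:

    # Sketch: rebuild issue keys straight from the JIRA database.
    # Schema details vary by JIRA version; check your own DB first.
    import pyodbc

    conn = pyodbc.connect("DSN=JiraDb;Trusted_Connection=yes")
    rows = conn.execute("""
        SELECT p.pkey + '-' + CAST(ji.issuenum AS varchar(10)) AS issue_key,
               ji.summary
        FROM jiraissue ji
        JOIN project p ON p.id = ji.project
    """).fetchall()
    for issue_key, summary in rows:
        print(issue_key, summary)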
The best part of the Power BI solution is that you should be able to integrate the data from JIRA and your test tool. (We pull from JIRA and our time-tracking system.)

How to model data flows with a SQL backend?

My question is not about specific code. I am trying to automate a business data-governance flow using a SQL backend. I have put a lot of time into searching the internet and reaching out to people for the right direction, but unfortunately I have not yet found anything promising, so I have a lot of hope that someone here can save me from a big headache.
Assume that we have a flow (semi-static/dynamic) for our business process. Different departments own portions of the data. We need to take different actions during the flow, such as data entry, data validation, data export, approvals, rejections, notes, etc., and also automatically define deadlines, create reports of overdue tasks and the people accountable for them, and so on.
I guess the data management part would not be extremely difficult, but how to write the application (the code) that runs the flow (a workflow engine) is where I struggle. Should I use triggers, or should I write code that frequently runs queries to push completed steps to the next step? How can I use SQL tables to keep track of the flow?
If someone could give me some hints on this matter, I would greatly appreciate it.
I would suggest using SQL Server Integration Services (SSIS). You can easily manage the scripts and workflow based on some lookup selections, and you can also schedule the SSIS package to run on a timely basis to trigger and do the job.
It's a hard task to implement an application server on SQL Server, and it would be a very vendor-dependent solution. The best way, I think, is to use SQL Server as the data storage and some application server for the business logic over that storage; a sketch of that pattern follows.
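To make that concrete, here is a minimal sketch of one common pattern: the workflow state lives in a SQL table, and a small polling worker in the application layer pushes completed steps forward. Every table, column, and step name here is invented for illustration:

    # Sketch of a polling-based workflow engine: state in SQL, transition
    # logic in the application. All names here are invented examples.
    import time
    import pyodbc

    # One row per task; status moves pending -> done -> archived.
    # CREATE TABLE dbo.workflow_task (
    #     task_id    int IDENTITY PRIMARY KEY,
    #     case_id    int NOT NULL,
    #     step_name  varchar(50) NOT NULL,  -- 'data_entry', 'validation', ...
    #     deadline   datetime NOT NULL,
    #     status     varchar(20) NOT NULL DEFAULT 'pending')

    NEXT_STEP = {"data_entry": "validation", "validation": "approval",
                 "approval": None}

    def poll(conn):
        """Push every completed step to its successor."""
        done = conn.execute("SELECT task_id, case_id, step_name "
                            "FROM dbo.workflow_task "
                            "WHERE status = 'done'").fetchall()
        for task_id, case_id, step in done:
            nxt = NEXT_STEP.get(step)
            if nxt:
                conn.execute(
                    "INSERT INTO dbo.workflow_task (case_id, step_name, deadline) "
                    "VALUES (?, ?, DATEADD(day, 3, GETDATE()))", case_id, nxt)
            conn.execute("UPDATE dbo.workflow_task SET status = 'archived' "
                         "WHERE task_id = ?", task_id)
        conn.commit()

    conn = pyodbc.connect("DSN=GovernanceDb;Trusted_Connection=yes")
    while True:  # the 'frequently run queries' option from the question
        poll(conn)
        time.sleep(60)

Overdue-task reports then fall out of a plain query against deadline < GETDATE().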

Best Approach for syncing Azure SQL Database

Right now, our application has only one Web Site instance, along with a SQL Database, deployed in the Azure US datacenter. We are looking at deploying more Web Site instances in other datacenters, such as APAC and Europe. There would still be a local SQL Database for each of those web site instances. We would like end users to be able to fail over to another instance if their registered instance is not available; for example, if the US web site instance is down, we could fail users over to the Europe instance. For this, we would need to synchronize the local SQL Databases across all datacenters: US, Europe, and APAC.
So we are looking for the best approach to implement database synchronization for Azure SQL Database. Here is what we have found so far:
Azure Data Sync looks like the perfect choice, since it is available right away in the Azure Management Portal and would be up and running with some simple configuration. However, there seem to be a couple of catches. The feature has been in preview for about 2 years now (see this link, with the following quote from a comment):
SQL Data Sync has been in preview for over 2 years and the last update was December 2012. Has this been abandoned? Is this a technology we should encourage our clients to use? There absolutely needs to be an ability to synchronize data between a local SQL DB and Azure but Microsoft seems to have dropped this and I'm leery of putting a client on this only to find that the plug has been pulled. You owe it to your users to give us some information
I also saw the post Azure data sync not syncing all databases on SO; it seems this feature is a second-class citizen at Azure and MS doesn't really pay sufficient attention to it, so I am worried about how good it is.
Microsoft Sync Framework seems to be a more generic sync framework, more suitable for client-server sync than for sync among server databases. Plus, it is not as simple as the SQL Data Sync above, which is available just by configuration in Azure.
Any other suggestions for SQL database sync on Azure? It would be really appreciated if you could share your experience here.
Thanks very much in advance for your insight.
Update:
Azure Data Sync is built on the Microsoft Sync Framework: see this link, and the quote:
Microsoft SQL Data Sync is a cloud-based data synchronization service built on the Microsoft Sync Framework technologies.
Since no one is answering this question, I am going to do it myself. Based on some of the latest information, Azure Data Sync is buggy and cannot be used for production at this point. I guess that's why it has never moved out of preview, even after around 2 years. There is no other good approach for handling Azure SQL Database sync at this point, unless you want to build something yourself.
You can use Red Gate's SQL Data Compare to sync your Azure SQL DB with your local DB.

What are ways to transfer tables from Oracle to SQL Server

I've been searching the internet for this question:
What are ways to transfer data and tables on a daily basis from Oracle Hyperion to SQL Server 2000?
I am an intern at a company and am trying to figure out possible ways to do this. Any help or a pointer in the right direction is greatly appreciated.
This is going to depend a lot on specifics. Here are just a few possible solutions:
DTS
DTS is packaged with SQL 2000 and is made for this kind of task. If written correctly, your DTS package can have good error handling and be rerunnable/reusable.
SSIS
SSIS is packaged with SQL 2005 and above, but you can connect it to other databases. It's basically a better version of DTS. (Technically it's radically different from DTS, but it has a lot of the same functionality.)
Linked Servers
From SQL 2000 you should be able to connect directly to your Oracle database as a linked server. In the pros column, this kind of direct access can be easy to work with if you don't have other technical skills such as DTS or SSIS, but it can be complex to get the initial set-up right and there may be security concerns/issues. A sketch of the setup follows.
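A minimal sketch of that setup; the server name, TNS alias, login, and sample query are placeholders (pyodbc is used here only to keep the examples consistent, as these are ordinary T-SQL statements):

    # Sketch: register an Oracle linked server, then query it via OPENQUERY.
    # Server name, TNS alias, login, and the sample query are placeholders.
    import pyodbc

    conn = pyodbc.connect("DSN=Sql2000;Trusted_Connection=yes", autocommit=True)
    conn.execute("""
        EXEC sp_addlinkedserver @server = N'ORA_SRC',
             @srvproduct = N'Oracle',
             @provider = N'OraOLEDB.Oracle',
             @datasrc = N'TNS_ALIAS'
    """)
    conn.execute("""
        EXEC sp_addlinkedsrvlogin @rmtsrvname = N'ORA_SRC',
             @useself = 'False', @rmtuser = N'scott', @rmtpassword = N'secret'
    """)
    rows = conn.execute(
        "SELECT * FROM OPENQUERY(ORA_SRC, "
        "'SELECT owner, table_name FROM all_tables')").fetchall()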
Build Your Own
Depending on what other technologies you use, you can build your own application to do the ETL (Extract/Transform/Load, which is what you're doing). This could be in .NET, Java, etc. In the pros column, you can use something with which you're familiar, but there's a big downside here: most of the low-level work is already done in tools like DTS/SSIS, so why reinvent the wheel?
BCP
You can simply extract the data from Oracle as .csv files (or some other format) and then import them using SQL Server's bulk copy program (bcp). This can be fast, but there aren't many bells and whistles to go with it. If this is a one-time thing with just a few tables, though, this is probably the easiest and fastest way to do it. An import sketch follows.
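The import half can be a single BULK INSERT, the T-SQL cousin of the bcp command line; the file path, table name, and delimiters below are examples only:

    # Sketch: load an extracted CSV into SQL Server with BULK INSERT.
    # Path, table name, and delimiters are examples only.
    import pyodbc

    conn = pyodbc.connect("DSN=Sql2000;Trusted_Connection=yes", autocommit=True)
    conn.execute(r"""
        BULK INSERT dbo.OracleExtract
        FROM 'C:\imports\oracle_extract.csv'
        WITH (FIELDTERMINATOR = ',',
              ROWTERMINATOR = '\n',
              FIRSTROW = 2)  -- skip the header row
    """)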
Third Party Applications
There are a slew of ETL applications already written out there (Data Import, Data Slave, etc.). They will usually provide wizards and one-click solutions (maybe a few more than one click), but they are also going to cost a bit of extra money.
EDIT:
Given your latest comment, I would probably go with a DTS package scheduled in SQL Server Agent to run daily. You can add in error handling and have the system email/text/call someone if there's ever an issue (or do positive-case reporting, i.e. send a message when it's successful, so that someone knows there's a problem if they don't get a message each day). A sketch of the job setup follows.
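A sketch of what that Agent job might look like; the job, package, and server names are placeholders, and on SQL 2000 a job step typically launches a DTS package through dtsrun:

    # Sketch: create a daily SQL Agent job that runs a DTS package via dtsrun.
    # Job name, package name, and server name are placeholders.
    import pyodbc

    conn = pyodbc.connect("DSN=Sql2000;Trusted_Connection=yes", autocommit=True)
    for stmt in [
        "EXEC msdb.dbo.sp_add_job @job_name = N'Daily Oracle load'",
        """EXEC msdb.dbo.sp_add_jobstep @job_name = N'Daily Oracle load',
               @step_name = N'Run DTS package', @subsystem = N'CmdExec',
               @command = N'dtsrun /S MYSERVER /E /N "OracleLoad"'""",
        """EXEC msdb.dbo.sp_add_jobschedule @job_name = N'Daily Oracle load',
               @name = N'Daily 2am', @freq_type = 4, @freq_interval = 1,
               @active_start_time = 020000""",
        "EXEC msdb.dbo.sp_add_jobserver @job_name = N'Daily Oracle load'",
    ]:
        conn.execute(stmt)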
In our company we use ADO.NET for the same task.
We created a connection to Oracle, pulled all the data, and then recreated it in SQL Server.
You could write DTS packages to copy the data, and schedule them to run via SQL Server Agent.
See DTS Overview for information on DTS packages.
Here's a tutorial on creating a DTS package: Creating DTS Packages With SQL Server 2000
Oracle Hyperion is a suite of products, largely unrelated to Oracle's database product. I expect you are referring to a product such as Hyperion Financial Management or Hyperion Strategic Finance. These products have APIs that can be consumed using COM Interop or web services. The data can be extracted from the internal multidimensional database by analyzing the database metadata, creating dimension trees, and then using that information to create selections that represent subcubes within the database, allowing you to get or set cell data.
I don't know what your level of knowledge of multidimensional databases is, but unless it is substantial you may find the task pretty hard. You also need to get a handle on the particular product's API.
My company specializes in these kinds of activities, and we have components for this kind of thing. Drop me a line on my blog if you need further advice.
danielvaughan.org
Cheers,
Daniel
I don't know anything about Hyperion, but SQL Server 2000 is very old and may not have a driver to pull data from Hyperion if that version is newer than the year 2000. You may need to look at whether there is a way to push the data from Hyperion rather than pull it into SQL Server 2000. One way I have done this in the past is to create a pipe-delimited text file from the database that originally has the data and place it in a processing directory. I do know that DTS will process a pipe-delimited text file. So if you can't find a driver to process this data directly, consider whether you can push it out to a file and then process that. You will have to schedule a time gap between the job on Hyperion that creates the file and the DTS package job, but if you are only doing it once a day, that's probably not a problem. A sketch of the file-drop script follows.
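The push side can be as simple as a scheduled script on the source system that drops the pipe-delimited file into the processing directory; the DSN, query, and output path here are placeholders:

    # Sketch: dump a source query to a pipe-delimited file for DTS to pick up.
    # DSN, query, and output path are placeholders.
    import csv
    import pyodbc

    conn = pyodbc.connect("DSN=SourceDb;UID=user;PWD=secret")
    cursor = conn.execute("SELECT account, period, amount FROM source_table")
    with open(r"\\etlserver\processing\extract.txt", "w", newline="") as f:
        writer = csv.writer(f, delimiter="|")
        writer.writerow([col[0] for col in cursor.description])  # header row
        writer.writerows(cursor.fetchall())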