Executing an SSIS package using command-line arguments

I want to execute an SSIS package using command-line arguments, the way you can when running a C# project, and I want to use those arguments inside the package.
CmdLineArguments: INTRADAY OPT OPTION_DAILY_INTRADAY_VOL12/02/2014
I then want to use these different values to do some operations.
What I have tried: I searched online and found that you have to run something like the following:
dtexec /file Package.dtsx /Set \Package.Variables[User::UniversFileAddress].Properties[Value];" INTRADAY OPT OPTION_DAILY_INTRADAY_VOL12/02/2014"
but it has no effect on the execution; it's not working for me. Maybe my concept is wrong.
Whereas I want to pass the arguments like this:
INTRADAY OPT OPTION_DAILY_INTRADAY_VOL12/02/2014
and use them in a Script Task.
How can I do that?

There are many ways, actually, but I found the following way suitable for my application.
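(For what it's worth, the dtexec route can work too; the whole /Set pair usually needs to be quoted when the value contains spaces, something like dtexec /File Package.dtsx /Set "\Package.Variables[User::UniversFileAddress].Properties[Value];INTRADAY OPT OPTION_DAILY_INTRADAY_VOL12/02/2014", but the console application below suited me better.)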
Create a console application, call the SSIS package from that application, and set the variable values there.
Here is the code:
using System;
using Microsoft.SqlServer.Dts.Runtime;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            // Expect four arguments; copy whatever was actually passed in.
            string[] argsArray = new string[] { "", "", "", "" };
            if (args == null || args.Length == 0)
            {
                Console.WriteLine("No arguments were passed.");
            }
            else
            {
                for (int i = 0; i < args.Length && i < argsArray.Length; i++)
                {
                    argsArray[i] = args[i];
                }
            }

            string pkgLocation = "h:\\My Documents\\Visual Studio 2008\\Projects\\Try_Project_To_Convert_Fro_Asia_Euro_US\\Try_Project_To_Convert_Fro_Asia_Euro_US\\Package.dtsx";
            Application app = new Application();
            Package pkg = app.LoadPackage(pkgLocation, null);

            // Build the file name from the third argument plus the date in the
            // fourth argument, converted from dd/mm/yyyy to yyyymmdd.
            pkg.Variables["User::fileName"].Value = argsArray[2]
                + argsArray[3].Substring(6, 4)
                + argsArray[3].Substring(3, 2)
                + argsArray[3].Substring(0, 2);

            DTSExecResult results = pkg.Execute();
            if (results == DTSExecResult.Failure)
            {
                string err = "";
                foreach (DtsError dtsError in pkg.Errors)
                {
                    err += dtsError.Description;
                }
                Console.WriteLine(err);
            }
            if (results == DTSExecResult.Success)
            {
                Console.WriteLine("Package executed successfully.");
            }
        }
    }
}
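For reference, with the argument handling above you would invoke the compiled application with four separate arguments, for example: ConsoleApplication1.exe INTRADAY OPT OPTION_DAILY_INTRADAY_VOL 12/02/2014 (the executable name depends on your project). Given the date 12/02/2014, this sets User::fileName to OPTION_DAILY_INTRADAY_VOL20140212.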
Please let me know if you have any problems.

Related

Unable to delete documents in a Documentum application using DFC

I have written the following code using the approach given in the EMC DFC 7.2 Development Guide. With this code I'm able to delete only 50 documents, even though there are more records. Before deletion I take a dump of the object IDs. I'm not sure if there is any limit with IDfDeleteOperation. As this deletes only 50 documents, I tried the DQL delete command, but even there it is limited to 50 documents. I also tried the destroy() and destroyAllVersions() methods that the document has, and even that didn't work for me. I have written everything in the main method.
import com.documentum.com.DfClientX;
import com.documentum.com.IDfClientX;
import com.documentum.fc.client.*;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfId;
import com.documentum.fc.common.IDfLoginInfo;
import com.documentum.operations.IDfCancelCheckoutNode;
import com.documentum.operations.IDfCancelCheckoutOperation;
import com.documentum.operations.IDfDeleteNode;
import com.documentum.operations.IDfDeleteOperation;
import java.io.BufferedWriter;
import java.io.FileWriter;
public class DeleteDoCAll {
public static void main(String[] args) throws DfException {
System.out.println("Started...");
IDfClientX clientX = new DfClientX();
IDfClient dfClient = clientX.getLocalClient();
IDfSessionManager sessionManager = dfClient.newSessionManager();
IDfLoginInfo loginInfo = clientX.getLoginInfo();
loginInfo.setUser("username");
loginInfo.setPassword("password");
sessionManager.setIdentity("repo", loginInfo);
IDfSession dfSession = sessionManager.getSession("repo");
System.out.println(dfSession);
IDfDeleteOperation delo = clientX.getDeleteOperation();
IDfCancelCheckoutOperation cco = clientX.getCancelCheckoutOperation();
try {
String dql = "select r_object_id from my_report where folder('/Home', descend)";
IDfQuery idfquery = new DfQuery();
IDfCollection collection1 = null;
try {
idfquery.setDQL(dql);
collection1 = idfquery.execute(dfSession, IDfQuery.DF_READ_QUERY);
int i = 1;
while(collection1 != null && collection1.next()) {
String r_object_id = collection1.getString("r_object_id");
StringBuilder attributes = new StringBuilder();
IDfDocument iDfDocument = (IDfDocument)dfSession.getObject(new DfId(r_object_id));
attributes.append(iDfDocument.dump());
BufferedWriter writer = new BufferedWriter(new FileWriter("path to file", true));
writer.write(attributes.toString());
writer.close();
cco.setKeepLocalFile(true);
IDfCancelCheckoutNode cnode;
if(iDfDocument.isCheckedOut()) {
if(iDfDocument.isVirtualDocument()) {
IDfVirtualDocument vdoc = iDfDocument.asVirtualDocument("CURRENT", false);
cnode = (IDfCancelCheckoutNode)cco.add(iDfDocument);
} else {
cnode = (IDfCancelCheckoutNode)cco.add(iDfDocument);
}
if(cnode == null) {
System.out.println("Node is null");
}
if(!cco.execute()) {
System.out.println("Cancel check out operation failed");
} else {
System.out.println("Cancelled check out for " + r_object_id);
}
}
delo.setVersionDeletionPolicy(IDfDeleteOperation.ALL_VERSIONS);
IDfDeleteNode node = (IDfDeleteNode)delo.add(iDfDocument);
if(node == null) {
System.out.println("Node is null");
System.out.println(i);
i += 1;
}
if(delo.execute()) {
System.out.println("Delete operation done");
System.out.println(i);
i += 1;
} else {
System.out.println("Delete operation failed");
System.out.println(i);
i += 1;
}
}
} finally {
if(collection1 != null) {
collection1.close();
}
}
} catch(Exception e) {
e.printStackTrace();
} finally {
sessionManager.release(dfSession);
}
}
}
I don't know where I'm making a mistake; every time I try, the program stops at the 50th iteration. Can you please help me delete all the documents in a proper way? Thanks a lot!
First, select all the document IDs into a List<IDfId>, for example, and close the collection. Don't do other expensive operations while the collection is open, because you are unnecessarily blocking it.
This is why it deleted only 50 documents: you had one main collection open, each execution of the delete operation opened another collection, and it probably reached some limit. So, as I said, it is better to consume the collection first and then work further with the data:
List<IDfId> ids = new ArrayList<>();
try {
query.setDQL("SELECT r_object_id FROM my_report WHERE FOLDER('/Home', DESCEND)");
collection = query.execute(session, IDfQuery.DF_READ_QUERY);
while (collection.next()) {
ids.add(collection.getId("r_object_id"));
}
} finally {
if (collection != null) {
collection.close();
}
}
After that, you can iterate through the list and do all the actions you need with each document. But don't execute the delete operation in each iteration; that is inefficient. Instead, add all the documents into one operation and execute it once at the end.
IDfDeleteOperation deleteOperation = clientX.getDeleteOperation();
deleteOperation.setVersionDeletionPolicy(IDfDeleteOperation.ALL_VERSIONS);
for (IDfId id : ids) {
IDfDocument document = (IDfDocument) session.getObject(id);
...
deleteOperation.add(document);
}
deleteOperation.execute();
The same applies to the IDfCancelCheckoutOperation.
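A minimal sketch of the same batching pattern (assuming the clientX, session, and ids variables from the snippets above):
IDfCancelCheckoutOperation cancelOperation = clientX.getCancelCheckoutOperation();
cancelOperation.setKeepLocalFile(true);
for (IDfId id : ids) {
    IDfDocument document = (IDfDocument) session.getObject(id);
    // only documents that are actually checked out need to be added
    if (document.isCheckedOut()) {
        cancelOperation.add(document);
    }
}
// a single execute for the whole batch, just like the delete operation
cancelOperation.execute();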
And another thing: when you are using a FileWriter, call close() in a finally block, or use try-with-resources like this:
try (BufferedWriter writer = new BufferedWriter(new FileWriter("file.path", true))) {
writer.write(document.dump());
} catch (IOException e) {
throw new UncheckedIOException(e);
}
Using a StringBuilder is a good idea, but create it only once at the beginning, append all the attributes in each iteration, and then write the content of the StringBuilder to the file once at the end, not during each iteration; writing in every iteration is slow.
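Something like this (just a sketch, reusing the ids list from above; "file.path" is a placeholder):
StringBuilder attributes = new StringBuilder();
for (IDfId id : ids) {
    IDfDocument document = (IDfDocument) session.getObject(id);
    // accumulate every dump in memory first ...
    attributes.append(document.dump());
}
// ... then write the whole content out in one go
try (BufferedWriter writer = new BufferedWriter(new FileWriter("file.path", true))) {
    writer.write(attributes.toString());
} catch (IOException e) {
    throw new UncheckedIOException(e);
}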
You could just do this from inside your code:
delete my_report objects where folder('/Home', descend)
no need to fetch information you are throwing away again ;-)
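(If you need all versions gone, I believe DQL also accepts a version qualifier here, along the lines of delete my_report (all) objects where folder('/Home', descend), but check this against your Content Server documentation.)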
You're probably hitting the result set limit of the DFC client.
Try adding the following lines to dfc.properties and rerun your code to see if it can delete more than 50 rows; then adjust the values to your needs.
dfc.search.max_results = 100
dfc.search.max_results_per_source = 100

Autodesk Design Automation

FATAL ERROR: Unhandled Access Violation Reading 0x0008 Exception at 1d8257a5h
Failed missing output
I finally made it work with HostApplicationServices.getRemoteFile in local AutoCAD, then migrated it to Design Automation, where it is also working now. Below is the .NET plugin command.
To keep the test simple, I hard-coded the URL in the plugin; you could replace the URL with whatever fits your workflow (either a JSON file or an input argument of Design Automation).
My demo reads (ReadDwgFile) the entities from the DWG at the remote URL, wblock-clones them into the current drawing (the host DWG), and finally saves the current drawing.
Hope it helps to address the problem at your side.
.NET command
using System;
using System.IO;
using System.Net;
using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.EditorInput;
using Autodesk.AutoCAD.Runtime;
namespace PackageNetPlugin
{
class DumpDwgHostApp: HostApplicationServices
{
public override string FindFile(string fileName,
Database database,
FindFileHint hint)
{
throw new NotImplementedException();
}
public override string GetRemoteFile(Uri url,
bool ignoreCache)
{
//return base.GetRemoteFile(url, ignoreCache);
Database db =
Autodesk.AutoCAD.ApplicationServices.Application.
DocumentManager.MdiActiveDocument.Database;
string localPath = string.Empty;
if (ignoreCache)
{
localPath =
Autodesk.AutoCAD.ApplicationServices.Application.
GetSystemVariable("STARTINFOLDER") as string;
string filename =
System.IO.Path.GetFileName(url.LocalPath);
localPath += filename;
using (var client = new WebClient())
{
client.DownloadFile(url, localPath);
}
}
return localPath;
}
public override bool IsUrl(string filePath)
{
Uri uriResult;
bool result = Uri.TryCreate(filePath,
UriKind.Absolute, out uriResult)
&& (uriResult.Scheme == Uri.UriSchemeHttp ||
uriResult.Scheme == Uri.UriSchemeHttps);
return result;
}
}
public class Class1
{
[CommandMethod("MyPluginCommand")]
public void MyPluginCommand()
{
try {
string drawingPath =
#"https://s3-us-west-2.amazonaws.com/xiaodong-test-da/remoteurl.dwg";
DumpDwgHostApp oDDA = new DumpDwgHostApp();
string localFileStr = "";
if (oDDA.IsUrl(drawingPath)){
localFileStr = oDDA.GetRemoteFile(
new Uri(drawingPath), true);
}
if(!string.IsNullOrEmpty(localFileStr))
{
//source drawing from drawingPath
Database source_db = new Database(false, true);
source_db.ReadDwgFile(localFileStr,
FileOpenMode.OpenTryForReadShare, false, null);
ObjectIdCollection sourceIds =
new ObjectIdCollection();
using (Transaction tr =
source_db.TransactionManager.StartTransaction())
{
BlockTableRecord btr =
(BlockTableRecord)tr.GetObject(
SymbolUtilityServices.GetBlockModelSpaceId(source_db),
OpenMode.ForRead);
foreach (ObjectId id in btr)
{
sourceIds.Add(id);
}
tr.Commit();
}
//current drawing (main drawing working with workitem)
Document current_doc =
Autodesk.AutoCAD.ApplicationServices.Application.
DocumentManager.MdiActiveDocument;
Database current_db = current_doc.Database;
Editor ed = current_doc.Editor;
//copy the objects in source db to current db
using (Transaction tr =
current_doc.TransactionManager.StartTransaction())
{
IdMapping mapping = new IdMapping();
source_db.WblockCloneObjects(sourceIds,
SymbolUtilityServices.GetBlockModelSpaceId(current_db),
mapping, DuplicateRecordCloning.Replace, false);
tr.Commit();
}
}
}
catch(Autodesk.AutoCAD.Runtime.Exception ex)
{
Autodesk.AutoCAD.ApplicationServices.Application.
DocumentManager.MdiActiveDocument.Editor.WriteMessage(ex.ToString());
}
}
}
}

AsyncTask doInBackground() does not execute correctly on run, but works in the debugger

@Override
protected ArrayList<HashMap<String, String>> doInBackground(Void... params) {
ArrayList<HashMap<String, String>> PLIST = new ArrayList<>();
HttpHandler sh = new HttpHandler();
String jsonStr = sh.makeServiceCall(jsonUrl);
ArrayList<String> URLList = new ArrayList<>();
if (jsonStr != null) {
placesList.clear();
try {
JSONObject jsonObj = new JSONObject(jsonStr);
// Getting JSON Array node
JSONArray placesJsonArray = jsonObj.getJSONArray("results");
String pToken = "";
// looping through All Places
for (int i = 0; i < placesJsonArray.length(); i++) {
JSONObject placesJSONObject = placesJsonArray.getJSONObject(i);
String id = placesJSONObject.getString("id");
String name = placesJSONObject.getString("name");
HashMap<String, String> places = new HashMap<>();
// adding each child node to HashMap key => value
places.put("id", id);
places.put("name", name);
PLIST.add(places);
}
//TODO: fix this...
if (SEARCH_RADIUS == 1500) {
Log.e(TAG, "did it get to 1500?");
try {
for (int k = 0; k < 2; k++) {
//error is "no value for next_page_token"... ERROR HERE
pToken = jsonObj.getString("next_page_token"); //if I place a breakpoint here, the debugger runs correctly and returns more than 20 results if there is a next_page_token.
String newjsonUrl = "https://maps.googleapis.com/maps/api/place/nearbysearch/json?location="
+ midpointLocation.getLatitude() + "," + midpointLocation.getLongitude()
+ "&radius=" + SEARCH_RADIUS + "&key=AIzaSyCiK0Gnape_SW-53Fnva09IjEGvn55pQ8I&pagetoken=" + pToken;
URLList.add(newjsonUrl);
jsonObj = new JSONObject(new HttpHandler().makeServiceCall(newjsonUrl)); //moved
Log.e(TAG, "page does this try catch");
}
}
catch (Exception e ) {
Log.e(TAG, "page token not found: " + e.toString());
}
for (String url : URLList){
Log.e(TAG, "url is : " + url);
}
I made an ArrayList of URLs after many attempts to debug this code; I planned on unpacking the ArrayList after all the URLs with next_page_tokens were added, and then parsing each of them later. When running the debugger with a breakpoint on pToken = jsonObj.getString("next_page_token"), I get the first URL from the logger and then the second URL, correctly. When I run as is, I get the first URL and then the following error: JSONException: No value for next_page_token.
Things I've tried
Invalidating Caches and restarting
Clean Build
Run on different SDK versions
Made sure that the if statement is hitting (SEARCH_RADIUS == 1500)
Any help would be much appreciated, thanks!
The function is called in a listener like this:
new GetPlaces(new AsyncResponse() {
@Override
public void processFinish(ArrayList<HashMap<String, String>> output) {
Log.e(TAG, "outputasync:" );
placesList = output;
}
}).execute();
My onPostExecute method:
@Override
protected void onPostExecute(ArrayList<HashMap<String, String>> result) {
delegate.processFinish(result);
// Dismiss the progress dialog
if (pDialog.isShowing())
pDialog.dismiss();
}
It turns out that the Google Places API takes a few milliseconds to validate the next_page_token after it is generated. So I used the wait() function to pause before creating the newly generated URL based on the next_page_token. This fixed my problem. Thanks for the help.
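For illustration, a rough sketch of what that looks like inside the paging loop (Thread.sleep is shown here rather than wait(), API_KEY stands in for the key literal from the question, and the 2-second delay is just a value to tune as needed):
pToken = jsonObj.getString("next_page_token");
// the token takes a moment to become valid on Google's side,
// so pause briefly before requesting the next page with it
try {
    Thread.sleep(2000);
} catch (InterruptedException ie) {
    Thread.currentThread().interrupt();
}
String newjsonUrl = "https://maps.googleapis.com/maps/api/place/nearbysearch/json?location="
        + midpointLocation.getLatitude() + "," + midpointLocation.getLongitude()
        + "&radius=" + SEARCH_RADIUS + "&key=" + API_KEY + "&pagetoken=" + pToken;
jsonObj = new JSONObject(new HttpHandler().makeServiceCall(newjsonUrl));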

Any way to unzip a file on react-native

I managed to download a .zip file to the filesystem on my mobile phone, but then realised I couldn't find a way to unzip that file. I tried:
https://github.com/plrthink/react-native-zip-archive
https://github.com/remobile/react-native-zip
The first one dies immediately after requiring it, with the error "Cannot read property 'unzip' of undefined" (I followed the instructions carefully), and the second one dies because it depends on a Cordova port to React Native, which also doesn't work.
Any suggestions or ways to solve these problems?
Using react-native 0.35, testing on a Note 4 with Android 5.1.1.
I did manage to solve my problem in the end:
using react-native-zip-archive,
the solution was to change the code inside the RNZipArchiveModule.java file, which is inside the module.
The changes that need to be applied are described in this comment:
https://github.com/plrthink/react-native-zip-archive/issues/14#issuecomment-261712319
So credits go to hujiudeyang for solving the problem.
Go to this location:
node_modules\react-native-zip-archive\android\src\main\java\com\rnziparchive\RNZipArchiveModule.java
and put this code in place of the unzip method:
public static void customUnzip(File zipFile, File targetDirectory) throws IOException {
ZipInputStream zis = new ZipInputStream(
new BufferedInputStream(new FileInputStream(zipFile)));
try {
ZipEntry ze;
int count;
byte[] buffer = new byte[8192];
while ((ze = zis.getNextEntry()) != null) {
File file = new File(targetDirectory, ze.getName());
File dir = ze.isDirectory() ? file : file.getParentFile();
if (!dir.isDirectory() && !dir.mkdirs())
throw new FileNotFoundException("Failed to ensure directory: " +
dir.getAbsolutePath());
if (ze.isDirectory())
continue;
FileOutputStream fout = new FileOutputStream(file);
try {
while ((count = zis.read(buffer)) != -1)
fout.write(buffer, 0, count);
} finally {
fout.close();
}
/* if time should be restored as well
long time = ze.getTime();
if (time > 0)
file.setLastModified(time);
*/
}
} finally {
zis.close();
}
}
//**************************
@ReactMethod
public void unzip(final String zipFilePath, final String destDirectory, final String charset, final Promise promise) {
new Thread(new Runnable() {
@Override
public void run() {
try {
customUnzip(new File(zipFilePath), new File(destDirectory));
promise.resolve(destDirectory); // let the JS side know the unzip finished
} catch (IOException e) {
e.printStackTrace();
promise.reject(e); // propagate the failure to the JS side
}
}
}).start();
}
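Note that because this edits a file under node_modules, you need to rebuild the Android app (e.g. react-native run-android) for the change to take effect, and the change will be lost whenever the module is reinstalled.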

ASP.NET Web API file upload without saving

OK, so I am writing a service to receive file uploads from an iPhone application through PhoneGap. They send me a file, and I am trying to grab the actual file without saving it to any kind of file system. Currently this is what I have:
[HttpPost]
public string processRequest()
{
string ext = "Entered";
Request.Content.ReadAsMultipartAsync<MultipartMemoryStreamProvider>(new MultipartMemoryStreamProvider()).ContinueWith((tsk) =>
{
ext = "Request";
MultipartMemoryStreamProvider prvdr = tsk.Result;
foreach (HttpContent ctnt in prvdr.Contents)
{
ext = "Foreach";
// You would get hold of the inner memory stream here
Stream stream = ctnt.ReadAsStreamAsync().Result;
if (stream == null)
{
ext = "Null Stream";
}
Image img = Image.FromStream(stream);
if (ImageFormat.Jpeg.Equals(img.RawFormat))
{
ext = "jpeg";
}
else if (ImageFormat.Png.Equals(img.RawFormat))
{
ext = "Png";
}
else if (ImageFormat.Gif.Equals(img.RawFormat))
{
ext = "Gif";
}
// do something with this stream now
}
});
return ext;
}
I have put various responses in there so I can see where the function is getting to. Right now it always returns "Entered", which means it's not even reading the content of the request. The end goal is for me to grab the file object, convert it into an image, and then to base 64. Any direction would be appreciated. Remember, I want to do this without any file system, so no solutions that involve mapping a path to a server folder.
OK, so a little update: I have edited my code according to the first response, and at least it attempts to execute now, but it just gets stuck inside the code indefinitely. This happens during the ReadAsMultipartAsync function.
[HttpPost]
public string processRequest()
{
string ext = "Entered";
Request.Content.ReadAsMultipartAsync(new MultipartMemoryStreamProvider()).ContinueWith((tsk) =>
{
ext = "Request";
MultipartMemoryStreamProvider prvdr = tsk.Result;
foreach (HttpContent ctnt in prvdr.Contents)
{
ext = "Foreach";
// You would get hold of the inner memory stream here
Stream stream = ctnt.ReadAsStreamAsync().Result;
if (stream == null)
{
ext = "Null Stream";
}
Image img = Image.FromStream(stream);
if (ImageFormat.Jpeg.Equals(img.RawFormat))
{
ext = "jpeg";
}
else if (ImageFormat.Png.Equals(img.RawFormat))
{
ext = "Png";
}
else if (ImageFormat.Gif.Equals(img.RawFormat))
{
ext = "Gif";
}
// do something with this stream now
}
}).Wait();
return ext;
}
The block inside ContinueWith also runs asynchronously (if you look at the signature of ContinueWith, you'll see that it returns a Task as well). So, with the above code, you're essentially returning before any of that has had a chance to execute.
Try doing:
Request.Content.ReadAsMultipartAsync().ContinueWith(...).Wait();
Also, I'm not sure you need to go to the trouble of Request.Content.ReadAsMultipartAsync<MultipartMemoryStreamProvider>(new MultipartMemoryStreamProvider()); I believe Request.Content.ReadAsMultipartAsync() should suffice.
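(If your framework version supports it, another option is to mark the action as async and await Request.Content.ReadAsMultipartAsync(...) rather than blocking with Wait(); that avoids tying up the request thread.)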
Hope that helps!