SharePoint 2010 workflow error, unable to open file

I created my list and a workflow that starts automatically when a new entry is added to the list.
I have another workflow that checks my list and waits for my column to reach a specific status. That workflow then creates an entry in a document library.
After setting this up I tested it a lot and everything worked fine.
But in production I get errors randomly. Approximately 50 users use that list to create items.
In the history list I get the error:
The workflow could not create the list item. Make sure the list exists and that the user has permission to add items to the list.
In the log files I found errors that say:
unable to open document ("name_of_document.docx")
and an error code:
System.Runtime.InteropServices.COMException (0x80070050)
It's not a permission problem, because these errors appear rarely -- most of the operations end successfully.
I'd appreciate any help.

The problem could be that two processes are trying to create the document at the same time and both generate the same file name; 0x80070050 is the HRESULT for the Win32 error ERROR_FILE_EXISTS ("The file exists"), which fits that explanation.
Try modifying the name of the generated file, adding more data to the name (for example the item ID or a timestamp) in order to differentiate the files; a small sketch of the idea is shown below.
Also dig into the server logs for more information about the exception.
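As a language-agnostic sketch of the naming idea (the function name and timestamp format here are only illustrative, not part of the original workflow, which would build the name with its own string actions):

import datetime

def unique_document_name(base_name, item_id):
    # Append the list item ID and a timestamp so two workflow instances
    # running at the same moment cannot produce the same file name.
    stamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
    return f"{base_name}_{item_id}_{stamp}.docx"

# unique_document_name("name_of_document", 42) -> "name_of_document_42_20240101123000.docx"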

Related

Automatically add database entry after ftp upload

Sorry if this seems stupid but I wonder if it's possible to add a database entry after an ftp upload.
To be more clear: thanks to WinSCP, I have several folders that automatically send everything I put in them to my server.
However, I would like to create a MySQL entry for each uploaded file, again automatically. Is it possible to do that? How?
To give the full details of what I need to do, you can read the following.
I have several folders with pictures, and each folder is uploaded automatically.
Each of those folders belongs to one user, and the goal is to give each user an account and allow them to see and download those files through a web interface. Since one account = one folder, that's kinda easy.
And I think a simple .htaccess can secure things so one user can only see and download the files in his own directory, no?
However if I want them to be able to see what's new (=something they didn't download or simply mark as read) I think I need a table to manage those files.
Something like id | file (string) | read (bool).
If you think this approach is bad, then I'm open to changing how I do things, but to be clear, uploading the files needs to keep working this way, not through any kind of form.
Thanks for reading all that, and sorry for my English.
Your problem contains three steps:
Folders/files are automatically uploaded to your server directory; as you say, this is already handled by WinSCP.
You need to update your database with all the files and folders present in your server directory.
You need to track whether or not each file has been read/downloaded by the user.
Since your first step is in place, we don't need anything there. For the second step, you should write a script and schedule it to run at a fixed time interval, using cron on Linux/Unix or Task Scheduler on Windows. The script would be responsible for listing the files present in the directory and inserting the information for any files that are not yet present in your database.
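For example, on Linux a crontab entry like the following (the script path is just a placeholder) would run such a script every 15 minutes:

*/15 * * * * /usr/bin/python3 /path/to/sync_uploads.py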
EDIT:
This edit describes how your script file should work. As I explained, the cron job would simply run your script file at a fixed interval (which can be every minute, every hour, every day, and so on). Let's say your database table has the following columns:
fileid (varchar[20])
filepath (varchar[20])
status (boolean)
Your script file should do the following things:
Create a list of the filepaths that exist in your server directory.
Run a select query and create a list of the filepaths that already exist in the database table.
Compare list 1 with list 2 and find the ones that don't exist in list 2 (this gives you the list of filepaths that need to be inserted into the table).
Insert the filepaths you found above and set their status to false (meaning the file is not read/downloaded yet); a minimal sketch of such a script follows the note below.
NOTE: Please keep in mind that I am not prescribing how your database table should look. It can be what you have proposed, or it can differ depending on your requirements.
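A minimal sketch of such a script in Python, assuming a MySQL table named files with the columns described above; the connection details and the upload directory are placeholders you would adjust:

#!/usr/bin/env python3
# Sketch: record newly uploaded files in the tracking table.
import os
import uuid
import mysql.connector  # pip install mysql-connector-python

UPLOAD_DIR = "/var/uploads"  # placeholder: the directory WinSCP uploads into

def main():
    db = mysql.connector.connect(host="localhost", user="root",
                                 password="secret", database="uploads")
    cur = db.cursor()

    # List 1: file paths currently on disk
    on_disk = set()
    for root, _dirs, names in os.walk(UPLOAD_DIR):
        for name in names:
            on_disk.add(os.path.join(root, name))

    # List 2: file paths already recorded in the database
    cur.execute("SELECT filepath FROM files")
    in_db = {row[0] for row in cur.fetchall()}

    # Insert anything on disk that the table does not know about yet,
    # with status = FALSE (not yet read/downloaded).
    for path in sorted(on_disk - in_db):
        cur.execute("INSERT INTO files (fileid, filepath, status) "
                    "VALUES (%s, %s, FALSE)",
                    (uuid.uuid4().hex[:20], path))

    db.commit()
    cur.close()
    db.close()

if __name__ == "__main__":
    main()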
For the third step, simply leave the status of the file as unread (false) when creating entries in the second step; then, when a user clicks the file link in your application to view or download it, send a POST request to your server that updates the file's status to read. A sketch of that update is shown below.
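The handler behind that POST request would essentially run an update like this (again using the hypothetical table and column names from the sketch above):

def mark_as_read(cursor, filepath):
    # Called by the POST handler once the user has viewed/downloaded the file.
    cursor.execute("UPDATE files SET status = TRUE WHERE filepath = %s",
                   (filepath,))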
Let me know if this helps!

wso2 Gov Registry indexing looping

I created a custom RXT asset to register my own objects. The system works fine when I add and browse objects.
But if I delete an object instance using the Management Console (Resources > Browse), browse to /system/governance/[objects]/[object], and then try to create a new one, the system starts looping and displays the message:
"Please wait while the asset is being indexing"
This message never disappears. From this point on, new objects do not show up in the Publisher and Store, but they do exist in /system/governance/[objects]/[object].
Please re-index the resources.
Follow the steps below to re-index the resources.
Delete the /solr/ directory.
Change the name (e.g. lastaccesstime to lastaccesstime_1) of the file in the registry that tracks the last access time of indexing the resources, by changing the value of the lastAccessTimeLocation property in the /repository/conf/registry.xml file as follows (see the snippet after these steps):
/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime_1
Restart the G-Reg server and wait for around 30 minutes. The exact duration depends on the number of resources in the registry.
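For reference, after the change the property in repository/conf/registry.xml would look roughly like this (assuming the default indexing configuration; only the trailing _1 in the path is new):

<lastAccessTimeLocation>/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime_1</lastAccessTimeLocation>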
Ref: https://docs.wso2.com/display/Governance540/Upgrading+from+a+Previous+Release#UpgradingfromaPreviousRelease-Re-indexingresources

CRM Dynamics 2013: A record required by this process could not be found

I am trying to build an OOTB workflow in CRM Dynamics 2013, but I get the error message below when I try.
A record required by this process could not be found.(Cannot create a lookup without the required parameters.)
My workflow is basically trying to assign a team to a record when it's created. My workflow definition is shown below.
I'm guessing the error occurs when the workflow is actually run.
I think your mapping is wrong; it looks like you are trying to assign the record owner to the team that already owns the record. If that team isn't set (which it presumably isn't, and that's why you are assigning it), then you are likely to get "A record required by this process could not be found."
Your mapping needs to reference a field that is populated.

MS Access Split Database - Run time error with backend in new drive location?

I have an MS Access split database and I'm trying to get it to work with the backend on a new, more secure drive on our network. I've used a UNC path for the backend location.
This database has been running without problems on another drive, one that is totally public to everyone in the company (~4k people, not secure), for about a year. We have a generic account that users throughout our factory use to access the database, and we hadn't encountered this problem before I tried to switch it to the new drive. I've contacted our IT department and they've given my account and all my users' accounts read/write access to the drive, but only I can run it.
Other user accounts get these problems...
All of my forms with objects bound to a table immediately throw a runtime error before even getting to Form_Load.
My userforms will run DLookup functions and execute message boxes but throw a runtime error when they go to execute a query.
I've tried using an 'On Error GoTo' to try to pinpoint the problem, but a runtime error is thrown before it even gets that far.
I can't think at all what the problem might be. IT have told me I have the same permissions as the other users. Any suggestions on what to do?
In the front end of your DB you may have to change the path that links to the tables. The tables are in the back end of the database, which you have now moved, so the filepath has changed and the front end won't find them. You can delete the table links in the front end and then use External Data > Import and Link > Access to re-link the tables using their new filepath. You would then need to redistribute your front end to the users.
I'm not sure if you have already done this; however, you haven't said so in your question. Hope this helps, and apologies if it doesn't.
I hope you have found an answer to this. I wanted to answer as well, because I had the exact same problem yesterday. It turned out that the location was referring other users back to the folder's mapped drive letter rather than its full-length name (it is called the W drive on my computer and the K drive on theirs). I solved the issue by spelling out the path in full.

error when adding tableadapter to dataset

I am developing a WinForms application in VB.NET.
When I try to add a TableAdapter to an existing DataSet, I receive the error:
Failed to open a connection to the database.
"An attempt to attach an auto-named database for file ###Filelocation### failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share." Check the connection and try again.
This same DataSet has two other TableAdapters using the same data connection (I am selecting the already existing data connection), and those work fine.
This connection uses the application's connection strings:
Data Source=(LocalDB)\v11.0;AttachDbFilename=|DataDirectory|\test.mdf;Integrated
server=localhost;user id=root;password=password;database=testuser;persistsecurityinfo=True
This error pops up every time I select the data connection in the first window that appears.
Why is this happening?
Additional info:
The other two TableAdapters were added to this DataSet using a different computer.
This is for a MySQL connection.
I found this post on the MSDN forum:
An attempt to attach an auto-named database ....\aspnetdb.mdf failed
One of the suggestions is, as Mr. DonBoitnott correctly says, to add User Instance=True;. But there's also another suggestion, posted by Luke A.
"For the record, not one single error message given to me during the course of trying to fix this was relevant to the actual problem. Upon first receiving "An attempt to attached an auto-named database..." I looked online for every suggestion I could find: use an absolute path to the MDF, reorder TCP/IP and named pipes in the server configuration, disable/enable UserInstance (depending on where you looked), change security settings, reconfigure authentication, give specific login credentials.
None of these worked. All of these led to different vague/ambiguous error messages, which led to another problem which required a solution which led back to the original error message... an endless loop of problems, completely unrelated to what was actually wrong.
Also, posts about setting correct permissions on the App_Data folder are deceptive, as they imply the default permissions were not sufficient for SQL Express (in fact, they are). The whole point of this VS environment is that you can develop a web application and plop it right onto an IIS/SQLExpress setup and have it work. This makes the applications more portable (within IIS, of course) and secure. Of course, everything configuration-related has been obfuscated enough to make it several orders of magnitude more difficult than it has to be."
So: try prepending Initial Catalog=uniquenamehere to your connection string.
Though he says "... where 'uniquenamehere' is some name for your project.", try replacing uniquenamehere with the actual name of the database where the table exists.
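As a hypothetical example, assuming the database in test.mdf is simply named test and that the truncated string above was meant to end with Integrated Security=True, the amended connection string would look something like:

Data Source=(LocalDB)\v11.0;AttachDbFilename=|DataDirectory|\test.mdf;Initial Catalog=test;Integrated Security=True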
The file is a .mdf file, so it is an MS SQL file.
My guess would be that, since you stated you are using MySQL, the connection string has to be a bit different, and therefore the DataSet can't connect to MySQL.
Try adding User Instance=True; to the connection string.