How to resolve an error after importing a package in Enterprise Architect (Sparx Systems) - sql

Every time I want to change some properties in a class I get the following error messages:
Microsoft Cursor Engine [-2147217864]
Row cannot be located for updating. Some values may have been changed since it was last read.
ADODB.Recordset[-2146825069]
Operation is not allowed in this context.
How can I solve them?

Even though this question was posted a long time ago:
Now and then this error occurs in my projects, too.
Every time I try to edit specific elements in Enterprise Architect projects, I get exactly the same error messages. The only solution so far is to delete the element completely and create it again.
@TomO:
When you are importing a package, is this from XMI or are you importing a source code directory?
I import only via XMI file.
What are you using as a repository?
I'm using a PostgreSQL-server-based repository, which I access via an ODBC driver.
In your ODBC Data Source Configuration, do you have "Return matched rows instead of affected rows" and "Allow big result sets" enabled?
Could you specify where I can find these options? Perhaps this is outdated, because I can't find any of these options under the Options/Data Source menu in my ODBC driver.
If you are importing from XMI, are you stripping the GUIDs on import? This is always a good idea if you are making a copy of an existing folder in your model, as having two elements with the same GUID is not ideal ;-)
I strip GUIDs when I'm exporting and again when I'm importing XMI files.
I would really appreciate any help concerning this topic.

If possible I might need a little more information. When you are importing a package, is this from XMI or are you importing a source code directory? What are you using as a repository? Given the error, I am assuming it is not a local EAP file.
In your ODBC Data Source Configuration, do you have "Return matched rows instead of affected rows" and "Allow big result sets" enabled?
If you are importing from XMI, are you stripping the GUIDs on import? This is always a good idea if you are making a copy of an existing folder in your model, as having two elements with the same GUID is not ideal ;-)
I have also noticed that you asked this on Apr 14th - sorry it has taken me so long to find your request. I hope this helps!

Are you accessing your EA repository as a cloud repository? If so, you could try switching to accessing the repository as an ODBC data source; that might solve the problem. I think it is a bug in the Sparx Enterprise Architect cloud service.

Related

Is there a way to extract Access Modules without opening the file?

I ended up corrupting my database to the point where every time I attempt to open it, I get error 3022, "changes you requested to the table were not successful because they would create duplicate values in the index."
Recovery of the file does not seem possible, and my most recent backup is a month old. I have been able to extract everything but the Modules, which are what I need to recover the most. None of the standard ways I have found work, because they require the ability to open the database (for example, trying to set it as a VBA reference still gives the same error).
Is there any way to get the modules or code out of the file without opening it?
Edit:
I was finally able to get access to the file. Using DBEngine.CompactDatabase I was able to do a compact and repair. The issue boils down to the "MSysAccessStorage" table being corrupt; it says "Id is not an index in this table". I now have access to everything except the modules, which I can't open without MSysAccessStorage working.
I'm going to keep poking at it but I'm not sure what options I have for fixing a system table. Any ideas would be helpful.
Unfortunately, the Visual Basic for Applications project has been corrupted. The original database doesn't even have any VBProjects when listing a count. I'm going to call this one a lost cause. Thanks everyone that tried to help.

SQL Database in GitHub

I am building a Java app that uses an SQLite database to hold most of its data. For the end-user, the database would be almost entirely read-only, with very occasional edits. I'll (theoretically) be displaying/distributing it through my GitHub page, so my question is:
What's the best way to load the database into GitHub? (I'm using IntelliJ with DataGrip.)
I'd prefer to be able to update the database when I commit/push, instead of having to overwrite the whole file. The closest question I can find is "How to include MySQL database schema on GitHub?", but there could potentially be hundreds or thousands of entries, so I can't just rebuild the tables when the user installs the app.
I'm applying for entry-level developer jobs, and this project is going to be my main portfolio piece during job-hunting. I'm trying to make sure it is not only functional but also makes a good impression. Any help is (very) greatly appreciated.
EDIT:
After moving my .db file into the folder connected to GitHub (same level as my src folder) apparently I can now commit/push it with the rest of my files. How do I make sure that the connection from my Java code to the database stays valid once it is loaded onto another user's system? Can I just stick with
connection = DriverManager.getConnection("jdbc:sqlite:mydatabase.db");
or do I need to rework the path?
Upon starting, if your application can't find a corresponding SQLite database file, have it create one. Then do an initial load of your tables from CSV, JSON, or XML files.
You can upload these files to Git, as they are text formats.
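A minimal sketch of that bootstrap approach, assuming the sqlite-jdbc driver is on the classpath; the users table, the mydatabase.db filename, and the data/users.csv seed file are made-up placeholders, not something from the original question:

import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;
import java.util.List;

public class DatabaseBootstrap {

    // Relative path: resolved against the working directory, so it stays valid
    // on another user's machine as long as the app runs from its install folder.
    private static final String DB_FILE = "mydatabase.db";

    public static Connection openOrCreate() throws Exception {
        boolean firstRun = !new File(DB_FILE).exists();
        // The sqlite-jdbc driver creates the file if it does not exist yet.
        Connection connection = DriverManager.getConnection("jdbc:sqlite:" + DB_FILE);
        if (firstRun) {
            try (Statement st = connection.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)");
            }
            loadFromCsv(connection, Path.of("data/users.csv")); // hypothetical seed file kept in Git
        }
        return connection;
    }

    private static void loadFromCsv(Connection connection, Path csv) throws Exception {
        // Naive CSV parsing (no quoted commas) is enough for a simple seed file.
        List<String> lines = Files.readAllLines(csv);
        try (PreparedStatement ps =
                 connection.prepareStatement("INSERT INTO users (id, name) VALUES (?, ?)")) {
            for (String line : lines.subList(1, lines.size())) { // skip header row
                String[] cols = line.split(",");
                ps.setInt(1, Integer.parseInt(cols[0]));
                ps.setString(2, cols[1]);
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }
}

Because the seed files are plain text, Git can diff them on every commit, which avoids re-pushing the whole binary .db file each time. The relative jdbc:sqlite: path keeps working on another user's system as long as the database is created or shipped relative to the application's working directory.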

Lightswitch lost data provider for data source connection

I have an odd situation with a VS upgrade from 2013 to 2015, outside of runtime.
App Type: MS Lightswitch HTML Client
DB Type: Oracle
Framework: 4.5
Story: I upgraded VS and replaced ODP.Net with the 2015 version. Works fine.
Then I converted my application. There were a lot of things to fix, but most were pretty easily remedied. I tested the application and it worked as expected, so I published to the test server and everything checked out. Success! Or so I thought.
I want to continue developing the site. As I make db changes, they need to be reconciled to the intrinsic db in my project.
After clicking 'Update Database' I see this. So far so good.
What's expected is that after I hit 'Finish', all changes to the selected table should be pulled into the lsml files. But this is what I get.
I've read a few posts like "The given key was not present in the dictionary, what key? [closed]", but these all look like runtime remediations.
If I go back to the update screen and hit 'Previous', I get this.
I sifted through every freakin' lsml file in a text editor looking for where the provider is assigned. No luck. I also created a new project to compare; nothing stood out. I also tried adding another data source, which works fine, so ODP.Net is not the issue. I am lost on what to do now. I have searched all over the site and Google for every error message with various tags. At this point I'm reaching out to you, or anyone who may know what this is about.
Thanks ahead of time!
Note for future users upgrading a VS LS project with Oracle db.
Since a new version of ODP.Net is required (in my case 2015), the provider name is going to change. To ensure LightSwitch knows the new provider, the data source lsml file needs to be updated. In my case I used Git to help out. This is how I resolved it.
Steps:
After converting your project and replacing ODP.Net with the current version:
1. Create a new data source using the new provider.
2. Save the project and re-open it. This will cause LightSwitch to recompile.
3. Open File Explorer and navigate to the ProjectName.server folder. In a text editor (I used Notepad), open the lsml files under ProjectName.server; there should be two lsml files (one for the pre-existing data source and another for the new one), or more if you have multiple sources.
4. Copy the connection properties of the new data source to a new temp file on your desktop.
5. Roll back the entire solution using Git or other source control.
6. Use the text editor to open the lsml file for the original data source.
7. Update the GUID for DataProviderName with the value from the temp file in step 4.
Note: The connection string GUID should be left alone, as it should match the GUID in your web.config file.
<DataService.ConnectionProperties>
  <ConnectionProperty
    Name="DataProviderName"
    Value="9d8fdbb9-xxxx-4787-xxxx-49831d34ad4b" />
  <ConnectionProperty
    Name="ProviderInvariantName"
    Value="Oracle.ManagedDataAccess.Client" />
  <ConnectionProperty
    Name="ConnectionStringGuid"
    Value="36e67aca-xxxx-41a7-xxxx-a4546761b30d" />
  <ConnectionProperty
    Name="ProviderManifestToken"
    Value="12.1" />
</DataService.ConnectionProperties>
Finally, reload the project and the changes should take effect, allowing you to once again update your data source.
Thanks

How to separate the latest file from Multiple files in Mule

I have 5000 files in a folder, and new files keep getting loaded into the same folder on a daily basis. I need to pick up only the latest file each day among all the files.
Is it possible to achieve this scenario in Mule out of the box?
I tried keeping the File component inside a Poll component (to make use of the watermark), but it is not working.
Is there any way we can achieve this? If not, please suggest the best approach (any relevant links).
Mule Studio: 5.3, RunTime 3.7.2.
Thanks in advance
Short answer: there isn't really any extremely quick out-of-the-box solution, but there are other ways. I'm not saying this is the right or only way of solving it, but I've implemented a similar scenario this way before:
A normal File inbound endpoint with a database table as a file log. Each time a new file is processed, a component checks whether its name already appears in the table. By choice or filter, I only continue if it isn't in there already, and after processing I add the filename to the table.
This is quite a "heavy" solution, though. A simpler approach would be to use an idempotent filter with an object store, for example a Redis server: https://github.com/mulesoft/redis-connector/blob/master/src/test/resources/redis-objectstore-tests-config.xml
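For what it's worth, the file-log check described above is only a couple of JDBC calls; here is a minimal sketch under assumed table and column names (processed_files / file_name are placeholders, not from the original answer), which a custom component in the flow could call:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class FileLog {

    // Returns true if the filename is already recorded in the log table.
    public static boolean alreadyProcessed(Connection con, String fileName) throws SQLException {
        try (PreparedStatement ps =
                 con.prepareStatement("SELECT 1 FROM processed_files WHERE file_name = ?")) {
            ps.setString(1, fileName);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }

    // Records a filename after it has been processed successfully.
    public static void markProcessed(Connection con, String fileName) throws SQLException {
        try (PreparedStatement ps =
                 con.prepareStatement("INSERT INTO processed_files (file_name) VALUES (?)")) {
            ps.setString(1, fileName);
            ps.executeUpdate();
        }
    }
}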
It is actually very simple if your incoming file name contains a timestamp: you can configure the file inbound connector by setting file:filename-regex-filter pattern="myfilename_#[function:timestamp].csv". I hope this helps.
Maybe you can use a Quartz scheduler (specify the time in a cron expression), followed by a Groovy script in which you can start the file connector. Keep the file connector in another flow.
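For reference, the "pick the newest file" step that such a scheduled script or custom component would perform is only a few lines of plain Java (this is not a Mule-specific API, and the folder path below is a placeholder):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.Optional;
import java.util.stream.Stream;

public class LatestFileFinder {

    // Returns the most recently modified regular file in the folder, if any.
    public static Optional<Path> findLatest(Path folder) throws IOException {
        try (Stream<Path> files = Files.list(folder)) {
            return files
                .filter(Files::isRegularFile)
                .max(Comparator.comparing(p -> {
                    try {
                        return Files.getLastModifiedTime(p);
                    } catch (IOException e) {
                        throw new RuntimeException(e);
                    }
                }));
        }
    }

    public static void main(String[] args) throws IOException {
        // "/data/incoming" stands in for the watched folder.
        findLatest(Paths.get("/data/incoming"))
            .ifPresent(latest -> System.out.println("Latest file: " + latest));
    }
}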

Can't migrate custom Plone file types to Blobs

We have custom content types that were created as extensions of the ATTypes; two of them extend the ATFile type and one extends the ATImage type. We recently upgraded from Plone 4.2 to Plone 4.3.2, and just discovered we are not using Blob storage at all. No wonder our Data.fs is HUGE. So, I have been trying to migrate these custom types.
I have followed all of the steps explained in this example and the product's notes from pypi, these Plone instructions, and used the example from the pypi page for archetypes.schemaextender (Sorry, since I'm still a noob my reputation won't let me post more than 2 links).
In the end, I created an extender script that just extends the ATFile type changing the FileField to BlobField. It seems to be working for new items. I can add a new CustomFileType and it appears to be uploading the file to blob, and my new upload field is showing (I changed the description as a quick way to verify which one it was using).
However, I am having a problem migrating all existing content items to move the binary files over to blob. I tried the generic migrate() script, then I created my own migrate script and walker as suggested in the above resources. It doesn't seem like it is doing anything, though. When printing results for each item it tries to migrate, I do see this returned for each item:
DEBUG ATCT.migration Migrating /site/path/to/custom/file/filename.ext (CustomFile -> Blob)
When I navigate to the custom file type in the site, where it usually shows the link to the file, it is just empty. Then going to edit, it treats it as if there is no file there. As a check, I disabled the extender, restarted, and reloaded the custom file. The file was there now. So it looks like the script I am running just isn't moving that file over to where it should be now.
I feel like I am missing something simple, and it is right there, but I can't seem to find it. All of this is learn as I go and a bit over my head, so hopefully someone can easily set me straight.
If I need to provide any additional information leave a comment and I will try to provide what you need.
UPDATE
I used the Red Turtle objects as examples to migrate my custom types as suggested by keul. I still was not able to get the file to migrate to blob within the type itself. So, I tried a different approach. I created a new custom type "CustomBlob", that is a mimic setup of my CustomFile type, and only extended this new blob type to be blob aware. Then I migrated the CustomFiles to CustomBlob, did a complete clear and rebuild, and packed the zeo. The migration seemed to work for the most part, the blobstorage grew by an expected amount, the new types worked. However, the Data.fs didn't go down in size. I would have thought that the binary files that were stored in Data.fs would be removed during the migration. Am I understanding this incorrectly? How can I remove these files so the Data.fs size goes down appropriately?
Not sure if this is the best solution, but here is how I was able to get this to work.
I created temporary content types parallel to each of the original types (for CustomImage I made CustomImageBlob, and so on). I made the new types blob-aware only, and migrated all types to their parallel types. Then I enabled the extender for the original types to make them blob-aware, and migrated back. It is a little redundant and time-consuming, but I just could not get the files to migrate to blob when migrating a type to itself.
Providing this as the best answer so far in case it helps someone else, or might encourage someone to find a better solution. Thanks for the tip keul, it definitely helped me get to this solution.