In ApacheDS, I'm trying to load the sample LDAP data provided in ldif format and with instructions here.
I followed the same pattern I used to load the context record, by specifying an ldif file in server.xml for the partition (directions, albeit lousy ones, located here).
So I have...
<apacheDS id="apacheDS">
<ldapServer>#ldapServer</ldapServer>
<ldifDirectory>sevenSeasRoot.ldif</ldifDirectory>
<ldifDirectory>apache_ds_tutorial.ldif</ldifDirectory>
</apacheDS>
The sevenSeasRoot.ldif file seems to have loaded properly, because I can see an entry for it in LdapBrowser. But there are no records under it.
What am I doing wrong? How do I configure server.xml to load the child records for sevenSeas?
Just a quick note: the config element is ldif*Directory*, but you are passing it a file.
OTOH, I guess you are using 1.5.7, which is very old; it would be better to try the latest version for better support.
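If the element really does expect a directory (as its name suggests), one option might be to put both .ldif files into a single directory and point a single ldifDirectory element at it. A rough sketch, assuming a directory named ldif next to server.xml (the path is illustrative and the exact behavior may differ in 1.5.7):

<apacheDS id="apacheDS">
  <ldapServer>#ldapServer</ldapServer>
  <!-- "ldif" is an illustrative directory expected to hold both
       sevenSeasRoot.ldif and apache_ds_tutorial.ldif -->
  <ldifDirectory>ldif</ldifDirectory>
</apacheDS>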
Using eZ Platform with "kaliop/ezmigrationbundle": "^3.0".
I have read and re-read the manual but cannot see anything about auto-generating yml files from existing content types; does anyone have any experience with this and happen to know if/where the docs might live?
We have a list of content types that were created in the backend via the GUI, and now we need to create migration files for them for better development with the dev team.
Update: This is available on v4+: https://github.com/kaliop-uk/ezmigrationbundle/issues/34#issuecomment-317524072
This is available on v4+ and answers the question:
https://github.com/kaliop-uk/ezmigrationbundle/issues/34#issuecomment-317524072
I guess that is not how it works!
The bundle just generates an empty YAML file for you; you fill in the content types (or any other backend changes you want) in that YAML file yourself, and then take it and apply it to your stage or live environment.
So unlike Symfony's DoctrineMigrationsBundle, this bundle does not read the differences and produce the migration itself.
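For reference, a hand-written content-type migration could look roughly like the sketch below; the exact DSL keys vary by bundle version, and the identifiers and field types here are made up for illustration (check the bundle's DSL documentation for the authoritative list):

-
    type: content_type
    mode: create
    content_type_group: Content
    identifier: news_article
    name: News Article
    name_pattern: <title>
    attributes:
        -
            type: ezstring
            identifier: title
            name: Title
            required: true
        -
            type: ezrichtext
            identifier: body
            name: Body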
And when should I use it? How is it configured? Can anyone please explain in detail?
The data-config.xml file is an example configuration file for the DataImportHandler (DIH) in Solr. It's one way of getting data into Solr: it allows one of the servers to connect through JDBC (or through a few other plugins) to a database server or a set of files and import their contents.
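For illustration, a minimal data-config.xml that pulls rows out of a database over JDBC might look roughly like this (the driver, connection URL, credentials and query are placeholders):

<dataConfig>
  <dataSource type="JdbcDataSource"
              driver="org.postgresql.Driver"
              url="jdbc:postgresql://localhost:5432/mydb"
              user="solr" password="secret"/>
  <document>
    <entity name="item" query="SELECT id, title FROM item">
      <field column="id" name="id"/>
      <field column="title" name="title"/>
    </entity>
  </document>
</dataConfig>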
DIH has a few issues (for example the non-distributed way it works), so it's usually suggested to write the indexing code yourself and POST the documents to Solr from a suitable client, such as SolrJ, Solarium, SolrClient, MySolr, etc.
It has been mentioned that the DIH functionality really should be moved into a separate application, but that hasn't happened yet as far as I know.
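As a sketch of the do-it-yourself approach with SolrJ (the URL, collection name and field names are placeholders; assumes the solr-solrj dependency is on the classpath):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class SimpleIndexer {
    public static void main(String[] args) throws Exception {
        // Point the client at your core/collection (placeholder URL)
        SolrClient client = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build();

        // Build a document from whatever source you are importing from
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        doc.addField("title", "Example title");

        client.add(doc);   // send the document to Solr
        client.commit();   // make it visible to searches
        client.close();
    }
}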
I am trying to run SOA Suite, and when I execute startWeblogic.sh I get the following error message:
Unresolved reference to WseeFileStore by [<domain name>]/SAFAgents[ReliableWseeSAFAgent]/Store
at weblogic.descriptor.internal.ReferenceManager.resolveReferences(ReferenceManager.java:310)
at weblogic.descriptor.internal.DescriptorImpl.validate(DescriptorImpl.java:322)
at weblogic.descriptor.BasicDescriptorManager.createDescriptor(BasicDescriptorManager.java:332)
at weblogic.management.provider.internal.DescriptorManagerHelper.loadDescriptor(DescriptorManagerHelper.java:68)
at weblogic.management.provider.internal.RuntimeAccessImpl$IOHelperImpl.parseXML(RuntimeAccessImpl.java:690)
at weblogic.management.provider.internal.RuntimeAccessImpl.parseNewStyleConfig(RuntimeAccessImpl.java:270)
at weblogic.management.provider.internal.RuntimeAccessImpl.<init>(RuntimeAccessImpl.java:115)
... 7 more
Does anyone know how to fix this error?
I am running the system on 64-bit SUSE.
The quick and dirty way to get your admin server back up:
cd to <domain name>/config
Back up config.xml just in case
Edit config.xml, find and remove the <saf-agent> tags that point to your non-existent WseeFileStore
Once you have the admin server back up, you can look at the Store-and-Forward Agents and Persistent Stores links to see what is already configured there. It sounds like a SAF agent was somehow created but the backing Persistent Store was not.
You can always create the Persistent Store later and add that SAF agent back in if you need it.
This happens simply because the automated tool used to adapt the config.xml file to the new cluster structure is... well, far from efficient.
It creates all the other relevant structures OK, but the <saf-agent> entry is created incorrectly.
Just open the config.xml file and take a quick look; you should see that something is not right with this entry.
I will use my environment as an example for this situation:
I have a single cluster with two managed servers named osb1 and osb2. Both are administered by the cluster's AdminServer, and all these components run on a single machine called rdaVM. The whole domain was created with the Configuration Wizard and, upon the first AdminServer start, I got that dreadful error for quite some time.
The solution does reside in the config.xml file located in <DOMAIN_HOME>/config/config.xml
When I opened this file in the editor and did a quick search for WseeFileStore I got some curious entries:
<jms-server>
<name>WseeJmsServer_auto_1</name>
<target>osb1</target>
<persistent-store>WseeFileStore_auto_1</persistent-store>
</jms-server>
<jms-server>
<name>WseeJmsServer_auto_2</name>
<target>osb2</target>
<persistent-store>WseeFileStore_auto_2</persistent-store>
</jms-server>
and
<file-store>
<name>WseeFileStore_auto_1</name>
<directory>WseeFileStore_auto_1</directory>
<target>osb1</target>
</file-store>
<file-store>
<name>WseeFileStore_auto_2</name>
<directory>WseeFileStore_auto_2</directory>
<target>osb2</target>
</file-store>
but looking at the offending entry:
<saf-agent>
<name>ReliableWseeSAFAgent</name>
<store>WseeFileStore</store>
</saf-agent>
Obviously there's something missing here. Looking at the <DOMAIN_HOME> I could see two folders there: WseeFileStore_auto_1 and WseeFileStore_auto_2. So there is no WseeFileStore, hence that annoying error. Also, the saf-agent element doesn't have a target.
Solution: using just the underlying logic, I adapted the <saf-agent> entry to:
<saf-agent>
<name>ReliableWseeSAFAgent_auto_1</name>
<target>osb1</target>
<store>WseeFileStore_auto_1</store>
</saf-agent>
<saf-agent>
<name>ReliableWseeSAFAgent_auto_2</name>
<target>osb2</target>
<store>WseeFileStore_auto_2</store>
</saf-agent>
I.e., I created a <saf-agent> for each of the cluster's managed servers, targeted each entry at its managed server, and added the _auto_# suffix, where # is the ordering number of each managed server, to the <name> and <store> entries.
After that, I was able to run the startWebLogic.sh script without problems (from this source, at least...).
Is it possible to load values for the infinispan-config.xml file from a property file, so that we can get rid of hard-coded values? If so, can somebody show me how to load a property file into infinispan-config.xml, because there is no predefined tag for this in the configuration?
This is possible by setting respective system properties.
For example here is one specific Infinispan configuration file which is using this approach: https://github.com/infinispan/infinispan/blob/master/core/src/test/resources/configs/string-property-replaced.xml
and here is a test which is working with that file: https://github.com/infinispan/infinispan/blob/master/core/src/test/java/org/infinispan/config/StringPropertyReplacementTest.java
This looks to be the most straightforward way to achieve this.
The last thing that needs to be done is to read all the entries in your property file and set them as system properties.
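In other words, values in infinispan-config.xml can be written as ${propertyName} placeholders (optionally with a default, e.g. ${propertyName:default}), and they are resolved from system properties when the file is parsed. A minimal sketch, assuming your values live in a plain infinispan.properties file (the file name and property keys are illustrative):

import java.io.FileInputStream;
import java.util.Properties;
import org.infinispan.manager.DefaultCacheManager;

public class StartCacheManager {
    public static void main(String[] args) throws Exception {
        // Load your own property file and expose every entry as a
        // system property before Infinispan parses the XML.
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("infinispan.properties")) {
            props.load(in);
        }
        for (String name : props.stringPropertyNames()) {
            System.setProperty(name, props.getProperty(name));
        }

        // Placeholders such as ${my.cluster.name} in infinispan-config.xml
        // are now replaced with the values set above.
        DefaultCacheManager manager = new DefaultCacheManager("infinispan-config.xml");
        // ... work with manager.getCache(...) ...
        manager.stop();
    }
}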
Every time I want to change some properties of some class I get the following error messages:
:Microsoft Cursor Engine [-2147217864]
Row cannot be located for updating. Some values may have been changed since it was last read.
ADODB.Recordset[-2146825069]
Operation is not allowed in this context.
How can I solve this?
Even though this question was posted a long time ago:
Now and then this error occurs in my projects, too.
Every time I try to edit specific elements in Enterprise Architect projects, I get exactly the same error messages. The only solution to this is to delete the element completely and create it again.
@TomO:
When you are importing a package, is this from XMI or are you importing a source code directory?
I import only via XMI file.
What are you using as a repository?
I'm using a PostgreSQL-Server based repository, which I access via ODBC Driver.
In your ODBC Data Source Configuration, do you have "Return matched rows instead of affected rows" and "Allow big result sets" enabled?
Could you specify where I can find these options? Perhaps this is outdated, because I can't find any of these options under the Options/Datasource menu in my ODBC driver.
If you are importing from XMI, are you stripping the GUIDs on import? This is always a good idea if you are making a copy of an existing folder in your model, as having two elements with the same GUID is not ideal ;-)
I strip GUIDs when I'm exporting and again when I'm importing XMI files.
I would really appreciate any help concerning this topic.
If possible I might need a little more information. When you are importing a package, is this from XMI or are you importing a source code directory? What are you using as a repository? Given the error, I am assuming it is not the local EAP file.
In your ODBC Data Source Configuration, do you have "Return matched rows instead of affected rows" and "Allow big result sets" enabled?
If you are importing from XMI, are you stripping the GUIDs on import? This is always a good idea if you are making a copy of an existing folder in your model, as having two elements with the same GUID is not ideal ;-)
I have also noticed that you asked this on Apr 14th - sorry it has taken me so long to find your request. I hope this helps!
Are you accessing your EA repository as a cloud repository? If so, you could try switching to accessing the repository as an ODBC data source; that might solve the problem. I think it is a bug in the Sparx Enterprise Architect cloud service.