I'm trying to use a custom realm in my web app, deployed on WebLogic 12.2.1.4.0, to manage login (through form-based authentication). Even though the new realm is correctly defined, along with its SQLAuthenticator provider, when I set the realm name according to the Oracle guide, WebLogic logs at deployment (or application startup) that the configuration is ignored:
<Warning: Ignore the realm name: myrealm in
deployment descriptor.>
This problem occurs even when I configure the name of the standard WebLogic realm, "myrealm".
The SQLAuthenticator provider (read-only) works fine: if I configure it in the default realm, login works. However, the application-specific users and groups are then mixed in with the system users and can be inherited by other applications deployed on the same WebLogic instance, which I want to avoid.
weblogic-application.xml is correctly placed in the META-INF directory of the EAR that contains the WAR, and this is its content:
<?xml version="1.0" encoding="UTF-8"?>
<wls:weblogic-application
    xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-application"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/javaee_6.xsd http://xmlns.oracle.com/weblogic/weblogic-application http://xmlns.oracle.com/weblogic/weblogic-application/1.8/weblogic-application.xsd">
  <wls:security>
    <wls:realm-name>myrealm</wls:realm-name>
  </wls:security>
</wls:weblogic-application>
I'm trying to get Application Insights added to an existing project, which declares only the following capabilities:
<Capability Name="ID_CAP_LOCATION" />
<Capability Name="ID_CAP_NETWORKING" />
<Capability Name="ID_CAP_PHONEDIALER" />
<Capability Name="ID_CAP_MAP" />
I've added the call in my App's constructor to:
WindowsAppInitializer.InitializeAsync();
And of course, I've checked the ApplicationInsights.config file to confirm that the InstrumentationKey matches the one shown in my portal.
Do I need to add additional capabilities to allow this to work? I'm not seeing anything show up in the Azure Portal for the subscription, and nothing in the debug output suggests that any diagnostics are even being attempted.
The only required capability appears to be ID_CAP_NETWORKING.
One other thing to watch out for: when you use the UI to associate the project with your Application Insights resource, it adds a schema (xmlns) to the ApplicationInsights.config file, which stops it from working.
So, instead of:
<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights>
  <InstrumentationKey xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx</InstrumentationKey>
</ApplicationInsights>
It should look more like:
<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights>
  <InstrumentationKey>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx</InstrumentationKey>
</ApplicationInsights>
I created a custom search page using the default sitecore_web_index, and everything seemed to work until I migrated to my test environment, which has separate content management (CM) and content delivery (CD) servers. The index on the CD server is not updated on publish (the CM server's is); if I rebuild the index from the control panel, I do see updates. So I believe the index and the search page are working correctly.
The index is using the onPublishEndAsync strategy. The Sitecore Search and Index Guide (http://sdn.sitecore.net/upload/sitecore7/70/sitecore_search_and_indexing_guide_sc70-usletter.pdf) section 4.4.2 states:
This strategy does exactly what the name implies. During the initialization, it subscribes to the
OnPublishEnd event and triggers an incremental index rebuild. With separate CM and CD servers, this
event will be triggered via the EventQueue object, meaning that the EventQueue object needs to be
enabled for this strategy to work in such environment.
My web.config has <setting name="EnableEventQueues" value="true"/>
Also from the Search and Index Guide:
Processing
The strategy will use the EventQueue object from the database it was initialized with:
<param desc="database">web</param>
This means that there are multiple criteria for successful execution of this strategy:
- This database must be specified in the <databases /> section of the configuration file.
- The EnableEventQueues setting must be set to true.
- The EventQueue table within the preconfigured database should have entries dated later than the index's last update timestamp.
I'm not sure about the <param desc="database">web</param> setting, because the publishing target (and database ID) for the CD server is pub1. I tried changing web to pub1, but then neither server's index was updated on a publish (so I changed it back to web).
The system was recently upgraded from Sitecore 6.5 to 7.2, so there are a couple of indexes using the Sitecore.Search API, and those indexes are updated on publish.
Is the database param on the EventQueue wrong considering the multiple publishing targets? Is there something else I'm missing, or perhaps a working example of a CM -> CD environment I could compare to?
TIA
EDIT:
If I didn't have a co-worker sitting next to me both Friday and today who can confirm this, I would think I'm going crazy. Now the CD server is getting updates to the index, but the CM server is not. What would make the CM server not get updates now?
I ran into this same issue last night and have a more predictable resolution than creating a new IIS site:
The fix was to set a distinct InstanceName in ScalabilitySettings.config for each CD server, instead of relying on the auto-generated name.
Setting this value immediately resolved the issue and restored the index update functionality upon Publish End Remote events.
Note: If you already have an InstanceName defined in your config, then you need to change it for this to work. I just increment the InstanceName with the date to force the change.
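For reference, a minimal sketch of such a patch in \App_Config\Include\ScalabilitySettings.config (the value CD1-20140630 is an arbitrary example; any string works as long as it is unique per server):

```xml
<!-- Sketch: forcing a distinct InstanceName on one CD server.
     The value shown is an arbitrary example, not a required format. -->
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <setting name="InstanceName">
        <patch:attribute name="value">CD1-20140630</patch:attribute>
      </setting>
    </settings>
  </sitecore>
</configuration>
```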
This is effectively fixing the same issue in the same way as the original poster did by changing to a new IIS site, as the OP's fix would have modified the auto-generated Instance Name based on the new IIS site name.
I believe the core problem with the OP (and also in my instance) is related to the EventQueue databases going out of sync with the CD instances and none of the servers being able to determine that an event has been generated / what content needs to update in the index. By changing the Instance Name (using either method) the servers appear to be new instances and start from scratch with their EventQueue tracking.
Every time I've seen issues like this in the past it's been related to major manipulations of Sitecore databases. Such as restorations, backup/restore to a new DB name, or rollbacks of databases due to deployment problems. I believe something in the above operations causes the EventQueues to get out of sync and the servers stop responding to the expected events.
I had this issue and it drove me nuts for a few months. I figured out that the answer lay in the rebuild strategy of the Lucene index. When the CM and CD run in separate IIS instances, the only way for Lucene to know it should rebuild is to watch the EventQueue table and recognize that a change happened to an item at, or under, the root that you specify in the crawler node. The strategy you'll need to specify as the rebuild strategy to guarantee this behavior is below:
<strategies hint="list:AddStrategy">
  <strategy ref="contentSearch/indexUpdateStrategies/remoteRebuild" />
</strategies>
If you use any other rebuild strategy with a remote instance of a content delivery server, the index will only be rebuilt in the CM instance's file system.
In case anyone runs into this in the future, the solution that worked for me, was creating a new site in IIS manager.
I submitted a ticket to Sitecore support, but after a week of not getting a response, I attempted to recreate my dev environment on my test server. I copied my local/dev files to the test CM server, created a new site and AppPool in IIS, pointed to the newly copied files, and updated the connectionstrings.config to point to the test environment database. This worked (publishing updated the CM web index).
After trying to point the existing IIS site to my new files, and use the new AppPool, publishing from this site would not update the CM web index.
I then pointed my new site to the pre-existing files and pre-existing AppPool, and it still worked. I disabled the pre-existing IIS site, edited the bindings on the new site to match the pre-existing one, and everything worked as it should.
I don't know what was "wrong" with the pre-existing site (I inherited the system, so I don't know how it was created), but comparing the bindings, basic settings, and advanced settings, they were a perfect match to the functional new IIS site. I wish I had the real "cause" of the issue to share, but at least I found a solution that worked for me.
Thanks to all for the responses.
[EDIT] While this solution did work for me, please use Laver's answer as the correct solution for this issue.
Laver's fix did work for us, but since our InstanceName is generated through our build process, I did not want to have to change it. I did some more digging and found that the root cause of the issue was data stored in the core database's Properties table.
You can see the full documentation in this Sitecore Stack Exchange Q&A, but the solution is reproduced below.
The solution requires an AppPool recycle to take effect:
1. Execute the following SQL statement against the core database:
DELETE FROM [Properties] WHERE [Key] LIKE '%_LAST_UPDATED_TIMESTAMP%'
2. Recycle the CD's AppPool.
After this, you will want to rebuild the indexes on the CD so that they pick up any changes that were missed while indexing was broken.
It seems like you are on the right track so far. I believe what is tripping you up is the publishing target. From what I understand, you are using pub1 as your Content Delivery (CD) database. It is a best practice to have a separate index defined for each database, so you really should configure your CD server to point to a sitecore_pub1_index and not the sitecore_web_index.
Your CM and CD servers should both have the pub1 database configured. It is a best practice to not edit the web.config directly if possible and to use include config patches instead. The example below shows a patched config that would go in your \App_Config\Include directory:
<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <databases>
      <database id="pub1" singleInstance="true" type="Sitecore.Data.Database, Sitecore.Kernel">
        <param desc="name">$(id)</param>
        <icon>Network/16x16/earth.png</icon>
        <securityEnabled>true</securityEnabled>
        <dataProviders hint="list:AddDataProvider">
          <dataProvider ref="dataProviders/main" param1="$(id)">
            <disableGroup>publishing</disableGroup>
            <prefetch hint="raw:AddPrefetch">
              <sc.include file="/App_Config/Prefetch/Common.config"/>
              <sc.include file="/App_Config/Prefetch/Webdb.config"/>
            </prefetch>
          </dataProvider>
        </dataProviders>
        <proxiesEnabled>false</proxiesEnabled>
        <proxyDataProvider ref="proxyDataProviders/main" param1="$(id)"/>
        <archives hint="raw:AddArchive">
          <archive name="archive"/>
          <archive name="recyclebin"/>
        </archives>
        <cacheSizes hint="setting">
          <data>20MB</data>
          <items>10MB</items>
          <paths>500KB</paths>
          <itempaths>10MB</itempaths>
          <standardValues>500KB</standardValues>
        </cacheSizes>
      </database>
    </databases>
  </sitecore>
</configuration>
You will then want to configure a pub1 search index on both your CM and CD servers. Assuming you are using Lucene, that patch config would look like this:
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <contentSearch>
      <configuration type="Sitecore.ContentSearch.ContentSearchConfiguration, Sitecore.ContentSearch">
        <indexes hint="list:AddIndex">
          <index id="sitecore_pub1_index" type="Sitecore.ContentSearch.LuceneProvider.LuceneIndex, Sitecore.ContentSearch.LuceneProvider">
            <param desc="name">$(id)</param>
            <param desc="folder">$(id)</param>
            <!-- This initializes the index property store. Id has to be set to the index id. -->
            <param desc="propertyStore" ref="contentSearch/databasePropertyStore" param1="$(id)" />
            <configuration ref="contentSearch/indexConfigurations/defaultLuceneIndexConfiguration" />
            <strategies hint="list:AddStrategy">
              <!-- NOTE: the order of these controls the execution order -->
              <strategy ref="contentSearch/indexUpdateStrategies/onPublishEndAsync" />
            </strategies>
            <commitPolicyExecutor type="Sitecore.ContentSearch.CommitPolicyExecutor, Sitecore.ContentSearch">
              <policies hint="list:AddCommitPolicy">
                <policy type="Sitecore.ContentSearch.TimeIntervalCommitPolicy, Sitecore.ContentSearch" />
              </policies>
            </commitPolicyExecutor>
            <locations hint="list:AddCrawler">
              <crawler type="Sitecore.ContentSearch.SitecoreItemCrawler, Sitecore.ContentSearch">
                <Database>pub1</Database>
                <Root>/sitecore</Root>
              </crawler>
            </locations>
          </index>
        </indexes>
      </configuration>
    </contentSearch>
  </sitecore>
</configuration>
You now have a pub1 database and search index setup. You should already have pub1 setup as a remote publishing target in Sitecore. You also stated you have the EnableEventQueues setting configured to true on both CM and CD servers.
This is all you should need. The onPublishEndAsync will keep an eye on the EventQueue table in your pub1 database. When you publish to your pub1 publishing target you should see entries on your CD server's Sitecore log*.txt file with something similar to this:
ManagedPoolThread #7 23:21:00 INFO Job started: Index_Update_IndexName=sitecore_pub1_index
ManagedPoolThread #7 23:21:00 INFO Job ended: Index_Update_IndexName=sitecore_pub1_index (units processed: )
Note: the units processed count never seems to be accurately updated and is typically blank. I assume this is a Sitecore bug but have never dug into it enough to determine why it is not displayed in the logs correctly. You can use Luke (again, if you are using Lucene) to verify that the index has updated as expected.
Check your publish:end:remote event and see if there are any handlers there. If so, try removing all handlers to make sure they are not causing any errors.
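For reference, those handlers are registered in the <events> section of the configuration; a sketch is below (the HtmlCacheClearer handler shown is the standard out-of-the-box one, your list of custom handlers will differ):

```xml
<events>
  <event name="publish:end:remote">
    <!-- Temporarily comment out any custom handlers here while testing -->
    <handler type="Sitecore.Publishing.HtmlCacheClearer, Sitecore.Kernel" method="ClearCache">
      <sites hint="list">
        <site>website</site>
      </sites>
    </handler>
  </event>
</events>
```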
I had a similar issue when migrating from a Sitecore 6 to 7. The EventArgs for the remote publish in Sitecore 7 is different. The new type is PublishEndRemoteEventArgs.
Here is the solution we used in our application. We have set up web and pub databases and created an additional publishing strategy pointing to pub:
<onPublishEndAsyncPub type="Sitecore.ContentSearch.Maintenance.Strategies.OnPublishEndAsynchronousStrategy, Sitecore.ContentSearch">
  <param desc="database">pub</param>
  <!-- whether full index rebuild should be triggered if the number of items in the Event Queue exceeds Config.FullRebuildItemCountThreshold -->
  <CheckForThreshold>true</CheckForThreshold>
</onPublishEndAsyncPub>
In the index section, set the newly created strategy on the pub index:
<index id="sitecore_pub_index" type="Sitecore.ContentSearch.SolrProvider.SolrSearchIndex, Sitecore.ContentSearch.SolrProvider">
  <param desc="name">$(id)</param>
  <param desc="core">itembuckets</param>
  <param desc="propertyStore" ref="contentSearch/databasePropertyStore" param1="$(id)" />
  <strategies hint="list:AddStrategy">
    <strategy ref="contentSearch/indexUpdateStrategies/onPublishEndAsyncPub" />
    <!--<strategy ref="contentSearch/indexUpdateStrategies/remoteRebuild" />-->
  </strategies>
  <locations hint="list:AddCrawler">
    <crawler type="Sitecore.ContentSearch.SitecoreItemCrawler, Sitecore.ContentSearch">
      <Database>pub</Database>
      <Root>/sitecore</Root>
    </crawler>
  </locations>
</index>
If you are using the Sitecore Scalability settings, please make sure this is correct.
The reason why indexing is not being triggered on your CD servers is mainly your event queue. One quick check you can perform is to see whether there are events in the EventQueue table of the Core database indicating that publishing has completed.
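As a sketch of that check (column names follow the standard Sitecore EventQueue schema; run it against the database your update strategy is watching), something like:

```sql
-- Look at the most recent queued events; publish-related entries should
-- appear here with timestamps newer than the index's last update.
SELECT TOP 20 [EventType], [InstanceName], [Created]
FROM [EventQueue]
ORDER BY [Created] DESC
```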
Also check the Sitecore.ContentSearch.config, since that is where the index rebuild is triggered when publishing ends.
Thanks
I've removed the 'Admin' role from root, and no one has the permissions to put it back! Is there a way to assign the admin role to root on the command line, or somewhere else outside of the application (UI)?
This is a Windows 7 installation and YouTrack is running as a Windows Service.
I succeeded in restoring the root password for YouTrack 5.1.2 on Windows by setting -Djetbrains.charisma.restoreRootPassword=true on the "Options" key in the registry.
Edit:
On 64-bit Windows, the registry key is:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\YouTrack\Parameters\Java
On 32-bit Windows:
HKEY_LOCAL_MACHINE\SOFTWARE\Apache Software Foundation\Procrun 2.0\YouTrack\Parameters\Java
Thanks to the guy from
https://stackoverflow.com/a/8610301/1384477
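For illustration, the "Options" value under that Parameters\Java key is a multi-line (REG_MULTI_SZ) list of JVM arguments, and the flag goes on its own line; a sketch (the -Xmx line is just an example of what may already be there):

```
-Xmx512m
-Djetbrains.charisma.restoreRootPassword=true
```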
About your only option as far as I'm aware is to boot into a live environment and edit the user:groups files manually, assuming the file system isn't encrypted.
On a Windows installation you can try the following:
1. Stop the YouTrack service.
2. Navigate to ${YOUTRACK_HOME}/webapps/ROOT/WEB-INF/web.xml and back it up.
3. Modify the config file as follows:
<web-app>
  <display-name>YouTrack</display-name>
  <servlet>
    <servlet-name>MainServlet</servlet-name>
    <servlet-class>jetbrains.charisma.main.ServletImpl</servlet-class>
    <!-- Add this parameter -->
    <init-param>
      <param-name>jetbrains.charisma.restoreRootPassword</param-name>
      <param-value>true</param-value>
    </init-param>
    <!-- End -->
    ...
  </servlet>
  ...
</web-app>
Now start YouTrack; your root account should have everything reset to defaults, including credentials and permissions.
Afterwards, it makes sense to restore the config file to its initial state, so that your account is not reset on every application restart.
I'm porting a legacy application from JBoss 4.2.3 to JBoss 7 (the web profile version). It used a custom login module and a valve to capture the login failure reason into j_exception. This was done by putting a context.xml into the WEB-INF directory of the WAR, with the following contents:
<!-- Add the ExtendedFormAuthenticator to get access to the username/password/exception -->
<Context cookies="true" crossContext="true">
  <Valve className="org.jboss.web.tomcat.security.ExtendedFormAuthenticator"
         includePassword="true" />
</Context>
The login is working for me, but the valve is not. When there's a login exception, j_exception is still empty, and the logic that depends on analyzing why the login was rejected fails. According to this link: http://community.jboss.org/wiki/ExtendedFormAuthenticator, everything looks right. However, that link is very old, and it's possible things have changed since then. What's the new way?
It seems that security valves are now defined directly in jboss-web.xml, like this:
<jboss-web>
  <security-domain>mydomain</security-domain>
  <valve>
    <class-name>org.jboss.web.tomcat.security.ExtendedFormAuthenticator</class-name>
    <param>
      <param-name>includePassword</param-name>
      <param-value>true</param-value>
    </param>
  </valve>
</jboss-web>
However, the ExtendedFormAuthenticator class wasn't ported to JBoss 7.0.1. A ticket has been opened for me, so it should be present in JBoss 7.1.0:
https://issues.jboss.org/browse/AS7-1963