Need help using ldap to convert ${CCNetModifyingUsers} to list of email addresses - ldap

I am trying to configure an email publisher to send email upon failure to the user[s] that contributed to a failed build. If that's not possible because it's a list, then perhaps I can configure tasks that do a forced build, in which case I could use ${CCNetUser}.
This is my attempt at configuring it because I could not find anything helpful other than the LDAP Email Converter page in the cc.net documentation.
<converters>
<ldapConverter domainName="xxxxxx.com" />
<!--not sure if needed: ldapLogOnUser="LdapQuery" ldapLogOnPassword="****"-->
</converters>
<users>
<cb:define userEmail="${CCNetModifyingUsers}" />
<user name="buildmaster" group="buildmaster" address="$(userEmail)" />
</users>
Any suggestions would be greatly appreciated.

I finally figured this out.
The solution, which was not clear from the documentation, was to use this type of user node:
<user name="${CCNetFailureUsers}" group="failure" address="" />
The user name uses the dynamic variable that resolves to the list of users who contributed to the failed build; the group defines notification for Failed builds (and Exceptions, in my configuration); and the blank address is what triggers the ldapConverter.
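For reference, a minimal sketch of a full publisher block combining the converter and the failure-user node. The mailhost/from values are placeholders, and the group-notification syntax is an assumption based on the CC.NET 1.6+ <email> publisher documentation; adapt it to your version:

```xml
<publishers>
  <email mailhost="smtp.xxxxxx.com" from="ccnet@xxxxxx.com" includeDetails="true">
    <!-- a blank address plus the ldapConverter resolves each failure user to an email -->
    <converters>
      <ldapConverter domainName="xxxxxx.com" />
    </converters>
    <users>
      <user name="${CCNetFailureUsers}" group="failure" address="" />
    </users>
    <groups>
      <group name="failure">
        <notifications>
          <notificationType>Failed</notificationType>
          <notificationType>Exception</notificationType>
        </notifications>
      </group>
    </groups>
  </email>
</publishers>
```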

Related

Why would my Mule project suddenly start complaining about "Invalid keystore format"

While trying to learn how to use the IMAP connector in Mule Studio, I came across this tutorial. I downloaded it and imported it into AnyPoint Studio as a project, and as it was in that tutorial, it worked great (after setting the connector to the right host/port etc.). I then did a quick edit to change the flow to something like this:
IMAP response -> Email to String transformer -> Log this string in the console
And all was well and good. I then went in to the properties of the connector configuration, and changed it so that emails would not be deleted after they're read in, and everything broke. When trying to run the Mule project, I get a long list of errors, starting with:
java.io.IOException: Invalid keystore format
And later down the list:
org.mule.module.launcher.DeploymentInitException: IOException: Invalid keystore format
This is relentlessly frustrating because I really didn't do anything to the connector's configuration aside from allowing it to keep emails in the inbox of the account used by the IMAP connector. Even if such a change were to throw this kind of exception, after changing the configuration back to the way it was when the tutorial was working fine, I still get the same errors and the project fails to deploy.
I suspect that you edited the flow in visual mode instead of XML and that Studio has transformed this (which came from the download):
<imaps:connector checkFrequency="100" doc:name="IMAP" name="imapsConnector" validateConnections="true">
</imaps:connector>
into that:
<imaps:connector checkFrequency="100" doc:name="IMAP" name="imapsConnector" validateConnections="true">
<imaps:tls-client path="" storePassword="" />
<imaps:tls-trust-store path="" storePassword="password" />
</imaps:connector>
i.e. empty tls element(s) got injected, thus messing up your configuration.
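If that's the case, a sketch of the fix would be to either delete the injected empty elements (restoring the connector to its downloaded form), or give the trust store real values. The path and password below are placeholders pointing at the JRE's default trust store:

```xml
<!-- Option 1: remove the empty tls elements entirely -->
<imaps:connector checkFrequency="100" doc:name="IMAP" name="imapsConnector" validateConnections="true">
</imaps:connector>

<!-- Option 2 (assumption): point the trust store at the JRE's default cacerts -->
<imaps:connector checkFrequency="100" doc:name="IMAP" name="imapsConnector" validateConnections="true">
    <imaps:tls-trust-store path="${java.home}/lib/security/cacerts" storePassword="changeit" />
</imaps:connector>
```

Editing the connector in the XML view rather than the visual editor should also prevent Studio from re-injecting the empty elements.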

MobileFirst Server on Liberty Profile - LDAP authentication

I want to be able to log into mobilefirst console on my MobileFirst v6.3 Server which is running on a Liberty Profile using accounts from an LDAP repository.
I have edited my server.xml with the following LDAP Registry and LTPA configuration:
<ldapRegistry id="AD_Example" realm="WASLTPARealm"
host="example.com" port="389" ignoreCase="true"
baseDN="dc=example,dc=com,dc=ar"
bindDN="cn=binduser,cn=Users,dc=example,dc=com,dc=ar"
bindPassword="ThisIsAnExample"
ldapType="Microsoft Active Directory">
<activedFilters userFilter="sAMAccountName=%v"
userIdMap="user:sAMAccountName">
</activedFilters>
<group name="worklightadmingroup">
<member name="user1"/>
</group>
<group name="worklightdeployergroup">
<member name="user1"/>
</group>
<group name="worklightmonitorgroup"/>
<group name="worklightoperator"/>
</ldapRegistry>
<ltpa keysFileName="ltpa.keys" keysPassword="WebAS" expiration="120"/>
I took some info from the following places:
ftp://ftp.software.ibm.com/software/products/en/MobileFirstPlatform/docs/v630/mobilefirst_platform_foundation_doc.pdf (Page 127)
worklight server authentication with Ldap
But I can't seem to get this running. There is also going to be a DataPower integration scenario, but I need to test the LDAP connection first and I thought this might be the best approach. Any suggestions?
EDIT: Here you can take a look at the full logs (Console, Messages and ffdc). There is an "LDAPConnection" exception, but I can't understand the info it is giving to me.
I have succeeded in logging into the worklightconsole using a user from an LDAP registry (in my case an OpenLDAP, not a Microsoft Active Directory). What I find strange is that you are specifying groups as <ldapRegistry> children; shouldn't your groups be defined in your LDAP registry (and not in server.xml)?
You can then use the group filters of activedFilters:
<activedFilters
userFilter="(&(sAMAccountName=%v)(objectcategory=user))"
groupFilter="(&(cn=%v)(objectcategory=group))"
userIdMap="user:sAMAccountName"
groupIdMap="*:cn"
groupMemberIdMap="memberOf:member" >
</activedFilters>
as given in the example there (there is a section on Microsoft Active Directory Server; of course you'll have to adapt it to your case).
Also, according to this doc (under Feature configuration elements, click on ldapRegistry; you'll see all the attributes and child nodes that can be used), <ldapRegistry> doesn't seem to support a <group> child.
The following LDAP exception is emitted: javax.naming.AuthenticationException: [LDAP: error code 49 - 80090308: LdapErr: DSID-0C0903A9, comment: AcceptSecurityContext error, data 775, v1db1]
data 775 means that your user account (apmovil1) is locked. Ask your LDAP administrator to unlock the user, then make sure the right password is in the server.xml file, since the account was probably locked by too many connection attempts with a wrong password.

Sitecore Lucene: content delivery server index not updating on publish

I created a custom search page using the default sitecore_web_index and everything seemed to work until I migrated to my test environment that has separate content management and content delivery servers. The index on the CD server is not getting updated on publish (the CM server does), if I rebuild the index from the control panel, I see updates. So I believe the index and the search page are working correctly.
The index is using the onPublishEndAsync strategy. The Sitecore Search and Index Guide (http://sdn.sitecore.net/upload/sitecore7/70/sitecore_search_and_indexing_guide_sc70-usletter.pdf) section 4.4.2 states:
This strategy does exactly what the name implies. During the initialization, it subscribes to the
OnPublishEnd event and triggers an incremental index rebuild. With separate CM and CD servers, this
event will be triggered via the EventQueue object, meaning that the EventQueue object needs to be
enabled for this strategy to work in such environment.
My web.config has <setting name="EnableEventQueues" value="true"/>
Also from the Search and Index Guide:
Processing
The strategy will use the EventQueue object from the database it was initialized with:
<param desc="database">web</param>
This means that there are multiple criteria towards successful execution for this strategy:
This database must be specified in the <databases /> section of the configuration file.
The EnableEventQueues setting must be set to true.
The EventQueue table within the preconfigured database should have entries dated later than
index's last update timestamp.
I'm not sure of the <param desc="database">web</param> setting, because the publishing target (and database ID) for the CD server is pub1. I tried changing web to pub1, but then neither servers' index was updated on a publish (so it's changed back to web).
The system was recently upgraded from Sitecore 6.5 to 7.2, so there are a couple indexes using Sitecore.Search API and these indexes are updated on publish.
Is the database param on the EventQueue wrong considering the multiple publishing targets? Is there something else I'm missing, or perhaps a working example of a CM -> CD environment I could compare to?
TIA
EDIT:
If I didn't have a co-worker sitting next to me both Friday and today who can confirm it, I would think I'm going crazy. But now the CD server is getting updates to the index, and the CM server is not. What would make the CM server stop getting updates now?
I ran into this same issue last night and have a more predictable resolution than creating a new IIS site:
The fix was to set a distinct InstanceName in ScalabilitySettings.config for each CD server, instead of relying on the auto-generated name.
Setting this value immediately resolved the issue and restored the index update functionality upon Publish End Remote events.
Note: If you already have an InstanceName defined in your config, then you need to change it for this to work. I just increment the InstanceName with the date to force the change.
This is effectively fixing the same issue in the same way as the original poster did by changing to a new IIS site, as the OP's fix would have modified the auto-generated Instance Name based on the new IIS site name.
I believe the core problem with the OP (and also in my instance) is related to the EventQueue databases going out of sync with the CD instances and none of the servers being able to determine that an event has been generated / what content needs to update in the index. By changing the Instance Name (using either method) the servers appear to be new instances and start from scratch with their EventQueue tracking.
Every time I've seen issues like this in the past it's been related to major manipulations of Sitecore databases. Such as restorations, backup/restore to a new DB name, or rollbacks of databases due to deployment problems. I believe something in the above operations causes the EventQueues to get out of sync and the servers stop responding to the expected events.
I had this issue and it drove me nuts for a few months. I figured out that the answer lay in the rebuild strategy of the Lucene index. When the CM and CD are in separate IIS instances, the only way for Lucene to know to rebuild itself is to watch the EventQueue table and recognize that a change happened to an item that is either the root, or a child of the root, that you specify in the crawler node. The strategy you'll need to specify as the rebuild strategy to guarantee this behavior is below:
<strategies hint="list:AddStrategy">
<strategy ref="contentSearch/indexUpdateStrategies/remoteRebuild" />
</strategies>
If you use any other rebuild strategy with a remote instance of a content delivery server, the index will only be rebuilt in the CM instance's file system.
In case anyone runs into this in the future, the solution that worked for me, was creating a new site in IIS manager.
I submitted a ticket to Sitecore support, but after a week of not getting a response, I attempted to recreate my dev environment on my test server. I copied my local/dev files to the test CM server, created a new site and AppPool in IIS, pointed to the newly copied files, and updated the connectionstrings.config to point to the test environment database. This worked (publishing updated the CM web index).
After trying to point the existing IIS site to my new files, and use the new AppPool, publishing from this site would not update the CM web index.
I then pointed my new site to the pre-existing files and pre-existing AppPool, and it still worked. I disabled the pre-existing IIS site, edited the bindings on the new site to match the pre-existing one, and everything worked as it should.
I don't know what was "wrong" with the pre-existing site (I inherited the system, so I don't know how it was created), but comparing the bindings, basic settings, and advanced settings, they were a perfect match to the functional new IIS site. I wish I had the real "cause" of the issue to share, but at least I found a solution that worked for me.
Thanks to all for the responses.
[EDIT] While this solution did work for me, please use Laver's answer as the correct solution for this issue.
Laver's fix did work for us, but since our InstanceName is generated through our build process, I did not want to have to change it. I did some more digging and found that the root cause of the issue was data stored in the core database's Properties table.
You can see the full documentation in this Sitecore Stack Exchange Q&A, but the solution is reproduced below.
The solution requires an AppPool recycle to take effect:
Execute the following SQL statement against the core database
DELETE FROM [Properties] WHERE [Key] LIKE '%_LAST_UPDATED_TIMESTAMP%'
Recycle the CD's AppPool
After this, you will want to rebuild the indexes on the CD so that
they pick up any changes that were missed while indexing was broken.
It seems like you are on the right track so far. I believe what is tripping you up is the publishing target. From what I understand, you are using pub1 as your Content Delivery (CD) database. It is a best practice to have a separate index defined for each database, so you really should configure your CD server to point to a sitecore_pub1_index and not the sitecore_web_index.
Your CM and CD servers should both have the pub1 database configured. It is a best practice not to edit the web.config directly if possible, and to use include config patches instead. This example shows a patched config that would go in your \App_Config\Include directory:
<configuration xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:patch="http://www.sitecore.net/xmlconfig/">
<sitecore>
<databases>
<database id="pub1" singleInstance="true" type="Sitecore.Data.Database, Sitecore.Kernel">
<param desc="name">$(id)</param>
<icon>Network/16x16/earth.png</icon>
<securityEnabled>true</securityEnabled>
<dataProviders hint="list:AddDataProvider">
<dataProvider ref="dataProviders/main" param1="$(id)">
<disableGroup>publishing</disableGroup>
<prefetch hint="raw:AddPrefetch">
<sc.include file="/App_Config/Prefetch/Common.config"/>
<sc.include file="/App_Config/Prefetch/Webdb.config"/>
</prefetch>
</dataProvider>
</dataProviders>
<proxiesEnabled>false</proxiesEnabled>
<proxyDataProvider ref="proxyDataProviders/main" param1="$(id)"/>
<archives hint="raw:AddArchive">
<archive name="archive"/>
<archive name="recyclebin"/>
</archives>
<cacheSizes hint="setting">
<data>20MB</data>
<items>10MB</items>
<paths>500KB</paths>
<itempaths>10MB</itempaths>
<standardValues>500KB</standardValues>
</cacheSizes>
</database>
</databases>
</sitecore>
</configuration>
You will then want to configure a pub1 search index on both your CM and CD servers. Assuming you are using lucene that patch config would look like this:
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
<sitecore>
<contentSearch>
<configuration type="Sitecore.ContentSearch.ContentSearchConfiguration, Sitecore.ContentSearch">
<indexes hint="list:AddIndex">
<index id="sitecore_pub1_index" type="Sitecore.ContentSearch.LuceneProvider.LuceneIndex, Sitecore.ContentSearch.LuceneProvider">
<param desc="name">$(id)</param>
<param desc="folder">$(id)</param>
<!-- This initializes index property store. Id has to be set to the index id -->
<param desc="propertyStore" ref="contentSearch/databasePropertyStore" param1="$(id)" />
<configuration ref="contentSearch/indexConfigurations/defaultLuceneIndexConfiguration" />
<strategies hint="list:AddStrategy">
<!-- NOTE: the order of these controls the execution order -->
<strategy ref="contentSearch/indexUpdateStrategies/onPublishEndAsync" />
</strategies>
<commitPolicyExecutor type="Sitecore.ContentSearch.CommitPolicyExecutor, Sitecore.ContentSearch">
<policies hint="list:AddCommitPolicy">
<policy type="Sitecore.ContentSearch.TimeIntervalCommitPolicy, Sitecore.ContentSearch" />
</policies>
</commitPolicyExecutor>
<locations hint="list:AddCrawler">
<crawler type="Sitecore.ContentSearch.SitecoreItemCrawler, Sitecore.ContentSearch">
<Database>pub1</Database>
<Root>/sitecore</Root>
</crawler>
</locations>
</index>
</indexes>
</configuration>
</contentSearch>
</sitecore>
</configuration>
You now have a pub1 database and search index setup. You should already have pub1 setup as a remote publishing target in Sitecore. You also stated you have the EnableEventQueues setting configured to true on both CM and CD servers.
This is all you should need. The onPublishEndAsync will keep an eye on the EventQueue table in your pub1 database. When you publish to your pub1 publishing target you should see entries on your CD server's Sitecore log*.txt file with something similar to this:
ManagedPoolThread #7 23:21:00 INFO Job started: Index_Update_IndexName=sitecore_pub1_index
ManagedPoolThread #7 23:21:00 INFO Job ended: Index_Update_IndexName=sitecore_pub1_index (units processed: )
Note: Units processed never seems to be accurately updated and is typically blank. I assume this is a Sitecore bug but have never dug into enough to determine why it is not displaying in the logs correctly. You can use Luke (again if you are using Lucene) to verify the index has updated as expected.
Check yours publish:end:remote event and see if there is any handler there. If so, try to remove all handlers to make sure they are not causing any error.
I had a similar issue when migrating from a Sitecore 6 to 7. The EventArgs for the remote publish in Sitecore 7 is different. The new type is PublishEndRemoteEventArgs.
Here is the solution we used in our application. We have set up web and pub databases and created an additional publishing strategy pointing to pub:
<onPublishEndAsyncPub type="Sitecore.ContentSearch.Maintenance.Strategies.OnPublishEndAsynchronousStrategy, Sitecore.ContentSearch">
<param desc="database">pub</param>
<!-- whether full index rebuild should be triggered if the number of items in Event Queue exceeds
Config.FullRebuildItemCountThreshold -->
<CheckForThreshold>true</CheckForThreshold>
</onPublishEndAsyncPub>
In the index section, set the newly created strategy on the pub index:
<index id="sitecore_pub_index" type="Sitecore.ContentSearch.SolrProvider.SolrSearchIndex, Sitecore.ContentSearch.SolrProvider">
<param desc="name">$(id)</param>
<param desc="core">itembuckets</param>
<param desc="propertyStore" ref="contentSearch/databasePropertyStore" param1="$(id)" />
<strategies hint="list:AddStrategy">
<strategy ref="contentSearch/indexUpdateStrategies/onPublishEndAsyncPub" />
<!--<strategy ref="contentSearch/indexUpdateStrategies/remoteRebuild" />-->
</strategies>
<locations hint="list:AddCrawler">
<crawler type="Sitecore.ContentSearch.SitecoreItemCrawler, Sitecore.ContentSearch">
<Database>pub</Database>
<Root>/sitecore</Root>
</crawler>
</locations>
</index>
If you are using the Sitecore scalability settings, please make sure they are configured correctly.
The reason why the indexing is not being triggered on your CD servers is mainly due to your event queue. One quick check that you can perform is to see if there are events in the EventQueue table of the Core database which says that publishing has completed.
Also, check the Sitecore.ContentSearch.config, since when the publishing ends, it will trigger the rebuild index.
Thanks

Blackboard with Struts

I am a beginner here, and this is my first time creating a building block for Blackboard. I understand that I can use Struts in a building block, so I used Struts 1.3 to develop the building block for Blackboard version 9.
I am confused while doing DispatchAction: Blackboard doesn't seem to be able to find my forwarded page, and I keep ending up with this error: "The specified resource was not found, or you do not have permission to access it".
Link in my jsp:
This is a test
struts-config.xml setup:
<action path="/teststruts" type="com.test.action.TestAction" parameter="execute" scope="request" validate="false">
<forward name="success" path="./thistest.jsp" />
<forward name="error" path="./index.jsp" />
My dispatch action simply calls mapping.findForward() with one of those names.
Really scratching my head here.
Fix the relative path by removing the ./ from the front of your link.
Also verify that your Blackboard building block is starting up correctly by looking at the blackboard/logs/tomcat/stdout-stderr log after you "disable and enable" the code from the Building Blocks management page. Also verify that your web application contains an error.jsp, as sometimes the 404 error comes from Struts forwarding the error on to an error page that does not exist.
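A sketch of a corrected mapping, assuming the standard Struts 1.3 struts-config schema, with the leading ./ removed so the forwards resolve against the module's web root:

```xml
<action path="/teststruts" type="com.test.action.TestAction"
        parameter="execute" scope="request" validate="false">
    <!-- no leading "./": paths are resolved relative to the web root -->
    <forward name="success" path="/thistest.jsp" />
    <forward name="error" path="/index.jsp" />
</action>
```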
Try to use
<permission name="suppressAccessChecks" type="java.lang.reflect.ReflectPermission" />
But I don't think it is a good idea to use Struts or another framework to develop a Blackboard building block. It can generate conflicts with the libraries used by Blackboard, if not with the current version then when the Blackboard version is updated.
One approach we took when trying to build complex modules in Blackboard is to create a full webapp for Blackboard's Tomcat instead of a building block. With this technique you can use whatever you want, since it is an independent application, but at the same time you can communicate with Tomcat through the context. You have to add the application to server.xml and add some permissions in catalina.policy to do so, but that can be a little tricky.

How to use WIX to deploy and run WCF service

I am trying to make an installer which deploys my WCF service. At the moment it creates the virtual directory, but when I try to connect my app to it, I get a
CommunicationException was unhandled by user code: The remote server returned an error: NotFound.
I notice that if I create a virtual directory manually that it will connect and work, so I'm assuming IIS is doing something behind my back which is making it work.
This is the code I am using to create the virtual directory. Please note this is inside an iis:WebSite tag; if more information is needed, please let me know.
<iis:WebVirtualDir Id="VAWebService" Directory="VAWebService" Alias="VAWebService">
<iis:WebApplication Id="VAWebService" Name="VAWebService"
AllowSessions="yes" WebAppPool="VA_AppPool" />
<iis:WebDirProperties Id="MyWebSite_Properties" AnonymousAccess="yes"
WindowsAuthentication="no" DefaultDocuments="service1.svc"
AccessSSL="yes" AccessSSL128="yes" AccessSSLMapCert="yes"
AccessSSLNegotiateCert="yes" AccessSSLRequireCert="yes"
Read="yes" Write="yes" Execute="yes" Script="yes" />
</iis:WebVirtualDir>
Does anyone know how to fix this? Any help would be appreciated.
Thanks
I'm pretty sure you don't need Write or Execute set to yes. You probably don't need AccessSSLMapCert or AccessSSLNegotiateCert or AccessSSLRequireCert either, unless you are using client certificates to authenticate to the site. Are you setting these when you configure the site using IIS?
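A trimmed sketch of what that suggestion would look like, keeping the original Id and attributes and dropping only the flags called out above (verify against the WiX iis extension schema for your WiX version):

```xml
<!-- Write/Execute and the client-certificate SSL flags removed -->
<iis:WebDirProperties Id="MyWebSite_Properties" AnonymousAccess="yes"
    WindowsAuthentication="no" DefaultDocuments="service1.svc"
    AccessSSL="yes" AccessSSL128="yes"
    Read="yes" Script="yes" />
```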