C#: RedisCache Manager changing database number

I am using RedisCache for caching data in a C# application. While configuring the RedisCache in app.config we set database="0", but if we change the database to any other number between 1 and 15, it always falls back to 0 (i.e. db0) as the default.
Is there any way to change the database number?
<redisCacheClient allowAdmin="false" ssl="false" connectTimeout="3000" poolSize="5"
                  database="2" syncTimeout="1000" abortOnConnectFail="true">
  <serverEnumerationStrategy mode="All" targetRole="Any"
                             unreachableServerAction="IgnoreIfOtherAvailable" />
  <hosts>
    <add host="localhost" cachePort="6379" />
  </hosts>
</redisCacheClient>
Thanks,
Raajesh K A
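For reference, the underlying StackExchange.Redis client lets you pick a non-default database either through the connection string or per GetDatabase call. Below is a minimal sketch assuming plain StackExchange.Redis (not the redisCacheClient wrapper shown above) and a local server on port 6379:

using System;
using StackExchange.Redis;

class RedisDatabaseSelection
{
    static void Main()
    {
        // defaultDatabase sets the database returned by GetDatabase() when
        // no index is passed; without it, db0 is used.
        var muxer = ConnectionMultiplexer.Connect("localhost:6379,defaultDatabase=2");

        // Passing an index explicitly always wins over the default (db0).
        IDatabase db = muxer.GetDatabase(2);

        db.StringSet("greeting", "stored in db2");
        Console.WriteLine(db.StringGet("greeting"));
    }
}

If the wrapper honours its database attribute, the same index should apply; otherwise passing the index to GetDatabase as above is a workable fallback.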

Related

Visual Studio 2015 - Change Setting or Variable at time of Publish

Is it possible to change a setting or variable when the application is published, or is there some sort of condition I can run an IF/THEN against?
For example, I want to change the way the log files are written when I publish, and I often forget to make that change before publishing.
Web.Live.Config:
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
<!--
In the example below, the "SetAttributes" transform will change the value of
"connectionString" to use "ReleaseSQLServer" only when the "Match" locator
finds an attribute "name" that has a value of "MyDB".
-->
<appSettings>
<add key="ClaimPackPath" value="C:\\inetpub\\wwwroot\\Application\\ClaimPacks\\" xdt:Locator="Match(key)" xdt:Transform="Replace" />
</appSettings>
</configuration>
Web.Debug.Config:
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
<!--
In the example below, the "SetAttributes" transform will change the value of
"connectionString" to use "ReleaseSQLServer" only when the "Match" locator
finds an attribute "name" that has a value of "MyDB".
-->
<appSettings>
<add key="ClaimPackPath" value="C:\\Debug\\wwwroot\\Application\\ClaimPacks\\" xdt:Locator="Match(key)" xdt:Transform="Replace" />
</appSettings>
</configuration>
Then in the application you can request the variable like so:
string filepath = ConfigurationManager.AppSettings["ClaimPackPath"];
And it will change according to whichever publish profile you choose at publish time :)
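If the concern is forgetting the right transform at publish time, a small startup check along these lines (the validation logic and exception message are just illustrative) makes a wrong or missing path fail fast:

using System.Configuration;
using System.IO;

static class ClaimPackConfig
{
    // Reads the transformed setting and fails fast if the active publish
    // profile left a path that does not exist on the target machine.
    public static string GetClaimPackPath()
    {
        string path = ConfigurationManager.AppSettings["ClaimPackPath"];

        if (string.IsNullOrEmpty(path) || !Directory.Exists(path))
        {
            throw new ConfigurationErrorsException(
                "ClaimPackPath is missing or points to a non-existent folder: " + path);
        }

        return path;
    }
}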

SPWebConfigModification - Features overwriting each other's modifications

I am using log4net with SharePoint 2010. I have a feature which automatically adds the log4net config (in Error mode) when my solution is deployed, using the following code:
SPWebService service = SPWebService.ContentService;
service.WebConfigModifications.Clear();
//ADD log4Net config section
service.WebConfigModifications.Add(new SPWebConfigModification()
{
Path = "configuration/configSections",
Name = "section[#name='log4net']",
Sequence = 0,
Owner = CREATE_NAME,
Type = SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode,
Value = string.Format(@"<section name='log4net' type='log4net.Config.Log4NetConfigurationSectionHandler, log4net, Version={0}, Culture=neutral, PublicKeyToken={1}' />", LOG4NET_VERSION, LOG4NET_PUBLIC_KEY_TOKEN)
});
string log4netConfig = @"<log4net>
<appender name='RollingFileAppender' type='log4net.Appender.RollingFileAppender'>
<file value='C:\\logs\\Logger.log' />
<appendToFile value='true' />
<rollingStyle value='Composite' />
<datePattern value='yyyyMMdd' />
<maxSizeRollBackups value='200' />
<maximumFileSize value='50MB' />
<layout type='log4net.Layout.PatternLayout'>
<conversionPattern value='%d [%t] %-5p %c [%x] <%X{auth}> - %m%n' />
</layout>
</appender>
<root>
<level value='ERROR' />
<appender-ref ref='RollingFileAppender' />
</root>
</log4net>";
//add error default config
service.WebConfigModifications.Add(new SPWebConfigModification()
{
Path = "configuration",
Name = "log4net",
Sequence = 0,
Owner = CREATE_NAME,
Type = SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode,
Value = log4netConfig
});
service.Update();
service.ApplyWebConfigModifications();
I wanted to create another feature which overrides the Error mode of log4net and changes it to DEBUG, so that the end user does not have to modify the web.config manually.
The problem is when the second feature is activated, it deletes everything added by the first feature.
Is this standard behaviour? Does any feature that is activated delete the changes made by other features?
EDIT 2
Steps to replicate
Create 2 features; both of them should add different entries to the web config.
Activate feature 1 - feature 1 changes are in the web config.
Activate feature 2 - feature 2 changes are in the web config, but feature 1 changes are gone.
Deactivate both features.
Activate feature 2 - feature 2 changes are in the web config.
Activate feature 1 - feature 1 changes are in the web config, but feature 2 changes are gone.
The reason the first feature's web config changes are removed is this code block:
service.WebConfigModifications.Clear();
Basically you're telling SharePoint to clear every modification registered against the web config before adding the items from your feature. Removing that line should fix your issue.
If you need to do other things when activating or deactivating a feature, try using feature receivers. A good example of getting started with web config modifications and feature receivers can be found here.
Also, to get a better idea of why the code block above is causing problems, you can look at the question of another user who had a similar issue here.
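To make the feature receiver idea concrete, here is a rough sketch (the class name and Owner string are made up for illustration) that removes only its own modifications on deactivation instead of calling Clear():

using System.Collections.Generic;
using System.Linq;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

public class Log4NetConfigFeatureReceiver : SPFeatureReceiver
{
    // Hypothetical owner tag - use the same value when adding the modifications.
    private const string OwnerName = "Log4NetConfigFeature";

    public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
    {
        SPWebService service = SPWebService.ContentService;

        // Collect only the modifications this feature registered (matched by Owner),
        // leaving entries added by other features untouched.
        List<SPWebConfigModification> ownModifications = service.WebConfigModifications
            .Where(m => m.Owner == OwnerName)
            .ToList();

        foreach (SPWebConfigModification modification in ownModifications)
        {
            service.WebConfigModifications.Remove(modification);
        }

        service.Update();
        service.ApplyWebConfigModifications();
    }
}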
If I understand this correctly, what's happening is that you add a web config modification, but it isn't technically "added" yet because it hasn't been applied. When you create the second modification, it overwrites the first modification you created, which in turn erases the first modification - the one that had actually been applied.

How to read CSV file and insert data into PostgreSQL using Mule ESB, Mule Studio

I am very new to Mule Studio.
I am facing a problem. I have a requirement where I need to insert data from a CSV file into a PostgreSQL database using Mule Studio.
I am using Mule Studio CE (version 1.3.1). I checked on Google and found that DataMapper can be used for this, but it is available only in the EE edition, so I cannot use it.
I also searched the web and found an article, "Using Mule Studio to read Data from PostgreSQL (Inbound) and write it to File (Outbound) - Step by Step approach".
That seems feasible, but my requirement is just the opposite of the article: I need File as the inbound endpoint and the database as the outbound component.
What is the way to do so?
Any step by step help (like what components to use) and guidance will be greatly appreciated.
Here is an example that inserts a two-column CSV file:
<configuration>
<expression-language autoResolveVariables="true">
<import class="org.mule.util.StringUtils" />
<import class="org.mule.util.ArrayUtils" />
</expression-language>
</configuration>
<spring:beans>
<spring:bean id="jdbcDataSource" class=" ... your data source ... " />
</spring:beans>
<jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource">
<jdbc:query key="insertRow"
value="insert into my_table(col1, col2) values(#[message.payload[0]],#[message.payload[1]])" />
</jdbc:connector>
<flow name="csvFileToDatabase">
<file:inbound-endpoint path="/tmp/mule/inbox"
pollingFrequency="5000" moveToDirectory="/tmp/mule/processed">
<file:filename-wildcard-filter pattern="*.csv" />
</file:inbound-endpoint>
<!-- Loads the whole file into RAM - won't work for big files! -->
<file:file-to-string-transformer />
<!-- Split each row, dropping the first one (header) -->
<splitter
expression="#[rows=StringUtils.split(message.payload, '\n\r');ArrayUtils.subarray(rows,1,rows.size())]" />
<!-- Transform each CSV row into an array -->
<expression-transformer expression="#[StringUtils.split(message.payload, ',')]" />
<jdbc:outbound-endpoint queryKey="insertRow" />
</flow>
In order to read a CSV file and insert its data into PostgreSQL using Mule, you need to follow these steps.
You need the following things as prerequisites:
PostgreSQL
PostgreSQL JDBC driver
Anypoint Studio IDE
A database created in PostgreSQL
Then configure the PostgreSQL JDBC driver in Global Element Properties inside Studio.
Create the Mule flow in Anypoint Studio as follows:
Step 1: Wrap the CSV file source in a File component
Step 2: Convert between object arrays and strings
Step 3: Split each row
Step 4: Transform each CSV row into an array
Step 5: Dump into the destination database
I would like to suggest DataWeave.
Steps:
Read the file using an FTP connector/endpoint.
Transform it using DataWeave.
Use the Database connector to store the data in the DB.

Expiration of NHibernate query cache

Is it possible to configure expiration of NHibernate's query cache?
For the second-level cache I can do it from nhibernate.cfg.xml, but I can't find a way for the SQL query cache.
EDIT:
ICriteria query = CreateCriteria()
.Add(Expression.Eq("Email", identifiant))
.SetCacheable(true)
.SetCacheRegion("X");
<syscache>
<cache region="X" expiration="10" priority="1" />
</syscache>
Yes, we can set cache expiration via region. Adjust the query like this:
criteria.SetCacheable(true)
.SetCacheMode(CacheMode.Normal)
.SetCacheRegion("LongTerm");
And put a similar configuration into the web.config file:
<configSections>
<section name="syscache" type="NHibernate.Caches.SysCache.SysCacheSectionHandler, NHibernate.Caches.SysCache" requirePermission="false" />
</configSections>
<syscache>
<cache region="LongTerm" expiration="180" priority="5" />
<cache region="ShortTerm" expiration="60" priority="3" />
</syscache>
EDIT: I am just adding this link, Class-cache not used when getting entity by criteria, to make clear what I mean by the SQL query cache; in the linked answer I explain that topic.
Just for clarity: the configuration of the NHibernate "session-factory" must contain:
<property name="cache.use_query_cache">true</property>
This switch makes the query cache work. More details: http://nhibernate.info/doc/nh/en/index.html#performance-querycache
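Putting those pieces together, here is a small sketch (the User entity and repository method are illustrative, not from the question) showing the query cache switch set in code plus a cached criteria query bound to the LongTerm region configured above:

using System.Collections.Generic;
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Criterion;

// Illustrative entity - the real mapping is defined elsewhere.
public class User
{
    public virtual int Id { get; set; }
    public virtual string Email { get; set; }
}

public static class CachedQueryExample
{
    public static Configuration EnableQueryCache(Configuration cfg)
    {
        // Code equivalent of <property name="cache.use_query_cache">true</property>.
        cfg.SetProperty(NHibernate.Cfg.Environment.UseQueryCache, "true");
        return cfg;
    }

    public static IList<User> FindByEmail(ISessionFactory factory, string email)
    {
        using (ISession session = factory.OpenSession())
        {
            // Cached query; results live in the "LongTerm" syscache region
            // and expire after the 180 seconds configured above.
            return session.CreateCriteria<User>()
                .Add(Restrictions.Eq("Email", email))
                .SetCacheable(true)
                .SetCacheRegion("LongTerm")
                .List<User>();
        }
    }
}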

EclipseLink very slow on inserting data

I'm using the latest EclipseLink version with MySQL 5.5 (table type InnoDB). I'm inserting about 30,900 records (possibly more) at a time.
The problem is that the insert performance is pretty poor: it takes about 22 seconds to insert all the records (compared with plain JDBC: 7 seconds). I've read that batch writing should help - but it doesn't!?
@Entity
public class TestRecord {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public Long id;
    public int test;
}
The code to insert the records:
factory = Persistence.createEntityManagerFactory("xx_test");
EntityManager em = factory.createEntityManager();
em.getTransaction().begin();
for(int i = 0; i < 30900; i++) {
TestRecord record = new TestRecord();
record.test = 21;
em.persist(record);
}
em.getTransaction().commit();
em.close();
And finally my EclipseLink configuration:
<persistence-unit name="xx_test" transaction-type="RESOURCE_LOCAL">
<class>com.test.TestRecord</class>
<properties>
<property name="javax.persistence.jdbc.driver" value="com.mysql.jdbc.Driver" />
<property name="javax.persistence.jdbc.url" value="jdbc:mysql://localhost:3306/xx_test" />
<property name="javax.persistence.jdbc.user" value="root" />
<property name="javax.persistence.jdbc.password" value="test" />
<property name="eclipselink.jdbc.batch-writing" value="JDBC" />
<property name="eclipselink.jdbc.cache-statements" value="true"/>
<property name="eclipselink.ddl-generation.output-mode" value="both" />
<property name="eclipselink.ddl-generation" value="drop-and-create-tables" />
<property name="eclipselink.logging.level" value="INFO" />
</properties>
</persistence-unit>
What am I doing wrong? I've tried several settings, but nothing seems to help.
Thanks in advance for helping me! :)
-Stefan
Another thing is to add ?rewriteBatchedStatements=true to the connection URL used by the connector.
This brought the execution of about 120,300 inserts down from roughly 60 seconds to about 30 seconds.
JDBC batch writing improves performance drastically; please try it.
E.g.: <property name="eclipselink.jdbc.batch-writing" value="JDBC" />
@GeneratedValue(strategy = GenerationType.IDENTITY)
Switch to TABLE sequencing; IDENTITY is never recommended and is a major performance issue.
See,
http://java-persistence-performance.blogspot.com/2011/06/how-to-improve-jpa-performance-by-1825.html
I seem to remember that MySQL may not support batch writing without some database configuration as well; there was another post on this - I forget the URL, but you could probably search for it.
Probably the biggest difference besides the mapping conversion is the caching. By default EclipseLink is placing each of the entities into the persistence context (EntityManager) and then during the finalization of the commit it needs to add them all to the cache.
One thing to try for now is:
measure how long an em.flush() call takes after the loop but before the commit. Then, if you want, you could call em.clear() after the flush so that the newly inserted entities are not merged into the cache.
Doug