I found this URL for best practices for Azure resource naming conventions. It has a list of prefixes based on resource type. I don't find a prefix for Azure SQL Server in it. It has a prefix for Azure SQL Database, but not for SQL Server. Is there a 'standard' naming convention for Azure SQL Server?
I think that even though a naming convention for Azure SQL Server is not explicitly specified in the URL, by extrapolating from the conventions for the resources that are listed, we can say that the prefix for an Azure SQL Server resource should be sqls-. Accordingly, the format for the resource name would be sqls-<App Name>-<Environment> (for example, sqls-payroll-prod for a production server of a hypothetical 'payroll' application).
I have been working with Apache Ignite in my organization.
Somewhere I have read that Apache Ignite is tightly coupled with the H2 database.
Can I change the H2 database to some other database in Apache Ignite?
Ignite does not use H2 to store or index data; it only uses H2's query parser, query planner, and serialization, with a lot of customization on top of each.
So you can't replace the H2 dependency, but if what you really want is to store data in a third-party database, then you can certainly do that via 3rd Party Persistence, as sketched below.
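To make that concrete, here is a minimal, hedged sketch of what 3rd Party Persistence can look like in Java: a CacheStore is plugged into the cache configuration with read-through and write-through enabled, so cache misses and puts are propagated to an external relational database. The JDBC URL, the PERSON table, and the class names below are assumptions for illustration only, not part of any standard Ignite setup.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import javax.cache.integration.CacheLoaderException;
import javax.cache.integration.CacheWriterException;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

/** Hypothetical cache store persisting entries to a PERSON table in an external RDBMS. */
public class PersonStore extends CacheStoreAdapter<Long, String> {
    // Assumed connection string -- point it at whichever database you actually use.
    private static final String URL = "jdbc:sqlserver://localhost;databaseName=ignite_demo;user=app;password=secret";

    @Override public String load(Long key) {
        try (Connection c = DriverManager.getConnection(URL);
             PreparedStatement ps = c.prepareStatement("SELECT name FROM PERSON WHERE id = ?")) {
            ps.setLong(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;   // null means "not in the database"
            }
        }
        catch (Exception e) {
            throw new CacheLoaderException(e);
        }
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends String> entry) {
        try (Connection c = DriverManager.getConnection(URL);
             PreparedStatement upd = c.prepareStatement("UPDATE PERSON SET name = ? WHERE id = ?")) {
            upd.setString(1, entry.getValue());
            upd.setLong(2, entry.getKey());
            if (upd.executeUpdate() == 0) {                   // no existing row -> insert instead
                try (PreparedStatement ins = c.prepareStatement("INSERT INTO PERSON (id, name) VALUES (?, ?)")) {
                    ins.setLong(1, entry.getKey());
                    ins.setString(2, entry.getValue());
                    ins.executeUpdate();
                }
            }
        }
        catch (Exception e) {
            throw new CacheWriterException(e);
        }
    }

    @Override public void delete(Object key) {
        try (Connection c = DriverManager.getConnection(URL);
             PreparedStatement ps = c.prepareStatement("DELETE FROM PERSON WHERE id = ?")) {
            ps.setLong(1, (Long) key);
            ps.executeUpdate();
        }
        catch (Exception e) {
            throw new CacheWriterException(e);
        }
    }

    public static void main(String[] args) {
        CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("personCache");
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonStore.class));
        cfg.setReadThrough(true);   // cache misses fall through to the RDBMS
        cfg.setWriteThrough(true);  // puts and removes are propagated to the RDBMS

        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache(cfg).put(1L, "Alice"); // stored in Ignite and in PERSON
        }
    }
}

The H2 machinery for SQL parsing and planning stays in place; only the storage of the cache data is delegated to the external database.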
I am new to Magnolia CMS and the Apache Jackrabbit content repository concepts.
There is a web application which uses Magnolia CMS. Magnolia uses a SQL Server 2012 database as its persistence manager.
The content repository is implemented with Apache Jackrabbit. There are two separate configurations of Magnolia CMS used for the application, referred to as the public and author instances.
We are now trying to replace the existing Magnolia CMS with a custom ASP.NET MVC 5 application that provides all the same functionality.
I analysed the tables in the SQL Server database and found that the data is stored in the form of Node_ID and Bundle_Data columns, which are very difficult to analyse.
In short, it is not easy to interpret.
Based on the custom CMS, a new database model for the author instance (SQL Server 2012) has been developed.
Hence, as part of the migration task, I am trying to migrate the old data stored in SQL Server by the Apache Jackrabbit content repository implementation into a normal SQL Server 2012 database (as per the new database model).
Can anyone tell me whether there are any proven methods or tools available to accomplish this task?
The question is more on the Jackrabbit side than on the Magnolia side, especially since you want to replace Magnolia entirely, not just the persistence layer:
We are now trying to replace the existing Magnolia CMS with a custom ASP.NET MVC 5 application that provides all the same functionality.
My question back to you, though, is whether you really want to replace Jackrabbit entirely, or keep using Jackrabbit with your ASP.NET application but with an MS SQL Server datastore (which would be my personal suggestion). Otherwise you will be giving up all the benefits that Jackrabbit provides.
Jackrabbit does support SQL Server, and I would suggest using it.
https://wiki.apache.org/jackrabbit/DataStore#Configuration-1:
Currently supported are: db2, derby, h2, mssql, mysql, oracle,
sqlserver.
Developing a WebCMS with just ASP.NET and SQL Server, without a content repository layer in between, sounds like developing from scratch everything that a WebCMS usually comes with, especially if you want all the functionality that Magnolia offers (versioning, history, search, etc.).
You can check the details regarding the Jackrabbit data store here: http://wiki.apache.org/jackrabbit/DataStore. I am wondering, though, why you or your customer would want to change the data store of the content repository to SQL Server. I guess you are not speaking of using the database only for the persistence of the metadata, but really to store the binary content as well (a mistake that, by the way, OpenCms, another Java-based open source WebCMS, made in its architecture design, imho).
Note that usually large files are not stored in the database itself (with Magnolia), but on the file system.
https://wiki.magnolia-cms.com/display/WIKI/Setting+up+a+Jackrabbit+persistence+manager#SettingupaJackrabbitpersistencemanager-Datastorageandbackup:
BLOBs are not by default stored in the database when they exceed a
certain threshold defined in your Jackrabbit configuration - instead
they are saved on the file system. The default threshold used by a
Magnolia installation is 1024 bytes. All files above the defined
threshold are put onto the filesystem and not in the database.
In case you really want to get rid of Jackrabbit entirely, use only SQL Server as the persistence layer, and store all binary content in it regardless of size (not recommended), I would write a custom export/import script. It would query the Jackrabbit repository (e.g. via the standard CMIS protocol), take the content from the file system, read it as a FileInputStream, and write it to the target SQL Server database (example: http://www.java2s.com/Code/Java/Database-SQL-JDBC/StoreBLOBsdataintodatabase.htm). This would be my suggested method.
I don't think there are any out-of-the-box tools for that.
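As a rough sketch of the import side of such a script — hedged, since the connection string, target table, columns, and export directory below are hypothetical and would have to match your new database model:

import java.io.FileInputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.stream.Stream;

/** Hypothetical importer: reads exported binaries from disk and stores them as BLOBs in SQL Server. */
public class ContentImporter {
    public static void main(String[] args) throws Exception {
        // Assumed connection string and target table -- adjust to the new database model.
        String url = "jdbc:sqlserver://localhost;databaseName=NewCms;user=cms;password=secret";
        Path exportDir = Paths.get("/data/jackrabbit-export");

        try (Connection con = DriverManager.getConnection(url);
             PreparedStatement ps = con.prepareStatement("INSERT INTO Content (Name, Data) VALUES (?, ?)");
             Stream<Path> files = Files.walk(exportDir)) {

            files.filter(Files::isRegularFile).forEach(file -> {
                try (FileInputStream in = new FileInputStream(file.toFile())) {
                    ps.setString(1, exportDir.relativize(file).toString()); // logical path used as the name
                    ps.setBinaryStream(2, in, Files.size(file));            // stream the binary content as a BLOB
                    ps.executeUpdate();
                }
                catch (Exception e) {
                    throw new RuntimeException("Failed to import " + file, e);
                }
            });
        }
    }
}

The export side would walk the Jackrabbit repository (via CMIS or the JCR API) and dump each node's binary property and metadata to that directory, or write to the database directly instead of going through the file system.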
I'm setting up a multi-tenant database and came across the following blog post on federations: SQL Azure Multi Tenant
They write about assigning a predicate to filter data between tenants:
In a single-tenant app, the query logic in the application is coded with the assumption that all data in a database belongs to one tenant. With multi-tenant apps that work with identical schemas, refactored code simply injects tenant_id into the schema (tables, indexes etc) and every query the app issues contains the tenant_id=? predicate. In a federation, where tenant_id is the federation key, you are asked to still implement the schema changes. However, federations provide a connection type called a FILTERING connection that automatically injects this tenant_id predicate without requiring app refactoring. Our data-dependent routing sets up a FILTERING connection by default. Here is how:
USE FEDERATION orders_federation(tenant_id=155) WITH RESET, FILTERING=ON
My question is: is this just a SQL Azure thing, or can this be accomplished with any SQL Server instance?
Thanks in advance
Federations are available only on SQL Azure.
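For illustration, this is roughly how an application would open such a filtering connection against SQL Azure over plain JDBC; the server name, credentials, federation name, tenant id, and the dbo.orders table below are assumptions, not anything specific to your setup:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/** Illustration only: routing a SQL Azure connection to one tenant's federation member. */
public class FilteringConnectionExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical SQL Azure connection string pointing at the federation root database.
        String url = "jdbc:sqlserver://myserver.database.windows.net;databaseName=orders;user=app@myserver;password=secret";

        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement()) {

            // Route this connection to tenant 155's federation member and turn filtering on.
            // With FILTERING=ON, SQL Azure adds the tenant_id predicate to queries automatically.
            st.execute("USE FEDERATION orders_federation(tenant_id=155) WITH RESET, FILTERING=ON");

            // No explicit tenant_id filter needed -- the filtering connection scopes the rows.
            try (ResultSet rs = st.executeQuery("SELECT order_id, order_date FROM dbo.orders")) {
                while (rs.next())
                    System.out.println(rs.getLong("order_id") + " " + rs.getDate("order_date"));
            }
        }
    }
}

On an ordinary SQL Server instance the USE FEDERATION statement does not exist, so the tenant_id predicate has to stay in your own query logic.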
Is it possible to search the metadata of SQL databases (for example, extended properties) via SharePoint, maybe using FAST?
You can use SharePoint Business Connectivity Services (BCS) to define the structure of your remote SQL database and then crawl it with search. An intro to this is here: http://blogs.msdn.com/b/taj/archive/2010/08/24/searching-external-systems-using-sharepoint-2010-business-connectivity-services-bcs-within-throttling-limits.aspx
I'm in the process of designing a new database for an application. I'd like to be mindful of security from the start (which should be the norm!). Has anyone got a link to a resource describing the best way to use schemas to implement good security?
By using schemas, I mean not just dumping everything under the default dbo schema. Surely there's a set of schema best practices out there? I can't find it if there is...
Security Enhancements in SQL Server 2005: Schema
http://www.sql-server-performance.com/articles/dba/authorization_2005_p1.aspx
SQL Server Best Practices: User-Defined Schemas
http://blogs.msdn.com/buckwoody/archive/2009/06/18/sql-server-best-practices-user-defined-schemas.aspx
...And the obligatory MSDN reference:
http://msdn.microsoft.com/en-us/library/ms190387.aspx
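As a rough, hedged illustration of the general idea behind schema-based security — grouping objects into a non-dbo schema and granting permissions at the schema level rather than table by table — here is a small sketch; the object names and the connection string are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

/** Hedged sketch: a schema as a permission boundary instead of piling everything into dbo. */
public class SchemaSecuritySketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string -- adjust to your server and database.
        String url = "jdbc:sqlserver://localhost;databaseName=AppDb;user=dba;password=secret";

        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement()) {

            // Instead of dumping everything into dbo, give each functional area its own schema...
            st.execute("CREATE SCHEMA Sales AUTHORIZATION dbo");
            st.execute("CREATE TABLE Sales.Orders (OrderId INT PRIMARY KEY, Amount DECIMAL(10,2))");

            // ...and grant permissions on the schema as a whole, not per table,
            // so new tables added to Sales are covered automatically.
            st.execute("CREATE ROLE SalesReaders");
            st.execute("GRANT SELECT ON SCHEMA::Sales TO SalesReaders");
        }
    }
}

The benefit is that access control follows the functional grouping of objects rather than individual tables, which keeps permission management simpler as the database grows.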