While trying to install Nextcloud on a web host server, I get this error message when finally creating the admin account and configuring the database information:
Error while trying to create admin user: An exception occurred while
executing 'CREATE TABLE oc_migrations (app VARCHAR(255) NOT NULL,
version VARCHAR(255) NOT NULL, PRIMARY KEY(app, version)) DEFAULT
CHARACTER SET UTF8 COLLATE utf8_bin ENGINE = InnoDB': SQLSTATE[42000]:
Syntax error or access violation: 1071 Specified key was too long; max
key length is 1000 bytes
Is there a way to fix this problem? I am using InfinityFree.net as the web host to test Nextcloud.
Thank you
From Nextcloud's installation manual:
The following is currently required if you’re running Nextcloud together with a MySQL / MariaDB database:
InnoDB storage engine (MyISAM is not supported)
From InfinityFree's knowledge base:
It’s not possible to create InnoDB tables. The InnoDB storage engine for MySQL is not supported on InfinityFree. Only the MyISAM storage engine can be used.
If your script requires the InnoDB storage engine, you need to upgrade your account.
If you do decide to get a premium account, then you'll also need to make sure that innodb_large_prefix is enabled in their my.cnf file.
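If you can run SQL against the upgraded database, a quick sanity check looks something like this (a sketch for MySQL 5.6/5.7 or MariaDB before 10.2; newer versions enable large index prefixes by default and have removed these variables):
SHOW VARIABLES LIKE 'innodb_large_prefix';
SHOW VARIABLES LIKE 'innodb_file_format';
-- With SUPER privileges these can be switched at runtime; otherwise set them
-- under [mysqld] in my.cnf. Nextcloud's tables also need ROW_FORMAT=DYNAMIC
-- (and innodb_file_per_table=ON) for the larger prefix to apply.
SET GLOBAL innodb_large_prefix = ON;
SET GLOBAL innodb_file_format = 'Barracuda';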
I've configured Always Encrypted for my SQL installation; that is, I have a CMK pointing to a Windows keystore key, which in turn is used to decrypt the CEK.
Now I'm trying to think of some nice backup solutions for the CMK.
Currently I have the exact same RSA key configured in Azure; I've confirmed that both keys (the Windows keystore key and the Azure one) work by encrypting with the former and decrypting with the latter.
But the problem I'm having is that if I lose the Windows keystore key, I lose the ability to decrypt the Always Encrypted keys.
The Azure key doesn't "expose" the key, meaning I can encrypt and decrypt with the key, but I can't export it.
When configuring key rotation in SQL you need the "original key".
I've tried to simply make a new CMK in SQL which points to the Azure environment by using "ALTER COLUMN ENCRYPTION KEY", but I get an error when I try to access the data.
My guess is that the CEK contains some metadata linking it to the key that is Windows based.
My question then is, is there a way to manually decrypt the column encryption key using a valid RSA key?
Yes, you can manually decrypt the column encryption key and the column master key, but this requires Always Encrypted with secure enclaves, which is only available with the DC-series hardware configuration together with Microsoft Azure Attestation, and both are offered in only a few locations. So you need to select an Azure region that supports both DC-series hardware and Microsoft Azure Attestation.
Note: DC-series is available in the following regions: Canada Central, Canada East, East US, North Europe, UK South, West Europe, West US.
Choose DC-series while deploying the SQL Database by following the steps below.
Make sure the SQL server is deployed in a DC-series supported location, then click Configure database.
Select Hardware configuration.
Select DC-series, click OK, then Apply, and deploy the database.
Now create an attestation provider using the Azure portal. Search for "attestation" in the search bar and select Microsoft Azure Attestation.
On the Overview tab for the attestation provider, copy the value of the Attest URI property to the clipboard and save it in a file. This is the attestation URL you will need in later steps.
Select Policy on the resource menu on the left side of the window or on the lower pane.
Set Attestation Type to SGX-IntelSDK.
Select Configure on the upper menu.
Set Policy Format to Text. Leave Policy options set to Enter policy.
In the Policy text field, replace the default policy with the below policy.
[ type=="x-ms-sgx-is-debuggable", value==false ]
&& [ type=="x-ms-sgx-product-id", value==4639 ]
&& [ type=="x-ms-sgx-svn", value>= 0 ]
&& [ type=="x-ms-sgx-mrsigner", value=="e31c9e505f37a58de09335075fc8591254313eb20bb1a27e5443cc450b6e33e5"]
=> permit(); }; ```
Configure your database connection in SSMS: click Options and enter the attestation URL you copied earlier.
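If your SSMS version doesn't expose a dedicated field for this, the same settings can usually be supplied on the Additional Connection Parameters tab; the keywords below follow the Microsoft.Data.SqlClient spelling and are a sketch rather than something verified in every SSMS release:
Column Encryption Setting=Enabled; Attestation Protocol=AAS; Enclave Attestation URL=<your Attest URI>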
Using the SSMS instance from the previous step, in Object Explorer, expand your database and navigate to Security > Always Encrypted Keys.
Provision a new enclave-enabled column master key:
Right-click Always Encrypted Keys and select New Column Master Key....
Select your column master key name: CMK1. Make sure you select either Windows Certificate Store (Current User or Local Machine) or Azure Key Vault.
Select Allow enclave computations.
Now encrypt your columns. See the examples below.
ALTER TABLE [HR].[Employees]
ALTER COLUMN [SSN] [char] (11) COLLATE Latin1_General_BIN2
ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [CEK1], ENCRYPTION_TYPE = Randomized, ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
WITH
(ONLINE = ON);
ALTER TABLE [HR].[Employees]
ALTER COLUMN [Salary] [money]
ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = [CEK1], ENCRYPTION_TYPE = Randomized, ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
WITH
(ONLINE = ON);
ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;
Verify the encrypted data.
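For example, with the same HR.Employees columns encrypted above, a query from a session that does not have Column Encryption Setting=Enabled should return ciphertext for SSN and Salary, while a session with it enabled (and access to CMK1) returns plaintext:
-- Run from a connection WITHOUT Column Encryption Setting=Enabled:
-- the encrypted columns come back as varbinary ciphertext.
SELECT TOP (10) [SSN], [Salary] FROM [HR].[Employees];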
To decrypt a column in place (remove its encryption), see the example below.
ALTER TABLE [HR].[Employees]
ALTER COLUMN [SSN] [char](11) COLLATE Latin1_General_BIN2 NOT NULL
WITH (ONLINE = ON);
GO
I am using Hive on an HDInsight Hadoop cluster -- Hadoop 2.7 (HDI 3.6).
We have some old Hive tables that point to storage accounts that no longer exist, but the tables still reference those storage locations; basically, the Hive Metastore still contains references to the deleted storage accounts. If I try to drop such a Hive table, I get an error:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: org.apache.hadoop.fs.azure.AzureException org.apache.hadoop.fs.azure.AzureException: No credentials found for account <deletedstorage>.blob.core.windows.net in the configuration, and its container data is not accessible using anonymous credentials. Please check if the container exists first. If it is not publicly available, you have to provide account credentials.)
Manipulating the Hive Metastore directly is risky, as it could leave the Metastore in an invalid state.
Is there any way to get rid of these orphan tables?
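One workaround that is sometimes suggested (a sketch only -- I have not verified it on HDI 3.6, and the table and path names below are placeholders) is to stop Hive from touching the dead storage account at drop time, either by marking the table external or by repointing its location to storage the cluster can still reach:
-- Make DROP a metadata-only operation (no data deletion is attempted)
ALTER TABLE mydb.orphan_table SET TBLPROPERTIES ('EXTERNAL'='TRUE');
-- Optionally repoint the location to an accessible container first
ALTER TABLE mydb.orphan_table SET LOCATION 'wasb://container@livestorage.blob.core.windows.net/tmp/orphan';
DROP TABLE mydb.orphan_table;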
I'm trying to create a search index on my table in DSE 6.8. This is my table in the test keyspace:
CREATE TABLE users (
username text,
first_name text,
last_name text,
password text,
email text,
last_access timeuuid,
PRIMARY KEY(username));
I tried this query:
CREATE SEARCH INDEX ON test.users;
and this is the response:
InvalidRequest: Error from server: code=2200 [Invalid query] message="Search statements are not supported on this node"
I think there must be something that I should change in some file for DSE to support search statements. I've already set the SOLR_ENABLED in /etc/default/dse to 1. I'm totally new to this and I don't know if there's something wrong with my table or anything else.
Can anyone suggest what might be causing this error? Thanks in advance.
As the error message suggests, you can only create a Search index on DSE nodes running in Search mode.
Check the node's workload by running the command below. It will tell you if the node is running in pure Cassandra mode or Search mode.
$ dsetool status
If you have installed DSE using the binary tarball, it doesn't use /etc/default/dse. Instead, start DSE as a standalone process with the -s flag to run it in Search mode:
$ dse cassandra -s
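If you installed DSE from a Debian/RPM package instead (which is what editing /etc/default/dse implies), make sure that file really contains the line below and then restart the service so the node comes back up with a Search workload:
SOLR_ENABLED=1
$ sudo service dse restart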
Cheers!
I am creating an external table in Hive which is mapped to Azure Blob Storage:
CREATE EXTERNAL TABLE test(id bigint, name string, dob timestamp,
salary decimal(14,4), line_number bigint) STORED AS PARQUET LOCATION
'wasb://(container)@(storage_account).blob.core.windows.net/test'
and I get the exception below:
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got
exception: org.apache.hadoop.fs.azure.AzureException
com.microsoft.azure.storage.StorageException: Server failed to
authenticate the request. Make sure the value of Authorization header
is formed correctly including the signature.)
The storage account I am using here is not the primary storage account attached to the HDInsight cluster.
Could someone help me figure out how to solve this issue?
I was able to resolve this issue by adding the configuration below; I did this through the Ambari server.
HDFS >>Custom core-site
fs.azure.account.key.(storage_account).blob.core.windows.net=(Access Key)
fs.azure.account.keyprovider.(storage_account).blob.core.windows.net=org.apache.hadoop.fs.azure.SimpleKeyProvider
Hive >> Custom hive-env
AZURE_STORAGE_ACCOUNT=(Storage Account name)
AZURE_STORAGE_KEY=(Access Key)
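For a one-off session, the same account key can also be supplied from inside Hive rather than cluster-wide (a sketch; some secured clusters restrict which properties may be set this way):
SET fs.azure.account.key.(storage_account).blob.core.windows.net=(Access Key);
CREATE EXTERNAL TABLE test(id bigint, name string, dob timestamp,
salary decimal(14,4), line_number bigint) STORED AS PARQUET LOCATION
'wasb://(container)@(storage_account).blob.core.windows.net/test';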
After installing WSO2 API Manager and making a small change to the JDBC configuration to use our MySQL server, everything "seems" to work except OAuth.
In the Carbon web UI, when I click on "OAuth" in the left menu, I get an error message saying:
System Error Occurred - Error occurred while reading OAuth application data
I looked at another post and then at ${WSO2_IS_HOME}/repository/conf/identity.xml, where I saw that I had an entry <skipdbschemacreation>true</skipdbschemacreation>.
I tried changing it to false, but nothing changed...
Has anybody had this problem with WSO2 API Manager?
Any idea how to set up OAuth in API Manager?
Do I have to install WSO2 Identity Server?
---- Update 1 ----
It seems that changing this flag to false creates a problem with our DB, as we now get an error message. Resetting the flag to true does not make the DB error go away...
The error message says:
[2012-08-19 15:40:13,649] ERROR - JDBCResourceDAO Failed to delete the resource with id 688. Cannot execute statement: impossible to write to binary log since BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine limited to row-based logging. InnoDB is limited to row-logging when transaction isolation level is READ COMMITTED or READ UNCOMMITTED.
java.sql.SQLException: Cannot execute statement: impossible to write to binary log since BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine limited to row-based logging. InnoDB is limited to row-logging when transaction isolation level is READ COMMITTED or READ UNCOMMITTED.
and later the WSO2 launch script said:
[2012-08-19 15:40:13,654] FATAL - CarbonServerManager WSO2 Carbon initialization Failed
org.wso2.carbon.registry.core.exceptions.RegistryException: Failed to delete the resource with id 688. Cannot execute statement: impossible to write to binary log since BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine limited to row-based logging. InnoDB is limited to row-logging when transaction isolation level is READ COMMITTED or READ UNCOMMITTED.
A little later in the same launch log we have:
org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:340)
Caused by: java.sql.SQLException: Cannot execute statement: impossible to write to binary log since BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine limited to row-based logging. InnoDB is limited to row-logging when transaction isolation level is READ COMMITTED or READ UNCOMMITTED.
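For what it's worth, that particular MySQL error means statement-based binary logging is clashing with InnoDB running under READ COMMITTED. On a MySQL server you administer yourself, switching the binlog format usually clears it (a sketch; it needs SUPER privileges, or access to my.cnf for a permanent change):
SET GLOBAL binlog_format = 'MIXED';
-- or permanently, under [mysqld] in my.cnf:
-- binlog_format = MIXED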
The skipdbschemacreation property was specifically introduced after the introduction of the WSO2 APIManager. This was because the WSO2 APIManager has a separate SQL script to create all the required tables (including the OAuth 2.0 related tables), so it was necessary to skip the schema creation step of the OAuth component on server startup. Thus it is correct that this property is set to 'true' by default in the WSO2 APIManager and to 'false' in the WSO2 Identity Server.
The problem you are facing is most likely that the configuration pointing to your MySQL database is incorrect. If you are using a MySQL database, the place to configure its settings is {WSO2_APIMANAGER_HOME}/repository/conf/datasources/master-datasources.xml. Change the settings of the datasource named WSO2AM_DB; this is the datasource which contains the OAuth2.0 related tables.
E.g.
<datasource>
    <name>WSO2AM_DB</name>
    <description>The datasource used for API Manager database</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/wso2am_db</url>
            <username>admin</username>
            <password>admin#123</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
I presume you've already created the required database schema by running the mysql.sql script provided in {WSO2_APIMANAGER_HOME}/dbscripts/apimgt/mysql.sql.
To create a MySQL database schema for WSO2 APIManager:
Navigate to the location where you have the mysql script for creating the WSO2AM_DB.
Open a command prompt at that location and log in to MySQL:
mysql -u root -p
Create a database, then create a user and grant access:
create database apimgt;
GRANT ALL ON apimgt.* TO 'admin'@'localhost' IDENTIFIED BY "admin";
Run the mysql.sql script. This will configure the database.
use apimgt;
source mysql.sql;
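A quick way to confirm that the script created the OAuth-related tables (the table name prefix below comes from the stock APIManager script and may differ between versions):
use apimgt;
show tables like 'IDN_OAUTH%';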