WSO2 API Manager: OAuth not working

After installing WSO2 API Manager and making a small change to the JDBC configuration to use our MySQL server, everything "seems" to work except OAuth.
In the Carbon web UI, when I click "OAuth" in the left menu, I get an error message saying:
System Error Occurred - Error occurred while reading OAuth application data
I looked at other posts and saw this. I looked at ${WSO2_IS_HOME}/repository/conf/identity.xml, where I found the entry <skipdbschemacreation>true</skipdbschemacreation>.
I tried changing it to false, but nothing changed...
Has anybody had this problem with WSO2 API Manager?
Any idea how to set up OAuth in API Manager?
Do I have to install WSO2 Identity Server?
---- UPDATE 1 ----
It seems that changing this flag to false creates a problem with our DB, as we now get an error message. When the flag is reset to true, the DB error is still there...
The error message says:
[2012-08-19 15:40:13,649] ERROR - JDBCResourceDAO Failed to delete the resource with id 688. Cannot execute statement: impossible to write to binary log since BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine limited to row-based logging. InnoDB is limited to row-logging when transaction isolation level is READ COMMITTED or READ UNCOMMITTED.
java.sql.SQLException: Cannot execute statement: impossible to write to binary log since BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine limited to row-based logging. InnoDB is limited to row-logging when transaction isolation level is READ COMMITTED or READ UNCOMMITTED.
and later the WSO2 launch script says:
[2012-08-19 15:40:13,654] FATAL - CarbonServerManager WSO2 Carbon initialization Failed
org.wso2.carbon.registry.core.exceptions.RegistryException: Failed to delete the resource with id 688. Cannot execute statement: impossible to write to binary log since BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine limited to row-based logging. InnoDB is limited to row-logging when transaction isolation level is READ COMMITTED or READ UNCOMMITTED.
A little later in the same launch output we have:
org.eclipse.osgi.framework.eventmgr.EventManager$EventThread.run(EventManager.java:340)
Caused by: java.sql.SQLException: Cannot execute statement: impossible to write to binary log since BINLOG_FORMAT = STATEMENT and at least one table uses a storage engine limited to row-based logging. InnoDB is limited to row-logging when transaction isolation level is READ COMMITTED or READ UNCOMMITTED.
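(Aside: the two MySQL settings the error complains about can be inspected, and changed if needed, like this; this is only a sketch assuming MySQL 5.x and an account with SUPER privileges, and any change should also be persisted in my.cnf so it survives a restart.)
-- show the current binary log format and isolation level
SHOW VARIABLES LIKE 'binlog_format';
SHOW VARIABLES LIKE 'tx_isolation';
-- switching to row-based (or mixed) logging avoids the conflict described in the error
SET GLOBAL binlog_format = 'ROW';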

The skipdbschemacreation property was specifically introduced after the introduction of the WSO2 API Manager. This was because the WSO2 API Manager has a separate SQL script that creates all the required tables (including the OAuth 2.0 related tables), so it was necessary to skip the schema creation step of the OAuth component on server startup. Thus it is correct that this property is set to 'true' by default in the WSO2 API Manager and to 'false' in the WSO2 Identity Server.
The problem you are facing is most likely that the configuration pointing to your MySQL database is incorrect. If you are using a MySQL database, the place to configure its settings is {WSO2_APIMANAGER_HOME}/repository/conf/datasources/master-datasources.xml. Change the settings of the datasource named WSO2AM_DB. This is the datasource that holds the OAuth 2.0 related tables.
E.g.
<datasource>
    <name>WSO2AM_DB</name>
    <description>The datasource used for API Manager database</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/wso2am_db</url>
            <username>admin</username>
            <password>admin#123</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
I presume you've already created the required database schema by running the mysql.sql script provided in {WSO2_APIMANAGER_HOME}/dbscripts/apimgt/mysql.sql.
To create a MySQL database schema for WSO2 APIManager:
Navigate to the location where you have the mysql script for creating the WSO2AM_DB.
Open a command prompt at that location and log in to MySQL from the command prompt:
mysql -u root -p
Create a database, then create a user and grant access:
create database apimgt;
GRANT ALL ON apimgt.* TO 'admin'@'localhost' IDENTIFIED BY 'admin';
Run the mysql.sql script. This will configure the database.
use apimgt;
source mysql.sql;
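As a quick sanity check that the script created the OAuth tables (the exact table names can vary by product version, but they typically start with IDN_OAUTH), you can run:
use apimgt;
show tables like 'IDN_OAUTH%';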

Related

Backup database - SQL Server

I need to make a backup of my SQL Server database. When I try, I get this error:
System.Data.SqlClient.SqlError: Read on "c:..." failed: 23(...)(Data error (cyclic redundancy error))
Now, I'm trying to run this command:
DBCC CheckDB ('MYDATABASE') WITH NO_INFOMSGS, ALL_ERRORMSGS
But I get this error
Msg 8921, Level 16, State 1, Line 18
Check terminated. A failure was detected while collecting facts. Possibly tempdb out of space or a system table is inconsistent. Check previous errors.
What can I do? I just need to make a backup.
I'm using Microsoft SQL Server Management Studio.
First of all, check the service account used for the SQL Server instance in Services.
Ensure the service account has enough permission to read/write at the exact backup location on the physical disk.
Ensure the user (the one you use to log in to the SQL instance) has enough permission to perform a backup.
The final option to recover the data is to create another database with the same (blank) tables on a different machine in a different SQL instance, then export the whole database to the new database using Management Studio (right-click the database > Tasks > Export Data).
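As a quick test of the permission points above, a minimal backup command looks like this (the target path is a placeholder and must be writable by the service account). If the CRC read error still stops the backup, WITH CONTINUE_AFTER_ERROR can, as a last resort, sometimes still produce a (possibly damaged) backup:
BACKUP DATABASE [MYDATABASE]
TO DISK = N'C:\Backups\MYDATABASE.bak'
WITH INIT, STATS = 10;
-- last resort on a damaged database:
-- BACKUP DATABASE [MYDATABASE] TO DISK = N'C:\Backups\MYDATABASE_bad.bak' WITH CONTINUE_AFTER_ERROR;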

getting started with liquibase on snowflake

I am trying to get started with liquibase on snowflake.
I think I am almost there with the liquibase.properties file
driver: net.snowflake.client.jdbc.SnowflakeDriver
classpath: ./liquibase-snowflake-1.0.jar
url: jdbc:snowflake://XXXXXX.us-east-1.snowflakecomputing.com
username: YYYYYYYYY
password: ZZZZZZZZZZ
changeLogFile: mySnowflakeChangeLog.xml
Unfortunately, Liquibase complains about not having a "current database" when trying to create the databasechangelog and databasechangeloglock tables.
Since I do not have access to the SQL script that creates these tables, how do I instruct Liquibase which DATABASE to use?
I pinged an internal team here @Snowflake. They recommended either adding the db=mydb connection parameter to the URL, or setting a default namespace for the user:
alter user mike set default_namespace=mydb
Hope that helps!
I am not an expert in Liquibase, but the JDBC standard allows custom connection properties to be passed in. If Liquibase supports that, you can specify the database as a custom connection property, and the Snowflake JDBC driver will send the database information with the connection request to the server.
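For example, the db connection parameter approach from the first suggestion would look roughly like this in liquibase.properties (MYDB, PUBLIC and MYWH are placeholders for your database, schema and warehouse):
url: jdbc:snowflake://XXXXXX.us-east-1.snowflakecomputing.com/?db=MYDB&schema=PUBLIC&warehouse=MYWH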

SQL server Warning: Fatal error 829 occurred at Oct 10 2019 12:48 PM. Note the error and time, and contact your system administrator

There are two tables against which no INSERT, SELECT, DELETE, or DROP TABLE command will execute; every attempt shows the error below.
The error I'm receiving:
Warning: Fatal error 829 occurred at Oct 10 2019 12:48PM. Note the
error and time, and contact your system administrator.
DROP TABLE [dbo].[tbl_SalesMaster_tmp]
GO
A quick Google search finds a similar thread here; I have extracted the likely solution for easy reference.
Error 829 means there's an I/O subsystem problem, something called a 'hard I/O error': SQL Server asks the OS to read a page and the OS says no, meaning the I/O subsystem couldn't read the page in question.
The CHECKDB output means that it couldn't create the internal database snapshot that it uses to get a transactionally-consistent point-in-time view of the database. There are a number of different causes of this:
There may not be any free space on the volume(s) storing the data files for the database
The SQL service account might not have create-file permissions in the directory containing the data files for the database
If neither of these is the case, you can create your own database snapshot and run DBCC CHECKDB on that. Once you have one, run the following:
DBCC CHECKDB (yourdbname) WITH NO_INFOMSGS, ALL_ERRORMSGS
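If you do need to create the snapshot yourself, it looks roughly like this (the snapshot name and file path are placeholders, and NAME must match the logical name of your database's data file):
CREATE DATABASE yourdbname_snap
ON (NAME = yourdbname_data, FILENAME = N'C:\Snapshots\yourdbname_data.ss')
AS SNAPSHOT OF yourdbname;
DBCC CHECKDB (yourdbname_snap) WITH NO_INFOMSGS, ALL_ERRORMSGS;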
Whatever the results are, you're looking at either restoring from a backup, extracting data to a new database, or running repair. Each involves varying amounts of downtime and data-loss. You're also going to have to do some root-cause analysis to figure out what happened to cause the corruption in the first place.
By the way - do you have page checksums enabled? Have you looked in the SQL error log or Windows application event log for any signs of corruption or things going wrong with the I/O subsystem?

DACPAC deployment fails on EXTERNAL DATA SOURCEd schemas

(Here is a problem with a similar issue:
Publish to SQL Azure fails with 'Cannot drop the external data source' message)
There is this new pattern called Sql Db Elastic Query (Azure).
The gist of it is captured here:
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-elastic-query-overview
Now, I have a SqlProj that defines:
an External Data Source (EDS) and
a Database Scoped Credential (DSC).
To keep passwords out of the script, I am using SQLCMD variables.
I probably have a gazillion views on "external" tables, based on the elastic query pattern.
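(For reference, elastic-query EDS/DSC definitions of this kind typically look roughly like the following; the credential name, location, database name and the SQLCMD variables are placeholders.)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '$(MasterKeyPassword)';
CREATE DATABASE SCOPED CREDENTIAL [ElasticCred]
WITH IDENTITY = 'elastic_user', SECRET = '$(ElasticCredPassword)';
CREATE EXTERNAL DATA SOURCE [XXXX]
WITH (
    TYPE = RDBMS,
    LOCATION = 'myserver.database.windows.net',
    DATABASE_NAME = 'RemoteDb',
    CREDENTIAL = [ElasticCred]
);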
And during DACPAC deployments to SQL Azure, I always get an error on:
Error SQL72014: .Net SqlClient Data Provider: Msg 33165, Level 16, State 1, Line 1 Cannot drop the external data source 'XXXX' because it is used by an external table.
Error SQL72045: Script execution error. The executed script:
DROP EXTERNAL DATA SOURCE [XXXX];
Checking the logs, I realize there are all these views/tables that exist and use this EDS/DSC combo.
The workaround comes with an ever-deepening price.
So the question is, has anyone else hit this problem and found the root cause of this?

Connecting to Oracle11g database from Websphere message broker 6

I am trying a simple INSERT command from a Compute node in WebSphere Message Broker 6.
The data source name, which is provided in the broker's odbc.ini file, is specified in the node properties of the Compute node, and I have written the following ESQL code:
SET TABLE = 'MYTABLE';
SET MYVALUE = 'TESTVALUE';
INSERT INTO Database.TABLE VALUES(MYVALUE);
The connection URL is provided in tnsnames.ora. It is a cluster URL that points to 3 database instances.
When I run the query, I get an exception in the trace saying the table or view does not exist.
But when I connect to the DB using any of the 3 direct URLs, I am able to see the table.
Note: the database is Oracle 11g.
Can anyone explain what is happening?
The problem was that my application was using the same DSN as my broker, and when the broker was created, the username and password provided pointed to a different schema, which does not contain the tables for my application.
The solution was to create a new DSN and use mqsisetdbparams to point it to the correct schema.
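For anyone hitting the same thing, the fix looks roughly like this (broker name, DSN name and credentials are placeholders; the broker has to be restarted to pick up the change):
mqsisetdbparams MYBROKER -n MYNEWDSN -u app_schema_user -p app_schema_pass
mqsistop MYBROKER
mqsistart MYBROKER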