I'm new to programming and I've built projects on XAMPP. There I created the databases in SQL (MariaDB) in two separate files (DML and DDL).
Now, learning Symfony, it looks like I must use an ORM to create the database. Can't I just write the DDL and DML and connect/load them into Symfony instead of using the ORM?
I've spent two days looking for information on how to do that, and all I've found is ORM documentation. I wish I could just do something like this function:
function mod001_conectoBD () {
    $address = "localhost";
    $user = "root";
    $password = "";
    $database = "projectDDL"; // (which I uploaded on phpMyAdmin as projectDDL.sql)
    $link = mysqli_connect( $address, $user, $password, $database );
    if ( !$link ) {
        echo "Fail connection";
    }
    return $link;
}

function mod001_disconnectBD ( $link ) {
    mysqli_close( $link );
}
This is of course just the example I used in my XAMPP project. With it I simply used the uploaded projectDDL.sql and built the app around it. Thanks, and sorry for my ignorance in this matter.
The reason I want this is to build composite primary keys and to edit ID names, which I find very difficult in Symfony.
Imagine, for example, a table that needs 3 foreign keys plus its own id to form a 4-field primary key; I don't know how to make that possible in Symfony.
To connect Symfony to MariaDB you must modify the .env file at the root of your project; in your case it will look something like this:
DATABASE_URL="mysql://root@127.0.0.1:3306/projectDDL"
Symfony (through Doctrine) takes care of creating the database for you, and for each table you have to create an entity.
You can create your database with this command in the shell:
php bin/console doctrine:database:create
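To then create the tables from your entities (rather than importing a DDL file), the usual workflow, assuming the standard Symfony skeleton with the MakerBundle and Doctrine Migrations installed, is roughly:
php bin/console make:entity
php bin/console make:migration
php bin/console doctrine:migrations:migrate
(make:entity generates an entity class, make:migration generates the DDL from your entities, and doctrine:migrations:migrate runs it against the database.)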
If you want to create a primary key made up of several foreign keys, Doctrine allows you to do that; I have already used it in a project. You create an entity with the foreign keys of your tables and specify on each of them that it is also an ID, e.g.:
#[Id, ManyToOne(targetEntity: User::class)]
#[Id, ManyToOne(targetEntity: Comment::class)]
#[Id, ManyToOne(targetEntity: Video::class)]
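To give those attributes some context, here is a minimal sketch of such an entity (the Rating class, property names and join column names are just illustrative, assuming Doctrine ORM 2.x with PHP 8 attributes and existing User, Comment and Video entities):
<?php

namespace App\Entity;

use Doctrine\ORM\Mapping\Entity;
use Doctrine\ORM\Mapping\Id;
use Doctrine\ORM\Mapping\JoinColumn;
use Doctrine\ORM\Mapping\ManyToOne;

#[Entity]
class Rating
{
    // Each association below is flagged with Id, so together they form
    // a composite primary key of three foreign key columns.
    #[Id, ManyToOne(targetEntity: User::class)]
    #[JoinColumn(name: 'user_id', referencedColumnName: 'id')]
    private User $user;

    #[Id, ManyToOne(targetEntity: Comment::class)]
    #[JoinColumn(name: 'comment_id', referencedColumnName: 'id')]
    private Comment $comment;

    #[Id, ManyToOne(targetEntity: Video::class)]
    #[JoinColumn(name: 'video_id', referencedColumnName: 'id')]
    private Video $video;

    public function __construct(User $user, Comment $comment, Video $video)
    {
        $this->user = $user;
        $this->comment = $comment;
        $this->video = $video;
    }
}
Note that the composite-primary-keys tutorial linked below mentions that entities with composite keys cannot use an auto-generated identifier, so if you add a fourth scalar column marked with #[Id] you are expected to assign its value yourself.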
Documentation:
https://www.doctrine-project.org/projects/doctrine-orm/en/2.14/tutorials/composite-primary-keys.html#use-case-1-dynamic-attributes
I'm new to knex and haven't touched RDBMS in years (been in NoSQL land), so bear with me here.
I've got two migration files, one for tracks and one for users (tracks are owned by users). Below are the relevant files:
migrations/20190919103115_users.js
exports.up = function(knex) {
  return knex.schema.createTable('users', table => {
    table.increments('id');
    table.string('email', 50);
    table.string('first_name', 50);
    table.string('last_name', 50);
  });
};

exports.down = function(knex) {
  return knex.schema.dropTable('users');
};
migrations/20190406112728_tracks.js
exports.up = function(knex) {
  return knex.schema.createTable('tracks', table => {
    table.increments('id');
    table.string('name', 140).notNullable();
    table.integer('owner_id').notNullable();
    table
      .foreign('owner_id')
      .references('id')
      .inTable('users')
      .onDelete('CASCADE');
    table.json('metadata');
  });
};

exports.down = function(knex) {
  return knex.schema.dropTable('tracks');
};
When I run yarn knex migrate:up, I get:
migration file "20190406112728_tracks.js" failed
migration failed with error: alter table "tracks" add constraint "tracks_owner_id_foreign" foreign key ("owner_id") references "users" ("id") on delete CASCADE - relation "users" does not exist
I find the official Knex documentation to be pretty lacking (it's more of a reference than anything else) and can't figure out what I'm missing. Obviously I need some way for users to be created before tracks, but I don't know how.
EDIT:
It seems this is how it's done: https://github.com/tgriesser/knex/issues/938#issuecomment-131491877
But it seems wrong to just put the entire set of tables in a single migration file. I thought the point was to create one migration file per table?
Migration files are sorted by name before execution, and your tracks file name has an earlier date, so it runs before the creation of users.
Just run npx knex migrate:make create_users, and then npx knex migrate:make create_tracks.
This will generate new files with proper timestamps; copy your code into the new files and delete the old ones :]
In a web app, I use Hibernate's @SQLDelete annotation in order to "soft-delete" entities (i.e. set a status column to a value that denotes their "deleted" status instead of actually deleting them from the table).
The entity code looks like this:
@Entity
@SQLDelete(sql="update pizza set status = 2 where id = ?")
public class Pizza { ... }
Now, my problem is that the web application doesn't connect to the DB as the owner of the schema the tables belong to. E.g. the schema (in Oracle) is called pizza, and the DB user the webapp uses to connect is pizza_webapp. This is for security reasons: the pizza_webapp user only has select/update/delete rights, it can't modify the structure of the DB itself. I don't have any choice here; it is a policy that I can't change.
I specify the name of the schema where the tables actually live with the hibernate.default_schema parameter in the Hibernate config:
<property name="hibernate.default_schema">pizza</property>
This works fine for everything that goes through mapped entities; Hibernate knows how to add the schema name in front of the table name in the SQL it generates. But not for raw SQL, and @SQLDelete contains raw SQL, which is executed 'as is' and results in a "table or view not found" error.
So far we worked around the issue by adding synonyms to the pizza_webapp schema, pointing to the pizza schema. It works, but it is not fun to maintain across multiple DBs when entities are added.
So, is it possible to make @SQLDelete take the hibernate.default_schema parameter into account?
(NB: Obviously I don't want to hard-code the schema name in the SQL either...)
Yes, it is possible:
@SQLDelete(sql="update {h-schema}pizza set status = 2 where id = ?")
I could not find any Hibernate solution to this problem. However, I found a work-around based on an Oracle feature. I do this to my session before using it:
//set the default schema at DB session level for raw SQL queries (see @SQLDelete)
HibernateUtil.currentSession().doWork(new Work() {
    @Override
    public void execute(Connection connection) throws SQLException {
        connection.createStatement().execute("ALTER SESSION SET CURRENT_SCHEMA=" + HibernateUtil.getDefaultSchema());
    }
});
It works fine, but unfortunately only on Oracle (which is fine for us, for now at least). Maybe there are ways to achieve the same thing on other RDBMSs as well?
Edit: the getDefaultSchema() method in my HibernateUtil class does this to get the default schema from Hibernate's config:
defaultSchema = config.getProperty("hibernate.default_schema");
where config is my org.hibernate.cfg.Configuration object.
Here's the code I found in the Yii framework manual:
$auth=Yii::app()->authManager;
$auth->createOperation('createPost','create a post');
$auth->createOperation('readPost','read a post');
$auth->createOperation('updatePost','update a post');
$auth->createOperation('deletePost','delete a post');
$bizRule='return Yii::app()->user->id==$params["post"]->authID;';
$task=$auth->createTask('updateOwnPost','update a post by author himself',$bizRule);
$task->addChild('updatePost');
$role=$auth->createRole('reader');
$role->addChild('readPost');
$role=$auth->createRole('author');
$role->addChild('reader');
$role->addChild('createPost');
$role->addChild('updateOwnPost');
and so on.
The question is: where should I place the code for creating roles, tasks, etc.?
You should put this code in protected/controllers/RbacController.php, after modifying protected/config/main.php:
return array(
    'components' => array(
        'db' => array(
            'class' => 'CDbConnection',
            'connectionString' => 'sqlite:path/to/file.db',
        ),
        'authManager' => array(
            'class' => 'CDbAuthManager',
            'connectionID' => 'db',
        ),
    ),
);
This is the official documentation:
http://www.yiiframework.com/doc/guide/1.1/en/topics.auth#using-default-roles
This took me a while to understand, so let me answer your questions as I understand how Yii works.
You will first create the appropriate tables following the SQL code found in framework/web/auth.
You can use phpMyAdmin to populate the database.
You can also create a controller in which you run all of the code above. It only gets run once, because you are just populating a database.
The controller can be called myInitController.php and stored with your other controllers.
The controller can be as simple as
<?php
class myInitController extends Controller
{
    public function actionRun()
    {
        $auth = Yii::app()->authManager;
        $auth->createOperation('createPost', 'create a post');
        echo "this is it";
    }
}
Then you would run it by going to www.yourwebsite.com/myInit/Run
Verify what got written to the database. Don't push this controller to production. You don't want someone else running the command.
So your options are:
hand-enter the data through something like phpMyAdmin
create a custom controller which stores all of the PHP commands and executes them
use Gii to create models and CRUD functions (be careful with composite primary keys)
I hope this helps.
This piece of code will create the items in the database. You have to execute it.
You can create an action in one of your controllers and then run it:
localhost/myAppName/myController/myAction
Or you can create a PHP file as well; just paste your piece of code inside and run it.
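Another option, if you would rather not expose a web action at all, is a small Yii console command (just a sketch; the RbacCommand name and file location are my own choices, and it assumes your console config, protected/config/console.php, also defines the db and authManager components shown above):
<?php
// protected/commands/RbacCommand.php
class RbacCommand extends CConsoleCommand
{
    public function run($args)
    {
        $auth = Yii::app()->authManager;

        // ... paste the createOperation/createTask/createRole code
        // from the question here ...

        echo "RBAC items created\n";
    }
}
You would then run it once from the command line with ./protected/yiic rbac (or protected\yiic.bat rbac on Windows); unlike a controller action, it can't be triggered from the web.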
I have a database with 300 tables. I need to clean the tables, so I have written a script which drops all of the constraints, triggers, primary keys, and foreign keys.
I have managed to create scripts using the generate scripts feature, but they only include primary keys inside the CREATE statements.
I need to get them as ALTER TABLE statements so that I can drop the keys, clear database, insert new data, and restore all the keys, constraints, etc.
PowerShell and SMO are going to be your friends here:
# Scripting options used to generate the DROP statements
$option_drop = new-object Microsoft.SqlServer.Management.Smo.ScriptingOptions;
$option_drop.ScriptDrops = $true;

# Start with empty output files
"" > drop_primary_keys.sql
"" > create_primary_keys.sql

$server = new-object Microsoft.SqlServer.Management.Smo.Server ".";
$db = $server.Databases['AdventureWorks'];

# Script a DROP and a CREATE for every primary key in the database
foreach ($table in $db.Tables) {
    foreach ($index in $table.Indexes) {
        if ($index.IndexKeyType -eq "DriPrimaryKey") {
            $index.Script( $option_drop ) >> drop_primary_keys.sql
            $index.Script() >> create_primary_keys.sql
        }
    }
}
A couple of notes here:
Running this script will nuke any existing files of the name "drop_primary_keys.sql" and "create_primary_keys.sql", so proceed with caution
The script doesn't take into account any foreign keys since you said you already have a way to do that.
You may have to tweak the ScriptingOptions object to fit your needs. Specifically, I'm using the defaults on the create, so you may need to create another ScriptingOptions object and set whichever options you think appropriate.
Other than that, good hunting.
MSDN has an article about disabling/enabling triggers and foreign keys:
http://msdn.microsoft.com/en-us/magazine/cc163442.aspx
I've looked at the questions and indeed the RavenDB docs. There's a little at the RavenDB Index Replication docs, but there doesn't seem to be any guidance on how/when/where to create the IndexReplicationDestination.
Our use case is very simple (it's a spike). We currently create new objects (Cows) and store them in Raven. We have a couple of queries created dynamically using LINQ (e.g. from c in session.Query<Cows> select c).
Now I can't see where I should define the index to replicate. Any ideas? I've got hold of the bundle and added it to the server directory (I'm assuming it should be in RavenDB.1.0.499\server\Plugins where RavenDB.1.0.499\server contains Raven.Server.exe)
Edit: Thanks Ayende... the answer below and the one in the RavenDB groups helped. There was a facepalm moment. Regardless, here's some detail that may help someone else. It really is very easy and indeed 'just works':
a) Ensure that the plugins are being picked up. You can view these in the statistics, available via the /localhost:8080/stats URL (assuming default settings). You should see entries under 'Extensions' referring to the IndexReplication bundle.
If they are not present, ensure the versions of the DLLs (bundle and server) are the same.
b) Ensure the index you want to replicate has been created. Indexes can be created via the Client API or the HTTP API.
Client API:
public class Cows_List : AbstractIndexCreationTask<Cow>
{
    public Cows_List()
    {
        Map = cows => from c in cows select new { c.Status };
        Index(x => x.Status, FieldIndexing.Analyzed);
    }
}
HTTP API (in studio):
//Cows/List
docs.Cows
.Select(q => new {Status = q.Status})
c) Create the replication document. The clue here is DOCUMENT. Like everything stored, it too is a document, so after creating it, it must be stored in the DB:
var replicationDocument = new Raven.Bundles.IndexReplication.Data.IndexReplicationDestination
{
    Id = "Raven/IndexReplication/Cows_List",
    ColumnsMapping = { { "Status", "Status" } },
    ConnectionStringName = "Reports",
    PrimaryKeyColumnName = "Id",
    TableName = "cowSummaries"
};
session.Store(replicationDocument);
session.SaveChanges();
d) Ensure the CLIENT (e.g. the MVC app or Console app) has the corresponding connection string (named "Reports" in the document above) in its config.
e) Create the RDBMS schema. I have a table in 'cowReports':
CREATE TABLE [dbo].[cowSummaries](
    [Id] nvarchar NULL,
    [Status] nchar NULL)
My particular problem was not adding the index document to the store. I know. facepalm. Of course everything is a document. Works like a charm!
You need to define two things:
a) an index that transforms the document into the row shape
b) a document that tells RavenDB the connection string name, the table name, and the columns to map