Meteor: Integration with Mongoose? - orm

I noticed you guys are planning on adding more ORM features to your platform, but in the meantime, is there an easy way to extend your Collections with Mongoose Collections?

You should be able to add:
npm install mongoose
to "admin/generate-dev-bundle.sh".
You can then create a new package that requires mongoose; within it, assign the module to Meteor.mongoose and connect it to MONGO_URL (this is Meteor's database). Take a look around the other packages if you need some help.
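For reference, here's a rough, untested sketch of what that package's server-side code could look like (the file name and the Npm.require style are assumptions based on how old-style Meteor packages load npm modules):

// Hypothetical packages/mongoose/mongoose.js (server only):
// connect mongoose to Meteor's own MongoDB and expose it as Meteor.mongoose.
if (Meteor.isServer) {
  var mongoose = Npm.require('mongoose');    // old-style packages load npm modules via Npm.require
  mongoose.connect(process.env.MONGO_URL);   // MONGO_URL points at Meteor's own database
  Meteor.mongoose = mongoose;                // then define models with Meteor.mongoose.model(...) as usual
}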
I did the sample work in this branch:
https://github.com/jonathanKingston/meteor/tree/mongoose
This is 100% untested as I'm on a Windows machine at the moment, but it should expose Meteor.mongoose, which is just normal mongoose (already connected) for standard use as explained here:
https://github.com/LearnBoost/mongoose#readme

An issue with Mongoose is that it won't work on the client, so you'll lose much of the benefit of using Meteor.
Take a look at Collection2; it offers validation and structure (a minimal sketch is below).
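For example, roughly (the collection and field names are placeholders, and the attachSchema API is assumed from the aldeed:collection2 package's docs):

Books = new Mongo.Collection('books');
Books.attachSchema(new SimpleSchema({
  title: { type: String },
  pages: { type: Number, optional: true }
}));
// Inserts and updates against Books are now validated on both client and server.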
Good luck!

Related

How do I parse string FQL to a query using faunadb-js?

Let's say I have a string like const fql = 'CreateCollection({ name: "users" })'. How do I turn it into a faunadb Expr in JS?
You landed on the correct answer yourself. FQL is not a string-based language. This approach avoids problems like SQL injection, but it does mean you need to compose queries somewhere using a driver, the console, or a tool like Fauna Schema Migrate (FSM).
const fql = 'CreateCollection({ name: "users" })'
The example you give leans toward schema/resource management. If that's your actual need here, consider FSM or the Fauna Serverless Framework plugin.
If you're building a front-end app using JavaScript, FSM is probably the right approach as it drops right into your app. It might also give you some more hints at how to transform strings into FQL. You would do the above in a single FQL file, e.g., fauna/resources/collections/users.fql as:
CreateCollection({ name: "users" })
If you're doing infrastructure as code in a separate pipeline or are already building with the Serverless Framework, the plugin might be a better fit.
If you want to see something else, like Pulumi or Terraform provider plugins, please submit a feature request in the forums!
The only way to do this at the moment seems to be evaluating the FQL as JavaScript.
The fauna-shell eval command is implemented using esprima to parse the FQL as ECMAScript, then running it through node's vm api with escodegen.
It's likely easier to just rewrite FQL files as JS files if you have the option!
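A rough sketch of that evaluation idea (the helper name is hypothetical; it just exposes the driver's query builders inside a vm sandbox so the string evaluates to an Expr):

const vm = require('vm');
const { query: q } = require('faunadb');

// Hypothetical helper: evaluate an FQL string with every query builder
// (CreateCollection, Collection, Lambda, ...) in scope, yielding an Expr.
function fqlStringToExpr(fqlString) {
  const sandbox = vm.createContext({ ...q });
  return vm.runInContext(fqlString, sandbox);
}

const expr = fqlStringToExpr('CreateCollection({ name: "users" })');
// expr can now be passed to client.query(expr) -- but note this is eval in
// disguise, so only run it on FQL strings you trust.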

Bitbucket - Add default task when creating a pull request

I am looking into improving the workflow my colleague and I are using for Bitbucket. Something that is often forgotten is the documentation for the feature we are working on, so I thought a good way to not forget would be to add a task as soon as a pull request is created for a particular branch.
The first thing a developer should do after creating the pull request would be:
- Add a comment, something like WIP (Work in Progress)
- Create a task underneath, something like 'Add documentation'
This way, we won't be able to 'Merge' the branch into 'Develop' unless all tasks are completed (this is how it is currently configured).
Rather than having the developer do this manually, it would be good if the system could do it as soon as the pull request is created.
Is that possible?
I searched on the Internet; to be honest, I couldn't tell whether that functionality comes with the Premium package or whether it is an add-on... who knows.
Thanks :)
Atlassian recently added a 'Default Pull Request Tasks' feature to Bitbucket Cloud.
The same functionality was previously available as a Bitbucket app, but it was removed in May 2020. It's now a native feature.
Product announcement: https://bitbucket.org/blog/bitbucket-cloud-product-updates-august-2022
Feature details: https://bitbucket.org/blog/default-pull-request-tasks
You can try this. It is free for 30 days.
https://marketplace.atlassian.com/apps/1225598/default-tasks-for-pull-requests?tab=pricing&hosting=datacenter
I did not find any free solutions.

How to start ArangoDB-GraphQL-Express?

I looked at the ArangoDB support pages and did a Google search, but it did not help me much... I am new to this topic (but a Polish proverb says you should not be ashamed to ask questions).
My situation is as follows: I have quite an extensive database, which I created through the ArangoDB web UI over HTTP (by importing hand-crafted JSONs as collections of vertices and edges). I would like to connect to this database and display the result dynamically depending on the query, but I do not know how to wire it up. There is a tutorial for Node on the ArangoDB page, but it doesn't explain where and what to create; it only describes the individual commands and what they do...
I am looking for examples, or a step-by-step guide/tutorial.
I am asking for your help/support in finding my way around this.
Well, there are two options I would use to connect ArangoDB to GraphQL (a minimal Express-based sketch follows after the links below):
1. Use the Foxx microservices that live within ArangoDB to create a REST API, then wrap that REST API in GraphQL. Here is the tutorial for creating Foxx microservices:
https://docs.arangodb.com/3.3/Manual/Foxx/GettingStarted.html
And here is the tutorial for wrapping the REST API in GraphQL:
https://www.prisma.io/blog/how-to-wrap-a-rest-api-with-graphql-8bf3fb17547d/
2. Have the GraphQL server be part of the Foxx microservices instead of the REST API, as described here:
https://docs.arangodb.com/3.3/Manual/Foxx/GraphQL.html
And here
https://mikewilliamson.wordpress.com/2017/03/24/arangodb-and-graphql/
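For orientation, here's a minimal, hypothetical sketch of the Express side: a GraphQL endpoint whose resolver queries ArangoDB through the arangojs driver (the collection name, field names, and port are placeholders; it assumes the express, express-graphql, graphql and arangojs npm packages):

const express = require('express');
const { graphqlHTTP } = require('express-graphql'); // older versions export the middleware directly
const { buildSchema } = require('graphql');
const { Database, aql } = require('arangojs');

// Point this at your ArangoDB instance and database.
const db = new Database({ url: 'http://localhost:8529' });

const schema = buildSchema(`
  type Person { _key: String, name: String }
  type Query { persons: [Person] }
`);

const root = {
  // Resolver runs an AQL query against a hypothetical "persons" collection.
  persons: async () => {
    const cursor = await db.query(aql`FOR p IN persons RETURN p`);
    return cursor.all();
  }
};

const app = express();
app.use('/graphql', graphqlHTTP({ schema, rootValue: root, graphiql: true }));
app.listen(4000, () => console.log('GraphQL ready at http://localhost:4000/graphql'));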
Hope this helps!

Mean.io - Best practice to extend User model

Mean.io comes with a built in user model within the user package. What is the best practice for extending that user model if I want to attach additional data to it?
My experience with Django had me creating a "profile" that had a foreign key pointing towards the user object it belonged to. I like this approach because I don't touch the user package that way. But is this a best practice? If this is, how can I ensure the creation of a profile doc at the creation of a user doc? If not, what is?
I'm not sure qm69's solution would be the best for future compatibility with Mean. The Mean.io documentation (http://learn.mean.io/) states that the developer shouldn't alter any core packages, including the user package.
The Mean.io pattern is to implement any and all extensions as a custom package and to override default views using the $viewPathProvider.override method.
Secondly, the User package is fundamentally a security/authentication feature, not a profile implementation, and it regularly receives updates. Altering it will most likely break future fixes and risks introducing security bugs.
My advice would be to implement a profile using Mean's package system and add a service dependency on the User service. I've done this in previous projects and it works well.
To implement a profile package, follow the below steps:
1) Create a custom package called profile using mean package profile.
2) Implement the model/view/controller for all profile requirements in the custom package (a minimal model sketch follows after this list). DON'T ALTER ANYTHING IN THE USER PACKAGE.
3) Use dependency injection to include the Global service. This will give you access to Global.user data, so you most likely don't even need to use the User services.
4) Override any User views using the $viewPathProvider.override method mentioned in the docs above.
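As a minimal sketch of step 2, the profile package's Mongoose model can simply reference the User document by ObjectId (the file path and field names are hypothetical):

// Hypothetical packages/custom/profile/server/models/profile.js
var mongoose = require('mongoose'),
    Schema = mongoose.Schema;

var ProfileSchema = new Schema({
  // Points at the untouched User document, much like a Django foreign key.
  user: { type: Schema.Types.ObjectId, ref: 'User', unique: true },
  bio: String,
  website: String
});

mongoose.model('Profile', ProfileSchema);

To cover the "create a profile when the user is created" part without touching the User package, the profile package's controller can simply create the Profile document lazily, e.g. with an upsert the first time the logged-in user's profile is requested.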
Hope this helps ;)

OAuth2 w/ Google Client API 1.8.1

I have been using the Google Client API in a .NET web application, but need to update to the latest version (both to use the most recent code and to lose the need for the DotNetOpenAuth.dll). The latest version (1.8.1) has a totally redesigned OAuth interface (using google.apis.auth) and I can't seem to even get started with it.
Previously I had written code that handled generating an authorization URL (as needed) and creating IAuthenticator and IAuthorizationState objects, storing the refresh token in a SQL database as needed. I was also able to retrieve "UserInfo" about the user as needed (once authenticated).
Now - I'm unclear on how to handle the generation of the AuthURL (do I have to do it 100% manually?) and how/what I need to pass to the BaseClientService.Initializer when working w/ client API (such as Google Drive.)
Also - previously I wrote methods to "store" and "retrieve" credentials from the database - now it seems I would need to write a class based on IDataStore? But I'm not sure if this is even correct (let alone find a decent sample/doc anywhere.)
Finally - it doesn't seem like google.apis.auth handles anything w/ regards to UserInfo - I have to grab google.apis.oauth2 - but that .dll has even LESS documentation/sample code out there.
Any advice on where to start? The google.apis sample code seems decent for performing basic api tasks but all the Oauth2 information is very basic, uses file data storage and seems glossed over.
Thanks!
First of all, take a look at https://developers.google.com/api-client-library/dotnet/guide/aaa_oauth. All the documentation that you might need is there, and if something is missing let us know!
You are right: FileDataStore is an existing implementation of IDataStore, and we are planning to create an EFDataStore as well for the next release.