I was working through Alan Storm's tutorial on Magento's Model and ORM basics and I've run into a bit of a problem. When I get to the portion where I load from the Model for the first time, I get the error "Fatal error: Call to a member function load() on a non-object...". I've already reset everything and tried again from scratch, but I still get the same problem. My code looks like this:
$params = $this->getRequest()->getParams();
$blogpost = Mage::getModel('weblog/blogpost');
var_dump($blogpost);
echo("Loading the blogpost with an ID of ".$params['id']);
$blogpost->load($params['id']);
As you can see, I dumped the contents of $blogpost and it shows that it is just a boolean false. My guess is that there's either a problem with the connection to the database or that, for some reason, the code behind Mage::getModel() didn't get installed correctly.
EDIT - Adding Code
There are so many that I just decided to pastebin them:
app/code/local/Ahathaway/Weblog/controllers/IndexController.php
app/code/local/Ahathaway/Weblog/etc/config.xml
app/code/local/Ahathaway/Weblog/Model/Blogpost.php
app/etc/modules/Ahathaway_Weblog.xml
Your Model/Blogpost.php file should actually be Model/Mysql4/Blogpost.php, and you are missing the real Model/Blogpost.php.
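Something like this is what the real Model/Blogpost.php usually looks like (a minimal sketch, assuming the Ahathaway_Weblog naming from your file paths; adjust if your tutorial uses different aliases):

<?php
// app/code/local/Ahathaway/Weblog/Model/Blogpost.php
// Hypothetical sketch -- the class prefix must match the <models> group in config.xml
class Ahathaway_Weblog_Model_Blogpost extends Mage_Core_Model_Abstract
{
    protected function _construct()
    {
        // 'weblog/blogpost' resolves to the resource model via config.xml
        $this->_init('weblog/blogpost');
    }
}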
My guess is that Mage cannot find your model class. Double-check the module/model name and also verify that the model is in the correct place in the filesystem (it should be in app/code/local/Ahathaway/Weblog/Model/Blogpost.php).
You also need to check that your config.xml correctly defines your model classes. It would be best if you could paste your config.xml and your model class...
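For comparison, a rough sketch of the <models> block a config.xml like this typically needs (the Ahathaway_Weblog prefix and the blog_posts table name are assumptions on my part, not taken from your files):

<global>
    <models>
        <weblog>
            <class>Ahathaway_Weblog_Model</class>
            <resourceModel>weblog_mysql4</resourceModel>
        </weblog>
        <weblog_mysql4>
            <class>Ahathaway_Weblog_Model_Mysql4</class>
            <entities>
                <blogpost>
                    <table>blog_posts</table>
                </blogpost>
            </entities>
        </weblog_mysql4>
    </models>
</global>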
A quick glance reveals you're missing the model resource. Go back to the section of the tutorial around the following code example:
File: app/code/local/Alanstormdotcom/Weblog/Model/Mysql4/Blogpost.php
class Alanstormdotcom_Weblog_Model_Mysql4_Blogpost extends Mage_Core_Model_Mysql4_Abstract
{
    protected function _construct()
    {
        $this->_init('weblog/blogpost', 'blogpost_id');
    }
}
Is there a way to make the logged-in user (on Superset) run the queries on Impala?
I tried to enable the "Impersonate the logged on user" option on Databases, but with no success, because all the queries run on Impala as the superset user.
I'm trying to achieve the same! This will not completely answer the question, since it still does not work, but I want to share my research to maybe help another soul who is trying to use this tool outside very basic use cases.
I went deep into the code and found out that impersonation is not implemented for Impala, so you cannot achieve this from the UI. I found this PR https://github.com/apache/superset/pull/4699 which, for whatever reason, was never merged into the codebase, and tried to copy & paste its code into my Superset version (1.1.0), but it didn't work. Adding some logs I can see that the configuration with the impersonation is updated, but the actual Impala query still runs as the user I used to start the process.
As you can imagine, I am a complete noob at this. However, I found out that the impersonation happens when you create a cursor, and there is a constructor parameter through which you can pass the impersonation configuration.
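To illustrate, here is a minimal standalone sketch with impyla (host, port and user name are placeholders, and it assumes Impala/Hadoop is configured to let the connecting user act as a proxy):

from impala.dbapi import connect

# The cursor() call accepts a 'configuration' dict; 'impala.doas.user'
# asks Impala to execute queries as that user (delegation/impersonation).
conn = connect(host='impala-host.example.com', port=21050)
cursor = conn.cursor(configuration={'impala.doas.user': 'target_user'})
cursor.execute('SELECT 1')
print(cursor.fetchall())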
I managed to correctly (at least to my understanding) implement impersonation for the SQL lab part.
In sql_lab.py you have to add the following lines to the execute_sql_statements method:
with closing(engine.raw_connection()) as conn:
    # closing the connection closes the cursor as well
    cursor = conn.cursor(**database.cursor_kwargs)
where cursor_kwargs is defined in db_engine_specs/impala.py as follows:
@classmethod
def get_configuration_for_impersonation(cls, uri, impersonate_user, username):
    logger.info(
        'Passing Impala execution_options.cursor_configuration for impersonation')
    return {'execution_options': {
        'cursor_configuration': {'impala.doas.user': username}}}

@classmethod
def get_cursor_configuration_for_impersonation(cls, uri, impersonate_user,
                                                username):
    logger.debug('Passing Impala cursor configuration for impersonation')
    return {'configuration': {'impala.doas.user': username}}
Finally, in models/core.py you have to add the following bit to the get_sqla_engine method:
params = extra.get("engine_params", {})  # that was already there, just for you to find the line
self.cursor_kwargs = self.db_engine_spec.get_cursor_configuration_for_impersonation(
    str(url), self.impersonate_user, effective_username)  # this is the line I added
...
params.update(self.get_encrypted_extra())  # already there
# new stuff
configuration = {}
configuration.update(
    self.db_engine_spec.get_configuration_for_impersonation(
        str(url),
        self.impersonate_user,
        effective_username))
if configuration:
    params.update(configuration)
As you can see, I just shamelessly pasted the code from the PR. However, as I already said, this kind of works only for SQL Lab. For the dashboards there is an entirely different way of querying Impala that I have not yet figured out.
This means that queries for the dashboards are handled in a different way, and there isn't something like this:

with closing(engine.raw_connection()) as conn:
    # closing the connection closes the cursor as well
    cursor = conn.cursor(**database.cursor_kwargs)
My gut (and debugging) feeling is that you first need to understand the SQLAlchemy part and extend a new ImpalaEngine class that uses a custom cursor with the impersonation conf. Or something like that; however, it is not as simple (if we want to call this simple) as the sql_lab part. So, the trick is to find out where the query is executed and create a cursor with the impersonation configuration. Easy, isn't it?
I hope this sheds some light for you and the others who have this issue. Let me know if you find another way to solve it, or if this comment was useful.
Update: something really useful
A colleague of mine successfully implemented impersonation with Impala without touching anything Superset-related, instead working directly with the impyla lib. A PR was opened with the code to change. You can apply the patch directly to the impyla source used by Superset. You have to edit both dbapi.py and hiveserver2.py.
As a reminder: we are still testing this and we do not know if it works with different accounts using the same superset instance.
I am trying the fusionauth-node-client and following the wiki https://fusionauth.io/docs/v1/tech/client-libraries/node, but I am getting the following error:
const client = new FusionAuthClient('6b87a398-39f2-4692-927b-13188a81a9a3', 'http://localhost:9011');
^
TypeError: FusionAuthClient is not a constructor
at Object.<anonymous>
I have pasted the exact code mentioned in the doc, but it is still not working. Can anyone help me identify what I am missing here?
I dug around in the library and noticed that we are exporting several objects and our example is no longer correct.
To get the client you need to change your syntax a little bit to get the correct object.
const {FusionAuthClient} = require('fusionauth-node-client');
This translates to: require the library fusionauth-node-client and give me the FusionAuthClient from inside it. There are also a RESTClient and a JWTManager available in the library, but you shouldn't need either of those to code with FusionAuth.
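So a minimal working sketch looks like this (the API key and URL are the placeholders from your snippet, not real credentials):

// Destructure the client class from the module, then construct it as before
const {FusionAuthClient} = require('fusionauth-node-client');
const client = new FusionAuthClient('YOUR_API_KEY', 'http://localhost:9011');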
I will also update our example to correct this discrepancy.
I'm making an open data website powered by BigQuery. How do I make a Bigquery dataset public using command line tool or Python?
Note: I tried to make every dataset in my project public but got an unexplained error. In the project permission settings via the web UI, under "Add members", I put allAuthenticatedUsers and gave it the Data Viewer permission. The error was: "Error: Sorry, there’s a problem. If you entered information, check it and try again. Otherwise, the problem might clear up on its own, so check back later."
I wasn't able to find any command line examples for updating permissions. I also can't find a JSON string to pass to https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/update
To achieve this programmatically, you need to use a dataset patch request and use the specialGroup item with the value allAuthenticatedUsers, like so:
{
  "datasetReference": {
    "projectId": "<removed>",
    "datasetId": "<removed>"
  },
  "access": [
    ... // other access roles
    {
      "specialGroup": "allAuthenticatedUsers",
      "role": "READER"
    }
  ]
}
Note: You should use a read-modify-write cycle as described here & here:
Note about arrays: Patch requests that contain arrays replace the existing array with the one you provide. You cannot modify, add, or delete items in an array in a piecemeal fashion.
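If you'd rather do it in Python than craft the REST call by hand, here is a hedged sketch of the same read-modify-write cycle with the google-cloud-bigquery client (the project and dataset IDs are placeholders):

from google.cloud import bigquery

client = bigquery.Client()

# Read the current dataset metadata (includes the existing access entries)
dataset = client.get_dataset("my_project.my_dataset")

# Modify: append allAuthenticatedUsers as READER to the existing entries
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="specialGroup",
        entity_id="allAuthenticatedUsers",
    )
)
dataset.access_entries = entries

# Write back only the field we changed
client.update_dataset(dataset, ["access_entries"])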
According to Phillip Riand (see: discussion on openNTF) this is not possible... They need to know the design element to find out who signed it. Therefore, it is only available in SSJS.
There are 2 ways that I know of to use the sessionAsSigner object in Java beans:
1. By resolving the sessionAsSigner object:

FacesContext context = FacesContext.getCurrentInstance();
// resolveVariable() returns Object, so cast the result to Session
Session sessionAsSigner = (Session) context.getApplication().getVariableResolver()
        .resolveVariable(context, "sessionAsSigner");

2. By using the getCurrentSessionAsSigner() function from the com.ibm.xsp.extlib.util.ExtLibUtil class in the Extension Library.
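For example, a minimal sketch of option 2 inside a bean (the class and method names are made up for illustration):

import com.ibm.xsp.extlib.util.ExtLibUtil;
import lotus.domino.Session;

public class SignerBean {
    // Resolves the signer session via the Extension Library helper
    public String getSignerName() throws Exception {
        Session sessionAsSigner = ExtLibUtil.getCurrentSessionAsSigner();
        return sessionAsSigner.getEffectiveUserName();
    }
}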
To be able to use it (in Java as well as in SSJS) you'll want to make sure that all design elements were signed by the same user ID. If that's not the case, the sessionAsSigner object will not be available ('undefined').
I found that the solution is right at hand :-)
I changed my XPage (in this example an XAgent) to:
<xp:view xmlns:xp="http://www.ibm.com/xsp/core" rendered="false">
This is an xAgent returning json data...
<xp:this.afterRenderResponse><![CDATA[#{javascript:Controller.verify(sessionAsSigner)}]]></xp:this.afterRenderResponse>
and in the bean I simply used the session in the argument when I needed to open a database/document as signer. Sometimes the solution is so simple :-)
/John
This is quite an old post that I just stumbled upon. I tried some of the solutions mentioned above:
- resolveVariable did not work for me, at least not for sessionAsSigner, as it throws a runtime error (I can resolve plain old session, though...)
- to be honest, I didn't quite understand the Controller.verify(sessionAsSigner) method; is Controller something specific to XAgents? If so, I don't have an XAgent here, so I can't use it
- I didn't feel like importing extra ExtLib classes here...
So I came up with another solution that appears to be very simple:
I created a method in my Java bean that takes a Session object as an argument; since sessionAsSigner belongs to the same class as session, I don't have to import anything new.
The Java code is:
public void testSession(Session s) throws Exception {
    System.out.println(" > test effective user for this session = "
            + s.getEffectiveUserName());
}
This is called from SSJS as either
myBean.testSession(session);
or
myBean.testSession(sessionAsSigner);
Maybe this helps others, too.
I'm having some issues with a Backbone project that I am working on.
I have the following model:
class App.Models.Purchaseorder extends Backbone.Model
  url: ->
    base = 'api/purchaseorders'
    if this.isNew()
      base
    else
      base + '/' + this.id
  urlRoot: 'api/purchaseorders'
When I run the following in the console:
po = new App.Models.Purchaseorders;
po.set({'po_number': '1234', 'locale': 'Home', 'po_date': '3/22/2012'});
it appears to set the attributes correctly. However, if I run
po.save()
I would expect it to do a POST request to the api/purchaseorders URL. When I debug through the save() and sync() functions in the Backbone JS, it looks like it is indeed running a POST, but at the last minute it looks as if it is really doing a GET: http://i.imgur.com/dQK88.png
I am a little confused as to why this would be happening. I am having similar issues when trying to do an update -- which should be doing a PUT. I am assuming something is funky in the model, but I have no clue what it could be.
Any help would be greatly appreciated.
Thanks!
I tested the code you have in the question (had to call new App.Models.Purchaseorder, without the s, though FYI) and it does a POST as expected.
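For reference, a quick sketch of the console test with the corrected class name (values are the ones from your question):

// Note the singular "Purchaseorder" -- the model class, not a collection
var po = new App.Models.Purchaseorder();
po.set({po_number: '1234', locale: 'Home', po_date: '3/22/2012'});
po.save();  // isNew() is true, so Backbone should issue POST api/purchaseorders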
What version of Backbone and Underscore are you using?
Here's a fiddle.