I have an existing project. I configured it on my personal PC, but when I try to configure it on another PC I get the following error.
( ! ) Fatal error: Class 'Controller' not found in C:\wamp\www\gal\protected\controllers\SiteController.php on line 3
Call Stack
# Time Memory Function Location
1 0.0000 131192 {main}( ) ...\index.php:0
2 0.0110 1167128 CApplication->run( ) ...\index.php:15
3 0.0110 1167752 CWebApplication->processRequest( ) ...\CApplication.php:185
4 0.0120 1295328 CWebApplication->runController( ) ...\CWebApplication.php:141
5 0.0120 1295496 CWebApplication->createController( ) ...\CWebApplication.php:276
6 0.0120 1334120 require( 'C:\wamp\www\gal\protected\controllers\LoginController.php' ) ...\CWebApplication.php:354
7 0.0120 1334280 spl_autoload_call ( ) ...\CWebApplication.php:8
8 0.0120 1334304 YiiBase::autoload( ) ...\CWebApplication.php:8
9 0.0120 1335464 CApplication->handleError( ) ...\CWebApplication.php:442
10 0.0160 1800752 CErrorHandler->handle( ) ...\CApplication.php:834
11 0.0160 1792904 CErrorHandler->handleError( ) ...\CErrorHandler.php:133
12 0.0160 1801992 CErrorHandler->renderError( ) ...\CErrorHandler.php:296
13 0.0160 1802024 CWebApplication->runController( ) ...\CErrorHandler.php:368
14 0.0160 1802024 CWebApplication->createController( ) ...\CWebApplication.php:276
15 0.0160 1819904 require( 'C:\wamp\www\gal\protected\controllers\SiteController.php' ) ...\CWebApplication.php:354
My folder structure is
myproject
protected
command
components
config
controllers
data
extensions
messages
migrations
models
modules
test
views
Main.php has
'import'=>array(
'application.models.*',
'application.components.*'
),
Regards,
Ashok
Check "Controller" for Strict Errors.
For example in my case help corrected from
public function beforeRender()
to
public function beforeRender($view)
or turn off error_reporting for STRICT and check if it helps, then you know for sure that are similar problem.
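For illustration, a minimal sketch of the corrected class, assuming the standard Yii 1.1 skeleton where Controller lives in protected/components and extends CController:

<?php
// protected/components/Controller.php -- minimal sketch (assumed skeleton layout).
// The override's signature must match CController::beforeRender($view); a
// mismatched signature raises an E_STRICT notice, which Yii's error handling
// can surface as the misleading "Class 'Controller' not found" fatal above.
class Controller extends CController
{
    public function beforeRender($view) // note the $view parameter
    {
        return parent::beforeRender($view);
    }
}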
I have "version" and "Index" variables
"Version" contains values like: 150,160,170...
"Index" contains values like: 1,2,3,4,5
Basically I want to know if there's an implementation for this kind of condition:
max(Index) by Version
Meaning for the below 6 events:
150 , 5
140 , 1
140 , 2
130 , 1
130 , 2
130 , 3
I'll get only these 3:
150 , 5
140 , 2
130 , 3
because the result keeps only the highest Index for every Version.
Thanks!
I couldn't figure out how to implement this from the Splunk documentation:
Aggregation
max([by=<grp>])
When you call max(by=<grp>), it returns one maximum for each value of the property or properties specified by <grp>. For example, if the input stream contains 5 different values for the property named datacenter, max(by='datacenter') outputs 5 maximums.
You can use the stats command to find the maximum Index value for each Version.
| stats max(Index) by Version
Basically I want to know if there's an implementation for this kind of condition:
max(Index) by Version
What you're asking for seems to be precisely how you would call max with stats:
index=ndx sourcetype=srctp Version=* datacenter=*
| stats max(Version) by datacenter
Does that not do what you're looking for?
Thanks, guys!
In the end the trick was "eventstats", because I wanted the highest Index for each event.
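A minimal SPL sketch of that eventstats approach, reusing the index/sourcetype names from the answer above (adjust to your data):

index=ndx sourcetype=srctp Version=* Index=*
| eventstats max(Index) as max_index by Version

Each event keeps its own fields plus max_index; appending | where Index=max_index would reduce the output to one row per Version, as in the original question.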
(I could not think of a better title for this question. Suggestions welcome.)
(In case versions matter, I'm using SQLAlchemy 1.4.4 and Postgresql 13.1.)
I have a table ('test') of multiple instances of boolean values for multiple persons, representing test results (pass or fail), and I want to create a query returning a result set representing pass/fail ratios for each of them.
I.e., for this table:
id | person | passed
----+--------+--------
1 | p1 | t
2 | p1 | f
3 | p1 | f
4 | p2 | t
5 | p2 | t
6 | p2 | t
7 | p2 | t
8 | p2 | t
9 | p2 | f
10 | p2 | f
11 | p2 | f
the query should return:
person | pass_fail_ratio
-------+-------------------
p1 | 0.5
p2 | 1.6666666666666667
Here is the solution I have been able to come up with so far. (I'm appending a complete MWE to the end.)
results_count = (
sa.select(
test.person,
test.passed,
sa.func.count(test.passed).label('count')
).group_by(test.person).group_by(test.passed)
).subquery()
pass_count = (
sa.select(results_count.c.person, results_count.c.count)
.filter(results_count.c.passed == True) # noqa
).subquery()
fail_count = (
sa.select(results_count.c.person, results_count.c.count)
.filter(results_count.c.passed == False) # noqa
).subquery()
pass_fail_ratio = (
sa.select(
pass_count.c.person,
(
sa.cast(pass_count.c.count, sa.Float)
/ sa.cast(fail_count.c.count, sa.Float)
).label('success_failure_ratio')
)
).filter(fail_count.c.person == pass_count.c.person)
To me, this looks overly complicated for what would seem to be a conceptually rather simple thing. Is there a better solution?
MWE:
# To change database name, modify 'dbname'.
# Expected output:
# ('p1', 0.5)
# ('p2', 1.6666666666666667)
# Lots of constraints and checks omitted for brevity.
# To view generated SQL, uncomment the line containing "echo" below.
import sqlalchemy as sa
import sqlalchemy.orm as orm
import sqlalchemy.types as types
dbname = 'test'
base = orm.declarative_base()
class test(base):
__tablename__ = 'test'
id = sa.Column(sa.Integer, primary_key=True)
person = sa.Column(sa.String)
passed = sa.Column(types.Boolean)
pass
engine = sa.create_engine(
f"postgresql://localhost:5432/{dbname}", future=True
)
base.metadata.drop_all(engine)
base.metadata.create_all(engine)
session = orm.Session(engine)
# Add some data.
session.add(test(person='p1', passed=True))
session.add(test(person='p1', passed=False))
session.add(test(person='p1', passed=False))
session.add(test(person='p2', passed=True))
session.add(test(person='p2', passed=True))
session.add(test(person='p2', passed=True))
session.add(test(person='p2', passed=True))
session.add(test(person='p2', passed=True))
session.add(test(person='p2', passed=False))
session.add(test(person='p2', passed=False))
session.add(test(person='p2', passed=False))
session.commit()
results_count = (
sa.select(
test.person,
test.passed,
sa.func.count(test.passed).label('count')
).group_by(test.person).group_by(test.passed)
).subquery()
pass_count = (
sa.select(results_count.c.person, results_count.c.count)
.filter(results_count.c.passed == True) # noqa
).subquery()
fail_count = (
sa.select(results_count.c.person, results_count.c.count)
.filter(results_count.c.passed == False) # noqa
).subquery()
pass_fail_ratio = (
sa.select(
pass_count.c.person,
(
sa.cast(pass_count.c.count, sa.Float)
/ sa.cast(fail_count.c.count, sa.Float)
).label('success_failure_ratio')
)
).filter(fail_count.c.person == pass_count.c.person)
# engine.echo = True
with orm.Session(engine) as session:
res = session.execute(pass_fail_ratio)
for row in res:
print(row)
pass
pass
pass
That is soooo complicated. I wouldn't use subqueries. One method is:
select person,
count(*) filter (where passed) * 1.0 / count(*) filter (where not passed)
from test t
group by person;
You might find it more convenient to express this "in the old-fashioned way" without filter:
select person,
sum( passed::int ) * 1.0 / sum( (not passed)::int )
from test t
group by person;
Note that the pass ratio is more commonly used than the ratio of passes to fails. That is simply:
select person,
avg( passed::int ) as pass_ratio
from test t
group by person;
Got Gordon Linoff's answer working in SQLAlchemy. Here is my final solution:
import sqlalchemy as sa
pass_fail_ratio_query = sa.select(
test.person,
(
sa.cast(
sa.funcfilter(sa.func.count(), test.passed == True), # noqa
sa.Float
)
/ sa.cast(
sa.funcfilter(sa.func.count(), test.passed == False), # noqa
sa.Float
)
)
).group_by(test.person)
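For completeness, the simpler pass-ratio variant from the answer above translates even more directly; a sketch, assuming the same test model and imports as the MWE:

pass_ratio_query = (
    sa.select(
        test.person,
        # avg(passed::int), i.e. the fraction of passed tests per person
        sa.func.avg(sa.cast(test.passed, sa.Integer)).label('pass_ratio')
    ).group_by(test.person)
)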
I have loaded a SQL table into a tbl using dplyr (rr is my DB connection):
s_log=rr%>%tbl("s_log")
then extracted three columns and put them in a new tbl:
id_date_amount=s_log%>%select(id,created_at,amount)
When I run head(id_date_amount) it works properly:
id created_at amount
1 34101 2016-07-20 10:41:23 19750
2 11426 2016-07-20 10:38:15 19750
3 26694 2016-07-20 10:38:18 49750
4 25656 2016-07-20 10:42:05 49750
5 23987 2016-07-20 10:40:31 19750
6 24564 2016-07-20 10:38:35 19750
Now I need to filter id_date_amount so that it only contains the past 21 days:
filtered_ADP=subset(id_date_payment,as.Date('2016-08-22')- as.Date(created_at) > 0 & as.Date('2016-08-22')- as.Date(created_at)<=21)
I get the following error:
Error in as.Date(created_at) : object 'created_at' not found
I think that's because I don't have id_date_payment locally, but how can I shape that subset so it gets sent to the database and I get the results back?
I tried using dplyr::filter:
id_date_amount=id_date_amount %>% filter( as.Date('2016-08-22') - as.Date(created_at) > 0 & as.Date('2016-08-22')- as.Date(created_at)<=21 )
but I get this error:
Error in postgresqlExecStatement(conn, statement, ...) :
RS-DBI driver: (could not Retrieve the result : ERROR: syntax error at or near "AS"
LINE 3: WHERE "status" = 'success' AND AS.DATE('2016-08-22') - AS.DA...
^
)
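One way around the translation problem (a sketch, assuming the table is small enough to pull client-side) is to collect() the selected columns into a local data frame first, so that as.Date() runs in R instead of being translated to SQL:

# bring the selected columns into R, then filter locally
id_date_amount_local = id_date_amount %>% collect()
filtered_ADP = id_date_amount_local %>%
  filter(as.Date('2016-08-22') - as.Date(created_at) > 0,
         as.Date('2016-08-22') - as.Date(created_at) <= 21)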
Is there any way I can store the result of the last iterated row and use it for the next row's iteration?
For example, I have a table, say Time_Table:
__ Key type timeStamp
1 ) 1 B 2015-06-28 09:00:00
2 ) 1 B 2015-06-28 10:00:00
3 ) 1 C 2015-06-28 11:00:00
4 ) 1 A 2015-06-28 12:00:00
5 ) 1 B 2015-06-28 13:00:00
Now suppose I have an exceptionTime of 90 minutes which is constant.
If I start checking my Time_Table then:
for the first row, as there is no row before 09:00:00, it will directly put this record into my target table. Now my reference point is at 9:00:00.
For the second row, at 10:00:00, the last reference point was 09:00:00 and TIMESTAMPDIFF(s,09:00:00,10:00:00) is 60, which is less than the required 90, so I do not add this row to my target table.
For the third row, the last recorded exception was at 09:00:00 and TIMESTAMPDIFF(s,09:00:00,11:00:00) is 120, which is greater than the required 90, so I choose this record and set the reference point to 11:00:00.
For the fourth row, TIMESTAMPDIFF(s,11:00:00,12:00:00) is 60, which is again less than 90, so it will not be saved.
For the fifth row, the difference from 11:00:00 is 120, which is greater than 90, so this one is saved again.
Target table
__ Key type timeStamp
1 ) 1 B 2015-06-28 09:00:00
2 ) 1 C 2015-06-28 11:00:00
3 ) 1 B 2015-06-28 13:00:00
Is there any way that I can solve this problem purely in SQL?
My approach:
SELECT * FROM Time_Table A WHERE NOT EXISTS(
SELECT 1 FROM Time_Table B
WHERE A.timeStamp > B.timeStamp
AND abs(TIMESTAMPDIFF(s,B.timeStamp,A.timeStamp)) > 90
)
But this does not actually work.
This is not possible using just pure SQL in Vertica. To do this in pure SQL you would need a recursive query, which Vertica does not support; in other database products you could do it with a recursive WITH clause. In Vertica you are going to have to do it in the application logic. This is based on the following statement from the Vertica 7.1.x documentation: "Each WITH clause within a query block must have a unique name. Attempting to use same-name aliases for WITH clause query names within the same query block causes an error. WITH clauses do not support INSERT, DELETE, and UPDATE statements, and you cannot use them recursively."
Definitely yes (though not in pure SQL): either use LAG (available since 7.1.x), depending on which version of Vertica you are on, or create a custom UDx (User-Defined Extension).
Below is a UDx in Java that accesses the previous row and acts like LAG with a one-row step (hashtag #performance).
(GitHub is full of UDx examples.)
// Assumed package and imports (not shown in the original post), matching the jar
// layout and the CREATE FUNCTION statement below; the SDK classes ship in
// /opt/vertica/bin/VerticaSDK.jar under com.vertica.sdk.
package com.vertica.JavaLibs;

import java.util.ArrayList;

import com.vertica.sdk.*;

public class UdxTestFactory extends AnalyticFunctionFactory {
@Override
public AnalyticFunction createAnalyticFunction(ServerInterface srvInterface) {
return new Test();
}
@Override
public void getPrototype(ServerInterface srvInterface, ColumnTypes argTypes,
ColumnTypes returnType) {
argTypes.addInt();
argTypes.addInt();
returnType.addInt();
}
@Override
public void getReturnType(ServerInterface srvInterface, SizedColumnTypes argTypes,
SizedColumnTypes returnType) throws UdfException {
returnType.addInt();
}
private class Test extends AnalyticFunction {
@Override
public void processPartition(ServerInterface srvInterface, AnalyticPartitionReader inputReader, AnalyticPartitionWriter outputWriter)
throws UdfException, DestroyInvocation {
SizedColumnTypes inTypes = inputReader.getTypeMetaData();
ArrayList<Integer> argCols = new ArrayList<Integer>();
inTypes.getArgumentColumns(argCols);
outputWriter.setLongNull(0);
while (outputWriter.next()) {
long v1 = inputReader.getLong(argCols.get(0)); // previous row
inputReader.next();
long v2 = inputReader.getLong(argCols.get(0)); // current row
outputWriter.setLong(0, v2 - v1);
}
}
}
}
compile & combine compiled classes into single jar, named it TestLib.jar for simplicity
$ javac -classpath /opt/vertica/bin/VerticaSDK.jar /opt/vertica/sdk/BuildInfo.java UdxTestFactory.java -d .
$ jar -cvf TestLib.jar com/vertica/sdk/BuildInfo.class com/vertica/JavaLibs/*.class
Load the library and function:
CREATE OR REPLACE LIBRARY TestFunctions AS '/home/dbadmin/TestLib.jar' LANGUAGE 'JAVA';
CREATE OR REPLACE ANALYTIC FUNCTION lag1 AS LANGUAGE 'java' NAME 'com.vertica.JavaLibs.UdxTestFactory' LIBRARY TestFunctions;
And then use it:
SELECT
lag1(col1, null) OVER (ORDER BY col2) AS col1_minus_col2
FROM ...
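For the LAG route mentioned at the start of this answer, a minimal sketch against the question's Time_Table (it exposes each row's gap to the previous row; it does not by itself carry the "last saved" reference point forward, which is exactly why the UDx/application-logic route above exists):

SELECT Key, type, timeStamp,
       timeStamp - LAG(timeStamp) OVER (PARTITION BY Key ORDER BY timeStamp) AS gap_to_previous
FROM Time_Table;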
The admin panel works fine, but on the front end I get the following error.
SQLSTATE[23000]: Integrity constraint violation: 1052 Column 'position' in order clause is ambiguous
Any idea what this could be?
Here is the solution I came up with; many thanks to Vern Burton.
Located table eav_attribute in phpMyAdmin, which was related to catalog_eav_attribute.
Located column position in table eav_attribute and dropped it.
Cleared all cache and reindexed all data, visited the front page and got a new error: SQLSTATE[42S22]: Column not found: 1054 Unknown column 'main_table.include_in_menu' in 'where clause'
Located and opened the file app/code/core/Mage/Catalog/Model/Resource/Category/Flat.php
Commented out the following line (line 267 in my case):
->where('main_table.is_active = ?', '1')
// ->where('main_table.include_in_menu = ?', '1')
->order('main_table.position');
I'm not really sure how elegant this method is for fixing the issue, but it certainly works for me for now. If anyone has a better way of fixing this, I'd appreciate it if you posted your solution.
Hope this helps someone out. Cheers.
Next exception 'Zend_Db_Statement_Exception' with message 'SQLSTATE[23000]: Integrity constraint violation: 1052 Column 'position' in order clause is ambiguous' in /chroot/home/user/my_domain.com/magento_root/lib/Zend/Db/Statement/Pdo.php:234
Stack trace:
0 /chroot/home/user/my_domain.com/magento_root/lib/Varien/Db/Statement/Pdo/Mysql.php(110): Zend_Db_Statement_Pdo->_execute(Array)
1 /chroot/home/user/my_domain.com/magento_root/lib/Zend/Db/Statement.php(300): Varien_Db_Statement_Pdo_Mysql->_execute(Array)
2 /chroot/home/user/my_domain.com/magento_root/lib/Zend/Db/Adapter/Abstract.php(479): Zend_Db_Statement->execute(Array)
3 /chroot/home/user/my_domain.com/magento_root/lib/Zend/Db/Adapter/Pdo/Abstract.php(238): Zend_Db_Adapter_Abstract->query('SELECT main_ta...', Array)
<br>4 /chroot/home/user/my_domain.com/magento_root/lib/Varien/Db/Adapter/Pdo/Mysql.php(419): Zend_Db_Adapter_Pdo_Abstract->query('SELECTmain_ta...', Array)
5 /chroot/home/user/my_domain.com/magento_root/lib/Zend/Db/Adapter/Abstract.php(734): Varien_Db_Adapter_Pdo_Mysql->query('SELECT main_ta...', Array)
<br>6 /chroot/home/user/my_domain.com/magento_root/lib/Varien/Data/Collection/Db.php(734): Zend_Db_Adapter_Abstract->fetchAll('SELECTmain_ta...', Array)
7 /chroot/home/user/my_domain.com/magento_root/app/code/core/Mage/Core/Model/Resource/Db/Collection/Abstract.php(521): Varien_Data_Collection_Db->_fetchAll('SELECT `main_ta...', Array)
8 /chroot/home/user/my_domain.com/magento_root/lib/Varien/Data/Collection/Db.php(566): Mage_Core_Model_Resource_Db_Collection_Abstract->getData()
9 /chroot/home/user/my_domain.com/magento_root/app/code/core/Mage/Catalog/Model/Layer.php(232): Varien_Data_Collection_Db->load()
10 /chroot/home/user/my_domain.com/magento_root/app/code/core/Mage/Catalog/Block/Layer/View.php(163): Mage_Catalog_Model_Layer->getFilterableAttributes()
11 /chroot/home/user/my_domain.com/magento_root/app/code/core/Mage/Catalog/Block/Layer/View.php(122): Mage_Catalog_Block_Layer_View->_getFilterableAttributes()
12 /chroot/home/user/my_domain.com/magento_root/app/code/core/Mage/Core/Block/Abstract.php(238): Mage_Catalog_Block_Layer_View->_prepareLayout()
13 /chroot/home/user/my_domain.com/magento_root/app/code/local/Mage/Core/Model/Layout.php(430): Mage_Core_Block_Abstract->setLayout(Object(Mage_Core_Model_Layout))
14 /chroot/home/user/my_domain.com/magento_root/app/code/local/Mage/Core/Model/Layout.php(446): Mage_Core_Model_Layout->createBlock('catalog/layer_v...', 'catalog.leftnav')
15 /chroot/home/user/my_domain.com/magento_root/app/code/local/Mage/Core/Model/Layout.php(238): Mage_Core_Model_Layout->addBlock('catalog/layer_v...', 'catalog.leftnav')
16 /chroot/home/user/my_domain.com/magento_root/app/code/local/Mage/Core/Model/Layout.php(204): Mage_Core_Model_Layout->_generateBlock(Object(Mage_Core_Model_Layout_Element), Object(Mage_Core_Model_Layout_Element))
17 /chroot/home/user/my_domain.com/magento_root/app/code/local/Mage/Core/Model/Layout.php(209): Mage_Core_Model_Layout->generateBlocks(Object(Mage_Core_Model_Layout_Element))
18 /chroot/home/user/my_domain.com/magento_root/app/code/core/Mage/Core/Controller/Varien/Action.php(344): Mage_Core_Model_Layout->generateBlocks()
19 /chroot/home/user/my_domain.com/magento_root/app/code/core/Mage/Catalog/Helper/Product/View.php(73): Mage_Core_Controller_Varien_Action->generateLayoutBlocks()
20 /chroot/home/user/my_domain.com/magento_root/app/code/core/Mage/Catalog/Helper/Product/View.php(144): Mage_Catalog_Helper_Product_View->initProductLayout(Object(Mage_Catalog_Model_Product), Object(Mage_Catalog_ProductController))
21 /chroot/home/user/my_domain.com/magento_root/app/code/core/Mage/Catalog/controllers/ProductController.php(132): Mage_Catalog_Helper_Product_View->prepareAndRender(28491, Object(Mage_Catalog_ProductController), Object(Varien_Object))
22 /chroot/home/user/my_domain.com/magento_root/app/code/core/Mage/Core/Controller/Varien/Action.php(419): Mage_Catalog_ProductController->viewAction()
23 /chroot/home/user/my_domain.com/magento_root/app/code/core/Mage/Core/Controller/Varien/Router/Standard.php(250): Mage_Core_Controller_Varien_Action->dispatch('view')
24 /chroot/home/user/my_domain.com/magento_root/app/code/core/Mage/Core/Controller/Varien/Front.php(176): Mage_Core_Controller_Varien_Router_Standard->match(Object(Mage_Core_Controller_Request_Http))
25 /chroot/home/user/my_domain.com/magento_root/app/code/core/Mage/Core/Model/App.php(354): Mage_Core_Controller_Varien_Front->dispatch()
26 /chroot/home/user/my_domain.com/magento_root/app/Mage.php(683): Mage_Core_Model_App->run(Array)
27 /chroot/home/user/my_domain.com/magento_root/index.php(87): Mage::run('', 'store')
28 {main}
Have you tried rebuilding your indexes? I had a similar problem, and that fixed it.
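In Magento 1 that can be done from the shell directory, for example (a sketch, assuming a standard install; adjust the path to yours):

cd magento_root/shell
php -f indexer.php -- --status       # show each indexer and its state
php -f indexer.php -- --reindexall   # rebuild all indexes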