| qid (int64) | question (string) | date (string) | metadata (list) | response_j (string) | response_k (string) |
|---|---|---|---|---|---|
42,329,346
|
I am trying to learn CloudFormation. I'm stuck with a scenario where I need a second EC2 instance started after the first EC2 instance is provisioned and good to go.
This is what I have in the UserData of instance one:
```
"#!/bin/bash\n",
"#############################################################################################\n",
"sudo add-apt-repository ppa:fkrull/deadsnakes\n",
"sudo apt-get update\n",
"curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -\n",
"sudo apt-get install build-essential libssl-dev python2.7 python-setuptools -y\n",
"#############################################################################################\n",
"# Install Easy Install\n",
"#############################################################################################\n",
"easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n",
"#############################################################################################\n",
"#############################################################################################\n",
"# GIT LFS Repo\n",
"#############################################################################################\n",
"curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash\n",
"#############################################################################################\n",
"cfn-init",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --configsets InstallAndRun ",
" --region ",
{
"Ref": "AWS::Region"
},
"\n",
"#############################################################################################\n",
"# Signal the status from cfn-init\n",
"cfn-signal -e 0 ",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --region ",
{
"Ref": "AWS::Region"
},
" ",
{
"Ref": "WaitHandleUIConfig"
},
"\n"
```
I have a WaitCondition, which I think is what's used to do this:
```
"WaitHandleUIConfig" : {
"Type" : "AWS::CloudFormation::WaitConditionHandle",
"Properties" : {}
},
"WaitConditionUIConfig" : {
"Type" : "AWS::CloudFormation::WaitCondition",
"DependsOn" : "UI",
"Properties" : {
"Handle" : { "Ref" : "WaitHandleUIConfig" },
"Timeout" : "500"
}
}
```
In the second instance I use DependsOn to wait for the first instance:
```
"Service": {
"Type": "AWS::EC2::Instance",
"Properties": {
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "1ba546d0-2bad-4b68-af47-6e35159290ca"
},
},
"DependsOn":"WaitConditionUIConfig"
}
```
This isn't working. I keep getting the error:
**WaitCondition timed out. Received 0 conditions when expecting 1**
Any help would be appreciated.
Thanks
|
2017/02/19
|
[
"https://Stackoverflow.com/questions/42329346",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/907937/"
] |
Glad you figured it out! Posting my last comment as an answer below.
Thanks for the live example. So, I click on the submit button and I can see the browser sending a POST request. The server responds successfully, but with a redirect.
`POST https://win-marketing.sciencesupercrew.com/en/users/login` -> `301 Moved Permanently with Location: https://win-marketing.sciencesupercrew.com/en/users/login/`. The form data are sent correctly as `user[email]` and `user[password]`.
If you say that this request does not reach Rails, then you need to check your nginx configuration.
**UPDATED SOLUTION**
The issue was my nginx configuration. The one that works is the following:
```
server {
listen 80;
server_name win-marketing.sciencesupercrew.com www.win-marketing.sciencesupercrew.com;
return 301 https://www.win-marketing.sciencesupercrew.com$request_uri;
}
server {
listen 443 ssl;
server_name win-marketing.sciencesupercrew.com;
passenger_enabled on;
rails_env production;
root /home/demo/windhagermediahub/public;
ssl on;
ssl_certificate /home/demo/ssl/ssl-bundle.crt;
ssl_certificate_key /home/demo/ssl/ssc.key;
ssl_session_timeout 5m;
ssl_protocols SSLv2 SSLv3 TLSv1;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
```
After adding these lines it started working:
```
ssl_session_timeout 5m;
ssl_protocols SSLv2 SSLv3 TLSv1;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
```
|
I am no expert, but since your code works locally, try the following to get an idea of where the problem might be:
a) Run the production environment locally and see if the problem persists there.
b) Try a test run without any JavaScript enabled, or at least disable the custom scripts on the production server.
c) Try to redirect after Devise authentication to an existing/static page.
I am sure you will get some hints or ideas.
|
42,329,346
|
I am trying to learn CloudFormation. I'm stuck with a scenario where I need a second EC2 instance started after the first EC2 instance is provisioned and good to go.
This is what I have in the UserData of instance one:
```
"#!/bin/bash\n",
"#############################################################################################\n",
"sudo add-apt-repository ppa:fkrull/deadsnakes\n",
"sudo apt-get update\n",
"curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -\n",
"sudo apt-get install build-essential libssl-dev python2.7 python-setuptools -y\n",
"#############################################################################################\n",
"# Install Easy Install\n",
"#############################################################################################\n",
"easy_install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n",
"#############################################################################################\n",
"#############################################################################################\n",
"# GIT LFS Repo\n",
"#############################################################################################\n",
"curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash\n",
"#############################################################################################\n",
"cfn-init",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --configsets InstallAndRun ",
" --region ",
{
"Ref": "AWS::Region"
},
"\n",
"#############################################################################################\n",
"# Signal the status from cfn-init\n",
"cfn-signal -e 0 ",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource UI",
" --region ",
{
"Ref": "AWS::Region"
},
" ",
{
"Ref": "WaitHandleUIConfig"
},
"\n"
```
I have a WaitCondition, which I think is what's used to do this:
```
"WaitHandleUIConfig" : {
"Type" : "AWS::CloudFormation::WaitConditionHandle",
"Properties" : {}
},
"WaitConditionUIConfig" : {
"Type" : "AWS::CloudFormation::WaitCondition",
"DependsOn" : "UI",
"Properties" : {
"Handle" : { "Ref" : "WaitHandleUIConfig" },
"Timeout" : "500"
}
}
```
In the second instance I use DependsOn to wait for the first instance:
```
"Service": {
"Type": "AWS::EC2::Instance",
"Properties": {
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "1ba546d0-2bad-4b68-af47-6e35159290ca"
},
},
"DependsOn":"WaitConditionUIConfig"
}
```
This isn't working. I keep getting the error:
**WaitCondition timed out. Received 0 conditions when expecting 1**
Any help would be appreciated.
Thanks
|
2017/02/19
|
[
"https://Stackoverflow.com/questions/42329346",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/907937/"
] |
Glad you figured it out! Posting my last comment as an answer below.
Thanks for the live example. So, I click on the submit button and I can see the browser sending a POST request. The server responds successfully, but with a redirect.
`POST https://win-marketing.sciencesupercrew.com/en/users/login` -> `301 Moved Permanently with Location: https://win-marketing.sciencesupercrew.com/en/users/login/`. The form data are sent correctly as `user[email]` and `user[password]`.
If you say that this request does not reach Rails, then you need to check your nginx configuration.
**UPDATED SOLUTION**
The issue was my nginx configuration. The one that works is the following:
```
server {
listen 80;
server_name win-marketing.sciencesupercrew.com www.win-marketing.sciencesupercrew.com;
return 301 https://www.win-marketing.sciencesupercrew.com$request_uri;
}
server {
listen 443 ssl;
server_name win-marketing.sciencesupercrew.com;
passenger_enabled on;
rails_env production;
root /home/demo/windhagermediahub/public;
ssl on;
ssl_certificate /home/demo/ssl/ssl-bundle.crt;
ssl_certificate_key /home/demo/ssl/ssc.key;
ssl_session_timeout 5m;
ssl_protocols SSLv2 SSLv3 TLSv1;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
error_page 500 502 503 504 /500.html;
client_max_body_size 4G;
keepalive_timeout 10;
}
```
After adding these lines it started working:
```
ssl_session_timeout 5m;
ssl_protocols SSLv2 SSLv3 TLSv1;
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
```
|
This sounds like an SSL issue to me.
Your form is definitely POSTing to the route. You can confirm this by adding `data-remote="true"` to your form HTML in the browser, then watching the console as you make the request:
```
XHR finished loading: POST "https://win-marketing.sciencesupercrew.com/en/users/login"
```
When you POST, however, you are immediately redirected. You can see this if you curl the route:
```
$ curl -i -X POST https://win-marketing.sciencesupercrew.com/en/users/login
HTTP/1.1 301 Moved Permanently
Server: nginx/1.10.2
Date: Thu, 23 Feb 2017 14:58:04 GMT
Content-Type: text/html
Content-Length: 185
Location: https://win-marketing.sciencesupercrew.com/en/users/login/
Connection: keep-alive
```
Here's where SSL comes in. [Rails returns a 301 (moved permanently) error](https://github.com/rails/rails/blob/c1f990cfb6d8bbb3b56c0a4ba23dcfe2037e2805/actionpack/lib/action_controller/metal/force_ssl.rb#L82) when a request is not SSL but it should be:
```
def force_ssl_redirect(host_or_options = nil)
  unless request.ssl?
    options = {
      :protocol => 'https://',
      :host => request.host,
      :path => request.fullpath,
      :status => :moved_permanently
    }
```
I'm not sure how you've set up SSL on your server. Are you terminating SSL in a load balancer, for example? Whatever you're doing, it seems like SSL is the culprit. To confirm *temporarily* try:
```
# config/production.rb
config.force_ssl = false
```
and see if that solves it.
|
16,256,341
|
I am trying to go back to the top of a function (not restart it, but go to the top) but cannot figure out how to do this. Instead of giving you the long code, I'm just going to make up an example of what I want:
```
used = [0, 0, 0]

def fun():
    score = input("please enter a place to put it: ")
    if score == "this one":
        score[0] = total
    if score == "here":
        if used[1] == 0:
            score[1] = total
            used[1] = 1
        elif used[1] == 1:
            print("Already used")
            #### Go back to score so it can let you choose somewhere else.

list = ["this one", "here"]
```
I need to be able to go back so essentially it forgets you tried to use "here" again, without wiping the memory. Although I know they are awful, I basically need a goto, but those don't exist in Python. Any ideas?
\*Edit: Ah sorry, I forgot to mention that when it's already in use, I need to be able to pick somewhere else for it to go (I just didn't want to bog down the code). I added the score == "this one" case, so if I tried to put it in "here" and "here" was already taken, it would give me the option of redoing score = input("") and then I could take that value and plug it into "this one" instead of "here". Your loop statement gets back to the top, but doesn't let me take the value I just found and put it somewhere else. I hope this is making sense :p
|
2013/04/27
|
[
"https://Stackoverflow.com/questions/16256341",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2255589/"
] |
What you are looking for is a `while` loop. You want to set up your loop to keep going until a place is found. Something like this:
```
def fun():
    found_place = False
    while not found_place:
        score = input("please enter a place to put it: ")
        if score == "here":
            if used[1] == 0:
                score[1] = total
                used[1] = 1
                found_place = True
            elif used[1] == 1:
                print("Already used")
```
That way, once you've found a place, you set `found_place` to `True` which stops the loop. If you haven't found a place, `found_place` remains `False` and you go through the loop again.
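To address the edit (picking somewhere else without losing the value), the same `while` pattern can be driven by a table of places. This is a minimal runnable sketch, not the asker's actual code: the `total` value and slot names are made up, and an iterator of guesses stands in for `input()`:

```python
total = 25
used = {"this one": False, "here": False}
score = {}

def fun(guesses):
    # Keep looping until an unused place is found; `guesses` stands in for input().
    found_place = False
    while not found_place:
        choice = next(guesses)
        if used[choice]:
            print("Already used")   # not placed yet, so the loop asks again
        else:
            score[choice] = total
            used[choice] = True
            found_place = True

fun(iter(["here"]))              # first turn: "here" is free
fun(iter(["here", "this one"]))  # second turn: "here" is taken, retry succeeds
print(score)  # -> {'here': 25, 'this one': 25}
```

The value chosen on a rejected attempt is simply re-read on the next pass, so nothing is wiped.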
|
As Ashwini correctly points out, you should use a `while` loop:
```
def fun():
    end_condition = False
    while not end_condition:
        score = input("please enter a place to put it: ")
        if score == "here":
            if used[1] == 0:
                score[1] = total
                used[1] = 1
                end_condition = True
            elif used[1] == 1:
                print("Already used")
```
|
44,628,435
|
I am making a request to my api.ai chatbot after following the instructions given on their official GitHub repository [here](https://github.com/api-ai/apiai-python-client/blob/master/examples/send_text_example.py). The following is the code for which I am getting an error; the solution is supposedly to call the function with proxy settings, but I do not know a way to do so.
```
ai = apiai.ApiAI(CLIENT_ACCESS_TOKEN)
request = ai.text_request()
request.set_proxy('proxy1.company.com:8080','http')
question = input()
request.query = question
response = request.getresponse()
```
I get the following error on the last line.
```
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
```
Please suggest how I use the proxy settings.
I am using Anaconda on Windows to run the script.
|
2017/06/19
|
[
"https://Stackoverflow.com/questions/44628435",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7872066/"
] |
> I've tried various casting attempts
Have you tried this one?
```
.FirstOrDefault(ids => ids.Contains((T)propertyInfo.GetValue(item, null)))
```
Since `ids` is of type `IGrouping<TKey, TElement>` where `TElement` is of type `T` in your case, casting the value of property to `T` will allow for the comparison.
|
Ok, so I cracked it in the end. I needed to add more detail in my generic method header/signature.
```
public static IEnumerable<T> MixObjectsByProperty<T, U>(
IEnumerable<T> objects, string propertyName, IEnumerable<IEnumerable<U>> groupsToMergeByProperty = null)
where T : class
where U : class
{
...
```
Note, I added class constraints which allows me to use nullable operators, etc.
And the body became (after adding a propertyValue variable which is cast to the correct type!):
```
groups =
(from item in objects
let propertyInfo = item.GetType().GetProperty(propertyName)
where propertyInfo != null
let propertyValue = (U)propertyInfo.GetValue(item, null)
group item by
groupsToMergeByProperty
.FirstOrDefault(ids => ids.Contains(propertyValue))
?.First()
?? propertyValue
into itemGroup
select itemGroup.ToArray())
.ToList();
```
|
44,628,435
|
I am making a request to my api.ai chatbot after following the instructions given on their official GitHub repository [here](https://github.com/api-ai/apiai-python-client/blob/master/examples/send_text_example.py). The following is the code for which I am getting an error; the solution is supposedly to call the function with proxy settings, but I do not know a way to do so.
```
ai = apiai.ApiAI(CLIENT_ACCESS_TOKEN)
request = ai.text_request()
request.set_proxy('proxy1.company.com:8080','http')
question = input()
request.query = question
response = request.getresponse()
```
I get the following error on the last line.
```
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
```
Please suggest how I use the proxy settings.
I am using Anaconda on Windows to run the script.
|
2017/06/19
|
[
"https://Stackoverflow.com/questions/44628435",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7872066/"
] |
Without seeing the whole method (because your type signatures clearly indicate it's not the whole method), here's an example implementation:
```
public class Ext
{
public static List<T[]> MixObjectsByProperty<T, TProp, U>(
IEnumerable<T> source,
Expression<Func<T, TProp>> property,
IEnumerable<IEnumerable<U>> groupsToMix = null)
where T : class
where U : TProp
{
var prop = (PropertyInfo)(property.Body as MemberExpression)?.Member;
if (prop == null) throw new ArgumentException("Couldn't determine property");
var accessor = property.Compile();
var groups =
from item in source
let value = (U)accessor(item)
group item by
groupsToMix.FirstOrDefault((ids => ids.Contains(value)))
into itemGroup
select itemGroup.ToArray();
return groups.ToList();
}
}
```
**For the love of god** stop passing property names and using reflection; the rest of LINQ makes use of the gorgeous expression system, and you should too!
|
> I've tried various casting attempts
Have you tried this one?
```
.FirstOrDefault(ids => ids.Contains((T)propertyInfo.GetValue(item, null)))
```
Since `ids` is of type `IGrouping<TKey, TElement>` where `TElement` is of type `T` in your case, casting the value of property to `T` will allow for the comparison.
|
44,628,435
|
I am making a request to my api.ai chatbot after following the instructions given on their official GitHub repository [here](https://github.com/api-ai/apiai-python-client/blob/master/examples/send_text_example.py). The following is the code for which I am getting an error; the solution is supposedly to call the function with proxy settings, but I do not know a way to do so.
```
ai = apiai.ApiAI(CLIENT_ACCESS_TOKEN)
request = ai.text_request()
request.set_proxy('proxy1.company.com:8080','http')
question = input()
request.query = question
response = request.getresponse()
```
I get the following error on the last line.
```
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
```
Please suggest how I use the proxy settings.
I am using Anaconda on Windows to run the script.
|
2017/06/19
|
[
"https://Stackoverflow.com/questions/44628435",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7872066/"
] |
Without seeing the whole method (because your type signatures clearly indicate it's not the whole method), here's an example implementation:
```
public class Ext
{
public static List<T[]> MixObjectsByProperty<T, TProp, U>(
IEnumerable<T> source,
Expression<Func<T, TProp>> property,
IEnumerable<IEnumerable<U>> groupsToMix = null)
where T : class
where U : TProp
{
var prop = (PropertyInfo)(property.Body as MemberExpression)?.Member;
if (prop == null) throw new ArgumentException("Couldn't determine property");
var accessor = property.Compile();
var groups =
from item in source
let value = (U)accessor(item)
group item by
groupsToMix.FirstOrDefault((ids => ids.Contains(value)))
into itemGroup
select itemGroup.ToArray();
return groups.ToList();
}
}
```
**For the love of god** stop passing property names and using reflection; the rest of LINQ makes use of the gorgeous expression system, and you should too!
|
Ok, so I cracked it in the end. I needed to add more detail in my generic method header/signature.
```
public static IEnumerable<T> MixObjectsByProperty<T, U>(
IEnumerable<T> objects, string propertyName, IEnumerable<IEnumerable<U>> groupsToMergeByProperty = null)
where T : class
where U : class
{
...
```
Note, I added class constraints which allows me to use nullable operators, etc.
And the body became (after adding a propertyValue variable which is cast to the correct type!):
```
groups =
(from item in objects
let propertyInfo = item.GetType().GetProperty(propertyName)
where propertyInfo != null
let propertyValue = (U)propertyInfo.GetValue(item, null)
group item by
groupsToMergeByProperty
.FirstOrDefault(ids => ids.Contains(propertyValue))
?.First()
?? propertyValue
into itemGroup
select itemGroup.ToArray())
.ToList();
```
|
53,311,721
|
This question is killing me softly at the moment.
I am trying to learn Python, Lambda, and DynamoDB.
Python looks awesome. I am able to connect to MySQL when using a normal MySQL server like XAMPP. The goal is to learn to work with DynamoDB, but somehow I am unable to get_item from DynamoDB. This has really been kicking my head in for the last two days.
I have watched tons of YouTube videos and read the AWS documentation.
Any clue as to what I am doing wrong?
My code so far:
```
import json
import boto3
from boto3.dynamodb.conditions import Key, Attr
# always start with the lambda_handler
def lambda_handler(event, context):
    # make the connection to dynamodb
    dynamodb = boto3.resource('dynamodb')
    # select the table
    table = dynamodb.Table("html_contents")
    # get item from database
    items = table.get_item(Key={"id": '1'})
```
Everywhere I look I see that I should do it like this.
But I keep getting the following error
```
{errorMessage=An error occurred (ValidationException) when calling the GetItem operation: The provided key element does not match the schema, errorType=ClientError, stackTrace=[["\/var\/task\/lambda_function.py",16,"lambda_handler","\"id\": '1'"],["\/var\/runtime\/boto3\/resources\/factory.py",520,"do_action","response = action(self, *args, **kwargs)"],["\/var\/runtime\/boto3\/resources\/action.py",83,"__call__","response = getattr(parent.meta.client, operation_name)(**params)"],["\/var\/runtime\/botocore\/client.py",314,"_api_call","return self._make_api_call(operation_name, kwargs)"],["\/var\/runtime\/botocore\/client.py",612,"_make_api_call","raise error_class(parsed_response, operation_name)"]]}
```
My database structure:
[Screenshot of my database structure](https://i.stack.imgur.com/DU7c9.png)
My DynamoDB settings:
* Table name: html_contents
* Primary partition key: id (Number)
* Primary sort key: -
* Point-in-time recovery: DISABLED
* Encryption: DISABLED
* Time to live attribute: DISABLED
* Table status: Active
What am I doing wrong here? I'm starting to think I did something wrong with the AWS configuration.
Thank you in advance.
Wesley
|
2018/11/15
|
[
"https://Stackoverflow.com/questions/53311721",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9821202/"
] |
Thanks to @ippi. It was the quotes that I was using.
```
table.get_item(Key={"id": '1'})
```
needed to be
```
table.get_item(Key={"id": 1})
```
since the key is a number and not a string.
Hope this helps for the next person(s) with the same problem.
|
You're facing this problem because you created a table with a partition key whose data type is a number.
You're then performing a read-item operation that specifies the partition key as a string, when it needs to be a number, and that causes this issue.
I'm the author of Lucid-Dynamodb, a minimalist wrapper to AWS DynamoDB. It covers all the Dynamodb operations.
**Reference:** <https://github.com/dineshsonachalam/Lucid-Dynamodb#4-read-an-item>
|
61,973,288
|
Currently I'm learning how to use Apache Airflow and trying to create a simple DAG script like this:
```
from datetime import datetime
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.operators.python_operator import PythonOperator
def print_hello():
    return 'Hello world!'

dag = DAG('hello_world', description='Simple tutorial DAG',
          schedule_interval='0 0 * * *',
          start_date=datetime(2020, 5, 23), catchup=False)

dummy_operator = DummyOperator(task_id='dummy_task', retries=3, dag=dag)
hello_operator = PythonOperator(task_id='hello_task', python_callable=print_hello, dag=dag)

dummy_operator >> hello_operator
```
I ran this DAG using the web server and it ran successfully; I even checked the logs:
```
[2020-05-23 20:43:53,411] {taskinstance.py:669} INFO - Dependencies all met for <TaskInstance: hello_world.hello_task 2020-05-23T13:42:17.463955+00:00 [queued]>
[2020-05-23 20:43:53,431] {taskinstance.py:669} INFO - Dependencies all met for <TaskInstance: hello_world.hello_task 2020-05-23T13:42:17.463955+00:00 [queued]>
[2020-05-23 20:43:53,432] {taskinstance.py:879} INFO -
--------------------------------------------------------------------------------
[2020-05-23 20:43:53,432] {taskinstance.py:880} INFO - Starting attempt 1 of 1
[2020-05-23 20:43:53,432] {taskinstance.py:881} INFO -
--------------------------------------------------------------------------------
[2020-05-23 20:43:53,448] {taskinstance.py:900} INFO - Executing <Task(PythonOperator): hello_task> on 2020-05-23T13:42:17.463955+00:00
[2020-05-23 20:43:53,477] {standard_task_runner.py:53} INFO - Started process 7442 to run task
[2020-05-23 20:43:53,685] {logging_mixin.py:112} INFO - Running %s on host %s <TaskInstance: hello_world.hello_task 2020-05-23T13:42:17.463955+00:00 [running]> LAPTOP-9BCTKM5O.localdomain
[2020-05-23 20:43:53,715] {python_operator.py:114} INFO - Done. Returned value was: Hello world!
[2020-05-23 20:43:53,738] {taskinstance.py:1052} INFO - Marking task as SUCCESS.dag_id=hello_world, task_id=hello_task, execution_date=20200523T134217, start_date=20200523T134353, end_date=20200523T134353
[2020-05-23 20:44:03,372] {logging_mixin.py:112} INFO - [2020-05-23 20:44:03,372] {local_task_job.py:103} INFO - Task exited with return code 0
```
but when I tried to test-run a single task using this command
```
airflow test dags/main.py hello_task 2020-05-23
```
it shows this error
```
airflow.exceptions.AirflowException: dag_id could not be found: dags/main.py. Either the dag did not exist or it failed to parse.
```
Where did I go wrong?
|
2020/05/23
|
[
"https://Stackoverflow.com/questions/61973288",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Put a ROW ID on your tables
```
library(tidyverse)  # read_table (readr), rowid_to_column (tibble), bind_rows/group_by/summarise (dplyr)

df_1 <- read_table("A B C
2.3 5 3
12 3 1
0.4 13 2") %>%
rowid_to_column("ROW")
df_2 <- read_table("A B C
4.3 23 1
1 7 2
0.4 10 2") %>%
rowid_to_column("ROW")
df_3 <- read_table("A B C
1.3 3 3
2.2 4 2
12.4 10 1") %>%
rowid_to_column("ROW")
```
Bind them together in an ensemble
```
ensamb <- bind_rows(df_1, df_2, df_3)
```
`group_by` row and then summarise each column by its own statistic:
```
ensamb %>%
group_by(ROW) %>%
summarise(A = mean(A), B = median(B),
C = C[which.max(C)])
# A tibble: 3 x 4
ROW A B C
<int> <dbl> <dbl> <dbl>
1 1 2.63 5 3
2 2 5.07 4 2
3 3 4.4 10 2
```
|
You can put all the dataframes in a list:
```
list_df <- mget(ls(pattern = 'df_\\d+'))
```
Then calculate the stats for each column separately.
```
data.frame(A = Reduce(`+`, lapply(list_df, `[[`, 1))/length(list_df),
B = apply(do.call(rbind, lapply(list_df, `[[`, 2)), 2, median),
C = apply(do.call(rbind, lapply(list_df, `[[`, 3)), 2, Mode),
row.names = NULL)
# A B C
#1 2.633333 5 3
#2 5.066667 4 2
#3 4.400000 10 2
```
where `Mode` function is taken from [here](https://stackoverflow.com/a/8189441/3962914) :
```
Mode <- function(x) {
ux <- unique(x)
ux[which.max(tabulate(match(x, ux)))]
}
```
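For comparison, the same first-maximum mode logic can be sketched in Python (a hypothetical helper, not part of the answer):

```python
def mode(xs):
    # Unique values in first-seen order, then the first value with the
    # highest count, mirroring ux[which.max(tabulate(match(x, ux)))] in R.
    ux = list(dict.fromkeys(xs))
    counts = [xs.count(u) for u in ux]
    return ux[counts.index(max(counts))]

print(mode([3, 2, 2, 1]))  # -> 2
```

Like the R version, ties go to the value seen first.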
|
28,741,772
|
I am a novice writing a simple script to analyse a game. The data I would like to use describes "Items" and they have statistics associated with them (e.g. "Attack Speed").
To clarify: The game is not something I have access to beyond being a player, my script is to compare combinations of the items. I will manually look up the information on each item, for example:
```
Name: Bloodforge
Price: 2715
Lifesteal: 0.15
Power: 40
```
These items will change as they are updated in the actual game, so I am looking for a way to store/update them manually (editing text) and easily access the statistics for these items using python.
I have looked into using XML and JSON, as well as MySQL. Are there any other suggestions that might fit this usage? Which libraries should I use?
|
2015/02/26
|
[
"https://Stackoverflow.com/questions/28741772",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4610057/"
] |
Without further info, I would say to use JSON, as it's easy to use and human-readable:
```
{
"Attack Speed": 5,
"Items": ["Dirt", "Flower", "Egg"]
}
```
|
Well, you've got many more options. From least to most complicated:
* [Pickle](https://wiki.python.org/moin/UsingPickle)
* [Shelve](http://pymotw.com/2/shelve/)
* [SQLite](http://zetcode.com/db/sqlitepythontutorial/)
* [SQLAlchemy](http://www.sqlalchemy.org/)
What you should use really depends on what your needs are exactly. If you are only starting to develop the game, and are a novice, go for pickle or shelve (or both) for now.
They are simple and will let you keep your focus on game mechanics and on learning Python.
Later, when you need something more complicated, you can move to a relational database and go to the web with SQLAlchemy.
**EDIT:**
The information you provided suggests that you may not need Python at all. What you want could be achieved in a spreadsheet.
But analysing the data would be simpler in SQL, so I can recommend MySQL for it, or, if you want something really simple, SQLite with some management tools, for example [this Firefox plugin](https://addons.mozilla.org/pl/firefox/addon/sqlite-manager/). You can create the table you need, manually create rows in it, and then write some SQL query to give you statistics in the way you need.
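As a rough sketch of the shelve route (standard library only; the item data is the question's example, and the storage path is made up):

```python
import os
import shelve
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "items")  # any writable path works

# Store an item once...
with shelve.open(db_path) as db:
    db["Bloodforge"] = {"Price": 2715, "Lifesteal": 0.15, "Power": 40}

# ...and read it back later, like a persistent dict.
with shelve.open(db_path) as db:
    power = db["Bloodforge"]["Power"]

print(power)  # -> 40
```

Shelve keeps the dict-like access pattern while persisting between runs; for hand-editing the data as text, JSON is the better fit.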
|
19,228,516
|
Here is my argparse sample, say sample.py:
```
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("-p", nargs="+", help="Stuff")
args = parser.parse_args()
print args
```
Python - 2.7.3
I expect that the user supplies a list of arguments separated by spaces after the -p option. For example, if you run
```
$ sample.py -p x y
Namespace(p=['x', 'y'])
```
But my problem is that when you run
```
$ sample.py -p x -p y
Namespace(p=['y'])
```
which is neither here nor there. I would like one of the following:
* Throw an error telling the user not to use -p twice, and instead supply the values as one argument
* Just assume it is the same option and produce a list of ['x','y'].
I can see that Python 2.7 does neither of these, which confuses me. Can I get Python to do one of the two behaviours documented above?
|
2013/10/07
|
[
"https://Stackoverflow.com/questions/19228516",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/44124/"
] |
> Note: Python 3.8 adds an `action="extend"` which will create the desired list of ['x','y'].
To produce a list of ['x','y'] use `action='append'`. Actually it gives
```
Namespace(p=[['x'], ['y']])
```
For each `-p` it gives a list `['x']` as dictated by `nargs='+'`, but `append` means, add that value to what the Namespace already has. The default action just sets the value, e.g. `NS['p']=['x']`. I'd suggest reviewing the `action` paragraph in the docs.
`optionals` allow repeated use by design. It enables actions like `append` and `count`. Usually users don't expect to use them repeatedly, or are happy with the last value. `positionals` (without the `-flag`) cannot be repeated (except as allowed by `nargs`).
[How to add optional or once arguments?](https://stackoverflow.com/questions/18544468/how-to-add-optional-or-once-arguments/18564559#18564559) has some suggestions on how to create a 'no repeats' argument. One is to create a custom `action` class.
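A small self-contained demonstration of the `append` behaviour described above (written for Python 3, unlike the Python 2 sample in the question):

```python
import argparse
from itertools import chain

parser = argparse.ArgumentParser()
parser.add_argument("-p", nargs="+", action="append", help="Stuff")

args = parser.parse_args(["-p", "x", "-p", "y", "z"])
print(args.p)  # -> [['x'], ['y', 'z']]  (one sublist per -p occurrence)

# Flatten the nested lists if you want a single ['x', 'y', 'z']:
flat = list(chain.from_iterable(args.p))
print(flat)  # -> ['x', 'y', 'z']
```

Each `-p` contributes its own `nargs='+'` list, and `append` collects those lists rather than merging them, hence the flattening step.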
|
I ran into the same issue. I decided to go with the custom action route as suggested by mgilson.
```
import argparse
class ExtendAction(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        if getattr(namespace, self.dest, None) is None:
            setattr(namespace, self.dest, [])
        getattr(namespace, self.dest).extend(values)
parser = argparse.ArgumentParser()
parser.add_argument("-p", nargs="+", help="Stuff", action=ExtendAction)
args = parser.parse_args()
print args
```
This results in
```
$ ./sample.py -p x -p y -p z w
Namespace(p=['x', 'y', 'z', 'w'])
```
Still, it would have been much neater if there was an `action='extend'` option in the library by default.
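On Python 3.8 and later there is such a built-in `action="extend"`, which gives the same flattened result without a custom class:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-p", nargs="+", action="extend", help="Stuff")

args = parser.parse_args(["-p", "x", "-p", "y", "z"])
print(args.p)  # -> ['x', 'y', 'z']
```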
|
55,376,876
|
I would like to set up the local pgAdmin in server mode behind a reverse proxy. The reverse proxy and pgAdmin could be on the same machine. I tried to set it up but it always fails.
Here is my pgAdmin conf:
```
Listen 8080
<VirtualHost *:8080>
SSLEngine on
SSLCertificateFile /etc/pki/tls/certs/pgadmin.crt
SSLCertificateKeyFile /etc/pki/tls/private/pgadmin.key
LoadModule wsgi_module modules/mod_wsgi.so
LoadModule ssl_module modules/mod_ssl.so
WSGIDaemonProcess pgadmin processes=1 threads=25
WSGIScriptAlias /pgadmin /usr/lib/python2.7/site-packages/pgadmin4-web/pgAdmin4.wsgi
<Directory /usr/lib/python2.7/site-packages/pgadmin4-web/>
WSGIProcessGroup pgadmin
WSGIApplicationGroup %{GLOBAL}
<IfModule mod_authz_core.c>
# Apache 2.4
Require all granted
</IfModule>
<IfModule !mod_authz_core.c>
# Apache 2.2
Order Deny,Allow
Deny from All
Allow from 127.0.0.1
Allow from ::1
</IfModule>
</Directory>
</VirtualHost>
```
and my reverse proxy conf
```
Listen 443
<VirtualHost *:443>
SSLEngine on
SSLCertificateFile /etc/pki/tls/certs/localhost.crt
SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
ErrorLog /var/log/httpd/reverse_proxy_error.log
CustomLog /var/log/httpd/reverse_proxy_access.log combined
SSLProxyEngine on
SSLProxyVerify require
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
SSLProxyCACertificateFile "/etc/pki/tls/certs/ca-bundle.crt"
ProxyPreserveHost On
ProxyPass / https://localhost:8080/pgadmin
ProxyPassReverse / https://localhost:8080/pgadmin
</VirtualHost>
```
httpd starts, but when I test it with
```
wget --no-check-certificate https://localhost/
```
it gives me a 400 error,
but the
```
wget --no-check-certificate https://localhost:8080/pgadmin
```
works fine. Where is the problem in my config?
|
2019/03/27
|
[
"https://Stackoverflow.com/questions/55376876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2886412/"
] |
This works for me. I proxy pgadmin to a subdirectory (https://localhost/pgadmin):
```
<VirtualHost *:80>
ServerName localhost
DocumentRoot "/var/www"
<Directory "/var/www">
AllowOverride all
</Directory>
ProxyPass /ws/ ws://0.0.0.0:8888/
ProxyPass /phpmyadmin/ http://phpmyadmin/
<Location /pgadmin/>
ProxyPass http://pgadmin:5050/
ProxyPassReverse http://pgadmin:5050/
RequestHeader set X-Script-Name /pgadmin
RequestHeader set Host $http_host
</Location>
</VirtualHost>
```
|
Have you tried with the latest version? I think it is fixed in this commit. Ref: [LINK](https://git.postgresql.org/gitweb/?p=pgadmin4.git;a=commit;h=f401def044c8b47974d58c71ff9e6f71f34ef41d)
Online Docs: <https://www.pgadmin.org/docs/pgadmin4/dev/server_deployment.html>
|
55,376,876
|
I would like to set up the local pgadmin in server mode behind a reverse proxy. The reverse proxy and pgadmin could be on the same machine. I tried to set it up but it always fails.
Here is my pgadmin conf:
```
Listen 8080
<VirtualHost *:8080>
SSLEngine on
SSLCertificateFile /etc/pki/tls/certs/pgadmin.crt
SSLCertificateKeyFile /etc/pki/tls/private/pgadmin.key
LoadModule wsgi_module modules/mod_wsgi.so
LoadModule ssl_module modules/mod_ssl.so
WSGIDaemonProcess pgadmin processes=1 threads=25
WSGIScriptAlias /pgadmin /usr/lib/python2.7/site-packages/pgadmin4-web/pgAdmin4.wsgi
<Directory /usr/lib/python2.7/site-packages/pgadmin4-web/>
WSGIProcessGroup pgadmin
WSGIApplicationGroup %{GLOBAL}
<IfModule mod_authz_core.c>
# Apache 2.4
Require all granted
</IfModule>
<IfModule !mod_authz_core.c>
# Apache 2.2
Order Deny,Allow
Deny from All
Allow from 127.0.0.1
Allow from ::1
</IfModule>
</Directory>
</VirtualHost>
```
and my reverse proxy conf
```
Listen 443
<VirtualHost *:443>
SSLEngine on
SSLCertificateFile /etc/pki/tls/certs/localhost.crt
SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
ErrorLog /var/log/httpd/reverse_proxy_error.log
CustomLog /var/log/httpd/reverse_proxy_access.log combined
SSLProxyEngine on
SSLProxyVerify require
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
SSLProxyCACertificateFile "/etc/pki/tls/certs/ca-bundle.crt"
ProxyPreserveHost On
ProxyPass / https://localhost:8080/pgadmin
ProxyPassReverse / https://localhost:8080/pgadmin
</VirtualHost>
```
httpd starts, but when I test it with
```
wget --no-check-certificate https://localhost/
```
it gives me a 400 error,
but the
```
wget --no-check-certificate https://localhost:8080/pgadmin
```
works fine. Where is the problem in my config?
|
2019/03/27
|
[
"https://Stackoverflow.com/questions/55376876",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2886412/"
] |
This works for me. I proxy pgadmin to a subdirectory (https://localhost/pgadmin):
```
<VirtualHost *:80>
ServerName localhost
DocumentRoot "/var/www"
<Directory "/var/www">
AllowOverride all
</Directory>
ProxyPass /ws/ ws://0.0.0.0:8888/
ProxyPass /phpmyadmin/ http://phpmyadmin/
<Location /pgadmin/>
ProxyPass http://pgadmin:5050/
ProxyPassReverse http://pgadmin:5050/
RequestHeader set X-Script-Name /pgadmin
RequestHeader set Host $http_host
</Location>
</VirtualHost>
```
|
This config works.
Use 0.0.0.0 for the pgadmin Docker container, otherwise use your IP,
and replace port 5050 with your pgadmin port.
```
<VirtualHost *:80>
ServerName pgadmin.yourdomain.com
RedirectMatch permanent ^/pgadmin4$ /pgadmin4/
ProxyPreserveHost On
ProxyPass / http://0.0.0.0:5050/
ProxyPassReverse / http://0.0.0.0:5050/
Header edit Location ^/ /pgadmin4/
Header always set X-Script-Name /pgadmin4
</VirtualHost>
```
To configure with SSL, replace yourdomain.com with a valid SSL certificate for your domain:
```
<VirtualHost *:80>
ServerName pgadmin.yourdomain.com
RedirectMatch permanent ^/(.*)$ https://pgadmin.yourdomain.com/$1
</VirtualHost>
<VirtualHost *:443>
ServerName pgadmin.yourdomain.com
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/yourdomain.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/yourdomain.com/privkey.pem
RedirectMatch permanent ^/pgadmin4$ /pgadmin4/
ProxyPreserveHost On
ProxyPass / http://0.0.0.0:5050/
ProxyPassReverse / http://0.0.0.0:5050/
Header edit Location ^/ /pgadmin4/
Header always set X-Script-Name /pgadmin4
</VirtualHost>
```
|
13,336,628
|
I have a very simple web page example that reads from an HTML file using Python. The HTML file, called led.html, is below:
```
<html>
<body>
<br>
<p>
<p>
<a href="?switch=1"><img src="images/on.png"></a>
</body>
</html>
```
and the python code is:
```
import cherrypy
import os.path
import struct

class Server(object):
    led_switch = 1

    def index(self, switch=''):
        html = open('led.html', 'r').read()
        if switch:
            self.led_switch = int(switch)
            print "Hello world"
        return html
    index.exposed = True

conf = {
    'global': {
        'server.socket_host': '0.0.0.0', # 0.0.0.0 or specific IP
        'server.socket_port': 8080       # server port
    },
    '/images': { # images served as static files
        'tools.staticdir.on': True,
        'tools.staticdir.dir': os.path.abspath('images')
    },
    '/favicon.ico': { # favorite icon
        'tools.staticfile.on': True,
        'tools.staticfile.filename': os.path.abspath("images/bulb.ico")
    }
}

cherrypy.quickstart(Server(), config=conf)
```
The web page contains only one button, called "on"; when I click it I can see the text "Hello world" displayed in the terminal.
My question is: how do I make this text display on the web page above the "on" button after clicking it?
Thanks in advance.
|
2012/11/11
|
[
"https://Stackoverflow.com/questions/13336628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1813738/"
] |
If you need to swap *all* first elements (of each pair), and not just `(1, 36)` and `(0, 36)`, you can do
`fwd_count_sort = sorted(rvs_count.items(), key=lambda x: (x[0][1], -x[0][0]), reverse=True)`
|
I'm not exactly sure about the definition of your sorting criteria, but this is a method to sort the `pair` list according to the values in `fwd_count` and `rvs_count`. Hopefully you can use this to get the result you want.
```
def keyFromPair(pair):
    """Return a tuple (f, r) to be used for sorting the pairs by frequency."""
    global fwd_count
    global rvs_count
    first, second = pair
    countFirstInv = -fwd_count[first]  # use the negative to reverse the sort order
    countSecond = rvs_count[second]
    return (countFirstInv, countSecond)

pairs_sorted = sorted(pair, key=keyFromPair)
```
The basic idea is to use Python's built-in tuple ordering to sort on multiple keys, and to negate one of the values in the tuple to make it a reverse-order sort.
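A self-contained illustration of that tuple trick, using toy data rather than the question's counts:

```python
# Sort pairs ascending on the first element, descending on the second,
# by negating the second element inside the key tuple.
pairs = [(1, 10), (2, 5), (1, 30), (1, 20)]
ordered = sorted(pairs, key=lambda p: (p[0], -p[1]))
print(ordered)  # [(1, 30), (1, 20), (1, 10), (2, 5)]
```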
|
63,851,302
|
I need to execute the function below based on user input:
>
> If `X=0`, then from line `URL ....Print('Success` should be written to a file & get saved as `test.py`.
>
>
>
At the backend, the saved file (`Test.py`) would automatically get fetched by Task scheduler from the saved location & would run periodically.
And yes, there are many examples of writing a file or running Python from another file, but I couldn't find anything resembling writing a Python script from another file.
I am sure I'm missing a few basic steps.
```
if x == 0:
    ### Need to write below content to a file & save as test.py ######
    URL = "https://.../login"
    headers = {"Content-Type": "application/json"}
    params = {
        "userName": "xx",
        "password": "yy"
    }
    resp = requests.post(URL, headers=headers, data=json.dumps(params))
    if resp.status_code != 200:
        print('fail')
    else:
        print('Success')
else:
    ### Need to write below content to a file ######
    URL = "https://.../login"
    headers = {"Content-Type": "application/json"}
    params = {
        "userName": "RR",
        "password": "TT"
    }
    resp = requests.post(URL, headers=headers, data=json.dumps(params))
    if resp.status_code != 200:
        print('fail')
    else:
        print('Success')
```
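One common approach to the task described above is to keep the script body in a template string, fill in the variant-specific values, and write it out as `test.py`. This is a sketch: the URL stays a placeholder (the question elides the real one), and the credential values are the ones from the question.

```python
# Sketch: write one of two script variants to test.py depending on x.
script_template = '''\
import json
import requests

URL = "https://.../login"  # placeholder, as in the question
headers = {{"Content-Type": "application/json"}}
params = {{"userName": "{user}", "password": "{pwd}"}}

resp = requests.post(URL, headers=headers, data=json.dumps(params))
if resp.status_code != 200:
    print('fail')
else:
    print('Success')
'''

x = 0
user, pwd = ("xx", "yy") if x == 0 else ("RR", "TT")
with open("test.py", "w") as f:
    f.write(script_template.format(user=user, pwd=pwd))
```

The doubled braces (`{{`, `}}`) are literal braces in the generated file; only `{user}` and `{pwd}` are substituted by `str.format`.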
|
2020/09/11
|
[
"https://Stackoverflow.com/questions/63851302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13824611/"
] |
Given a scalar `x` and a vector `v`, the expression `x <= quantile(v, .95)` can be written as `sum(x > v) < Q` where `Q = .95 * numel(v)` \*.
Also, `A_1` can be split before the loop to avoid extra indexing.
Moreover, the innermost loop can be removed in favor of vectorization.
```
Af_1 = A_1(:,1);
Af_2 = A_2(:,1);
Af_3 = A_3(:,1);
As_1 = A_1(:,2:end);
As_2 = A_2(:,2:end);
As_3 = A_3(:,2:end);
Q = .95 * (n -1);
for i = 1:m
    for j = 1:m
        if any (sum (Af_1(i) + Af_2(j) + Af_3 > As_1(i,:) + As_2(j,:) + As_3, 2) < Q)
            check(i) = 1;
            break;
        end
    end
end
```
More optimization can be achieved by rearranging the expressions involved in the inequality and pre-computation:
```
lhs = A_3(:,1) - A_3(:,2:end);
lhsi = A_1(:,1) - A_1(:,2:end);
rhsj = A_2(:,2:end) - A_2(:,1);
Q = .95 * (n - 1);
for i = 1:m
    LHS = lhs + lhsi(i,:);
    for j = 1:m
        if any (sum (LHS > rhsj(j,:), 2) < Q)
            check(i) = 1;
            break;
        end
    end
end
```
\* Note that because of the method used in the computation of [quantile](https://www.mathworks.com/help/stats/quantile.html), you may get a slightly different result.
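A quick pure-Python sanity check of that count-versus-quantile identity, on toy data (the exact boundary differs slightly from MATLAB's interpolated quantile):

```python
# x <= quantile(v, .95) corresponds (up to boundary details) to sum(x > v) < Q.
v = list(range(1, 101))   # 1..100
Q = 0.95 * len(v)         # 95.0

def below_95th(x):
    return sum(x > e for e in v) < Q

print(below_95th(90), below_95th(99))  # True False
```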
|
Option 1:
Because all numbers are positive, you can do some optimizations. The 95th percentile will only get higher if you add `A1` to the mix; if you find the `j` and `k` with the greatest 95th percentile of `A2+A3` on the right side compared to the sum of the first two elements, you can simply reuse that for every `i`.
```
maxDif = -inf;
for j = 1 : m
for k = 1 : m
newDif = quantile(A_2..., 0.95) - A_2(j,1)-A_3(k,1);
maxDif = max(newDif, maxDif);
end
end
```
If even that is too slow, you can first get `maxDifA2` and `maxDifA3`, then estimate what `maxDif` will be for those particular `j` and `k` values and calculate it.
Now, for some numbers you will get that `maxDif > A_1`; then `check` is 1. For some numbers you will get that `maxDif + quantile(A1, 0.95) < A_1`; here `check` is 0 (if you estimated `maxDif` by separate calculation of A2 and A3 this isn't true!). For some (most?) you will unfortunately get values in between, and this won't be helpful at all. Then what remains is option 2 (it is also more straightforward):
Option 2:
You could save some time if you cache the summation `A_2+A_3` on the right side, as that calculation repeats for every different `i`, but that requires A LOT of memory. And `quantile` is the more expensive operation anyway, so you aren't saving much time. Something along the lines of
```
for j = 1 : m
    for k = 1 : m
        A23R(j,k,:) = A2(j,:)+A3(k,:); % Unlikely to fit in memory.
    end
end
```
Then you can perform your loops, using A23R and avoiding repeating that sum for every `i`.
|
19,479,644
|
I saw a Python example today that used `->`. For example, this is what I saw:
```
spam = None
bacon = 42
def monty_python(a:spam,b:bacon) -> "different:":
pass
```
What is that code doing? I'm not quite sure; I've never seen code like that before. I also don't really get what
```
a:spam,b:bacon
```
is doing either. Can someone explain this for me? I googled "what does -> do in python" but found no useful results.
|
2013/10/20
|
[
"https://Stackoverflow.com/questions/19479644",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2805788/"
] |
It is a function annotation for the return type. [Annotations](https://stackoverflow.com/questions/3038033/what-are-good-uses-for-python3s-function-annotations) do nothing inside the code; they are there to help a user with code completion (in my experience).
Here is the [PEP](http://www.python.org/dev/peps/pep-3107/) for it.
Let me demonstrate, what I mean by "annotations do nothing inside the code". Here is an example:
```
def fun(a: str, b: int) -> str:
    return 1

if __name__ == '__main__':
    print(fun(10, 10))
```
The above code runs without any errors, even though, as you can see, the first parameter *should* be a `str` and the second an `int`. This is only flagged in my IDE; the code runs just fine:

|
They're [function annotations](http://ceronman.com/2013/03/12/a-powerful-unused-feature-of-python-function-annotations/). They don't really do anything by themselves, but they can be used for documentation or in combination with metaprogramming.
|
13,700,045
|
I'm trying to build a graph library in python (along with standard graph-algorithms). I've tried to implement DFS and this is what it looks like
```
def DFS(gr, s, path):
    """ Depth first search
    Returns a list of nodes "findable" from s """
    if s in path:
        return False
    path.append(s)
    for each in gr.neighbors(s):
        if each not in path:
            DFS(gr, each, path)
```
This is working fine but I'm not happy with how it needs to be used. E.g. currently you need to do this
```
path = []
DFS(mygraph, "s", path)
print path
```
Instead, I want DFS to be used in this manner:
```
path = DFS(mygraph, "s")
print path
```
With the recursive DFS, I am unable to come up with an implementation that works like the above. Can someone give me some pointers on how I can achieve this?
|
2012/12/04
|
[
"https://Stackoverflow.com/questions/13700045",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/427069/"
] |
Just make a wrapper method that calls the one you already have:
```
def DFS(gr, s):
    path = []
    DFS2(gr, s, path)
    return path
```
Here `DFS2` is the method you showed above.
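Putting it together as a runnable sketch, with a minimal stand-in `Graph` class since the OP's library isn't shown:

```python
class Graph:
    """Minimal adjacency-list graph; only what DFS needs."""
    def __init__(self, adj):
        self.adj = adj
    def neighbors(self, node):
        return self.adj.get(node, [])

def DFS2(gr, s, path):          # the OP's recursive function
    if s in path:
        return False
    path.append(s)
    for each in gr.neighbors(s):
        if each not in path:
            DFS2(gr, each, path)

def DFS(gr, s):                 # the wrapper with the desired signature
    path = []
    DFS2(gr, s, path)
    return path

g = Graph({"s": ["a", "b"], "a": ["b"], "b": []})
print(DFS(g, "s"))  # ['s', 'a', 'b']
```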
|
Actually, why don't you just give `path` a default value? Note that a mutable default like `path=[]` is shared across calls, so a `None` sentinel is safer (you'll also want to `return path` at the end so `path = DFS(mygraph, "s")` works).
So, using your same code but with slightly different arguments:
```
# Original
def DFS(gr, s, path):

# Modified
def DFS(gr, s, path=None):
    if path is None:
        path = []

# From here you can do
DFS(gr, s)
```
|
13,700,045
|
I'm trying to build a graph library in python (along with standard graph-algorithms). I've tried to implement DFS and this is what it looks like
```
def DFS(gr, s, path):
    """ Depth first search
    Returns a list of nodes "findable" from s """
    if s in path:
        return False
    path.append(s)
    for each in gr.neighbors(s):
        if each not in path:
            DFS(gr, each, path)
```
This is working fine but I'm not happy with how it needs to be used. E.g. currently you need to do this
```
path = []
DFS(mygraph, "s", path)
print path
```
Instead, I want DFS to be used in this manner:
```
path = DFS(mygraph, "s")
print path
```
With the recursive DFS, I am unable to come up with an implementation that works like the above. Can someone give me some pointers on how I can achieve this?
|
2012/12/04
|
[
"https://Stackoverflow.com/questions/13700045",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/427069/"
] |
Just make a wrapper method that calls the one you already have:
```
def DFS(gr, s):
    path = []
    DFS2(gr, s, path)
    return path
```
Here `DFS2` is the method you showed above.
|
You can use an empty default value for the visited nodes, as suggested by chutsu, but be careful with using [mutable default arguments](https://stackoverflow.com/questions/1132941/least-astonishment-and-the-mutable-default-argument/52572954#52572954).
Also I would suggest using a set instead of a list for constant lookup.
```
def DFS(gr, s, visited=None):
    """ Depth first search
    Returns a set of nodes "findable" from s """
    if visited is None:
        visited = set([s])
    for each in gr.neighbors(s):
        if each not in visited:
            visited.add(each)
            DFS(gr, each, visited)
    return visited
```
|
12,127,869
|
I'm trying to build a package from source by executing `python setup.py py2exe`
This is the section of code from setup.py that I suppose is relevant:
```
if sys.platform == "win32": # For py2exe.
    import matplotlib
    sys.path.append("C:\\Program Files\\Microsoft Visual Studio 9.0\\VC\\redist\\x86\\Microsoft.VC90.CRT")
    base_path = ""
    data_files = [("Microsoft.VC90.CRT", glob.glob(r"C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT\*.*")),
```
Error it shows:
```
*** finding dlls needed ***
error: MSVCP90.dll: No such file or directory
```
**But I've installed "Microsoft Visual C++ 2008 Redistributable Package".** I'm running 32-bit Python on 64-bit Windows 8, and I'm trying to build 32-bit binaries.
Also there is no folder like this: "C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\". This is what my computer contains:

**EDIT:**
On searching for `msvcp90.dll` on my C:\ drive I found that they are installed in weird paths like this:

|
2012/08/26
|
[
"https://Stackoverflow.com/questions/12127869",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193653/"
] |
I would recommend ignoring the dependency outright. Add `MSVCP90.dll` to the list of `dll_excludes` given as an option to `py2exe`. Users will have to install the Microsoft Visual C++ 2008 redistributable. An example:
```
setup(
    options = {
        "py2exe": {
            ...
            "dll_excludes": ["MSVCP90.dll", "HID.DLL", "w9xpopen.exe"],
            ...
        }
    },
    console = [{'script': 'program.py'}]
)
```
|
(new answer, since the other answer describes an alternate solution)
You can take the files from the WinSxS directory and copy them to the `C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT` directory (normally created by Visual Studio, which you don't have). Copy them to get the following structure:
```
+-Microsoft.VC90.CRT
| |
| +-Microsoft.VC90.CRT.manifest
| +-msvcm90.dll
| +-msvcp90.dll
| +-msvcr90.dll
```
Then, you should be able to run the setup program (still excluding `msvcp90.dll`, as in the other answer), and it should successfully find the files under `Microsoft.VC90.CRT` and copy them as data files to your bundle.
See [the py2exe tutorial](http://www.py2exe.org/index.cgi/Tutorial#Step522) for more information.
|
12,127,869
|
I'm trying to build a package from source by executing `python setup.py py2exe`
This is the section of code from setup.py that I suppose is relevant:
```
if sys.platform == "win32": # For py2exe.
    import matplotlib
    sys.path.append("C:\\Program Files\\Microsoft Visual Studio 9.0\\VC\\redist\\x86\\Microsoft.VC90.CRT")
    base_path = ""
    data_files = [("Microsoft.VC90.CRT", glob.glob(r"C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT\*.*")),
```
Error it shows:
```
*** finding dlls needed ***
error: MSVCP90.dll: No such file or directory
```
**But I've installed "Microsoft Visual C++ 2008 Redistributable Package".** I'm running 32-bit Python on 64-bit Windows 8, and I'm trying to build 32-bit binaries.
Also there is no folder like this: "C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\". This is what my computer contains:

**EDIT:**
On searching for `msvcp90.dll` on my C:\ drive I found that they are installed in weird paths like this:

|
2012/08/26
|
[
"https://Stackoverflow.com/questions/12127869",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193653/"
] |
I would recommend ignoring the dependency outright. Add `MSVCP90.dll` to the list of `dll_excludes` given as an option to `py2exe`. Users will have to install the Microsoft Visual C++ 2008 redistributable. An example:
```
setup(
    options = {
        "py2exe": {
            ...
            "dll_excludes": ["MSVCP90.dll", "HID.DLL", "w9xpopen.exe"],
            ...
        }
    },
    console = [{'script': 'program.py'}]
)
```
|
I think it has something to do with the spaces in the directory. You should try using `.rstrip()`. For example, put this:
```
directory='C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT'
directory=directory.rstrip()
```
You can then use the variable `directory` as you would have used the actual path.
This should make Python able to recognize the directory where it couldn't decipher it before.
|
12,127,869
|
I'm trying to build a package from source by executing `python setup.py py2exe`
This is the section of code from setup.py that I suppose is relevant:
```
if sys.platform == "win32": # For py2exe.
    import matplotlib
    sys.path.append("C:\\Program Files\\Microsoft Visual Studio 9.0\\VC\\redist\\x86\\Microsoft.VC90.CRT")
    base_path = ""
    data_files = [("Microsoft.VC90.CRT", glob.glob(r"C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT\*.*")),
```
Error it shows:
```
*** finding dlls needed ***
error: MSVCP90.dll: No such file or directory
```
**But I've installed "Microsoft Visual C++ 2008 Redistributable Package".** I'm running 32-bit Python on 64-bit Windows 8, and I'm trying to build 32-bit binaries.
Also there is no folder like this: "C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\". This is what my computer contains:

**EDIT:**
On searching for `msvcp90.dll` on my C:\ drive I found that they are installed in weird paths like this:

|
2012/08/26
|
[
"https://Stackoverflow.com/questions/12127869",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193653/"
] |
I would recommend ignoring the dependency outright. Add `MSVCP90.dll` to the list of `dll_excludes` given as an option to `py2exe`. Users will have to install the Microsoft Visual C++ 2008 redistributable. An example:
```
setup(
    options = {
        "py2exe": {
            ...
            "dll_excludes": ["MSVCP90.dll", "HID.DLL", "w9xpopen.exe"],
            ...
        }
    },
    console = [{'script': 'program.py'}]
)
```
|
I used to have a huge number of problems with compilation on Windows, like the issue you're facing, as well as with installing packages like Cython via `pip install cython`.
The solution that worked best for me after two weeks of pain was downloading and running the unofficial MinGW GCC binary for Windows provided [here](http://www.develer.com/oss/GccWinBinaries). You might want to try giving that a shot and seeing if it helps.
If you do try it, you might want to uninstall MinGW first if you have it already. I don't know if that's strictly necessary, but I always did it just in case. I did have it installed side-by-side with Cygwin without any problems.
|
12,127,869
|
I'm trying to build a package from source by executing `python setup.py py2exe`
This is the section of code from setup.py that I suppose is relevant:
```
if sys.platform == "win32": # For py2exe.
    import matplotlib
    sys.path.append("C:\\Program Files\\Microsoft Visual Studio 9.0\\VC\\redist\\x86\\Microsoft.VC90.CRT")
    base_path = ""
    data_files = [("Microsoft.VC90.CRT", glob.glob(r"C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT\*.*")),
```
Error it shows:
```
*** finding dlls needed ***
error: MSVCP90.dll: No such file or directory
```
**But I've installed "Microsoft Visual C++ 2008 Redistributable Package".** I'm running 32-bit Python on 64-bit Windows 8, and I'm trying to build 32-bit binaries.
Also there is no folder like this: "C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\". This is what my computer contains:

**EDIT:**
On searching for `msvcp90.dll` on my C:\ drive I found that they are installed in weird paths like this:

|
2012/08/26
|
[
"https://Stackoverflow.com/questions/12127869",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193653/"
] |
(new answer, since the other answer describes an alternate solution)
You can take the files from the WinSxS directory and copy them to the `C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT` directory (normally created by Visual Studio, which you don't have). Copy them to get the following structure:
```
+-Microsoft.VC90.CRT
| |
| +-Microsoft.VC90.CRT.manifest
| +-msvcm90.dll
| +-msvcp90.dll
| +-msvcr90.dll
```
Then, you should be able to run the setup program (still excluding `msvcp90.dll`, as in the other answer), and it should successfully find the files under `Microsoft.VC90.CRT` and copy them as data files to your bundle.
See [the py2exe tutorial](http://www.py2exe.org/index.cgi/Tutorial#Step522) for more information.
|
I think it has something to do with the spaces in the directory. You should try using `.rstrip()`. For example, put this:
```
directory='C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT'
directory=directory.rstrip()
```
You can then use the variable `directory` as you would have used the actual path.
This should make Python able to recognize the directory where it couldn't decipher it before.
|
12,127,869
|
I'm trying to build a package from source by executing `python setup.py py2exe`
This is the section of code from setup.py that I suppose is relevant:
```
if sys.platform == "win32": # For py2exe.
    import matplotlib
    sys.path.append("C:\\Program Files\\Microsoft Visual Studio 9.0\\VC\\redist\\x86\\Microsoft.VC90.CRT")
    base_path = ""
    data_files = [("Microsoft.VC90.CRT", glob.glob(r"C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT\*.*")),
```
Error it shows:
```
*** finding dlls needed ***
error: MSVCP90.dll: No such file or directory
```
**But I've installed "Microsoft Visual C++ 2008 Redistributable Package".** I'm running 32-bit Python on 64-bit Windows 8, and I'm trying to build 32-bit binaries.
Also there is no folder like this: "C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\". This is what my computer contains:

**EDIT:**
On searching for `msvcp90.dll` on my C:\ drive I found that they are installed in weird paths like this:

|
2012/08/26
|
[
"https://Stackoverflow.com/questions/12127869",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/193653/"
] |
(new answer, since the other answer describes an alternate solution)
You can take the files from the WinSxS directory and copy them to the `C:\Program Files\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT` directory (normally created by Visual Studio, which you don't have). Copy them to get the following structure:
```
+-Microsoft.VC90.CRT
| |
| +-Microsoft.VC90.CRT.manifest
| +-msvcm90.dll
| +-msvcp90.dll
| +-msvcr90.dll
```
Then, you should be able to run the setup program (still excluding `msvcp90.dll`, as in the other answer), and it should successfully find the files under `Microsoft.VC90.CRT` and copy them as data files to your bundle.
See [the py2exe tutorial](http://www.py2exe.org/index.cgi/Tutorial#Step522) for more information.
|
I used to have a huge number of problems with compilation on Windows, like the issue you're facing, as well as with installing packages like Cython via `pip install cython`.
The solution that worked best for me after two weeks of pain was downloading and running the unofficial MinGW GCC binary for Windows provided [here](http://www.develer.com/oss/GccWinBinaries). You might want to try giving that a shot and seeing if it helps.
If you do try it, you might want to uninstall MinGW first if you have it already. I don't know if that's strictly necessary, but I always did it just in case. I did have it installed side-by-side with Cygwin without any problems.
|
48,716,989
|
I'm trying to create an infinite loop that will output the Y axis of a sine wave, and I want to use variables specifying the amplitude of the wave, the frequency, and the resolution, where frequency is the number of full sine waves per second, like electrical AC frequency.
I'm trying to do something like this:
```
#!/usr/bin/python
from time import sleep
from math import sin

amplitude = 100
frequency = 0.01
resolution = 0.01

while True:
    y = <Sine wave math>
    print str(y)
    sleep(resolution)
I need help with the math and getting the resolution part right.
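For reference, the usual formula is `y = A * sin(2*pi*f*t)`; a minimal sketch of how the placeholder could be filled in (using a finite sample count instead of the infinite loop, so it terminates):

```python
import math

def sine_samples(amplitude, frequency, resolution, n):
    """First n samples of A*sin(2*pi*f*t), stepping t by `resolution` seconds."""
    return [amplitude * math.sin(2 * math.pi * frequency * i * resolution)
            for i in range(n)]

# A 0.25 Hz wave sampled once per second passes through 0, 1, 0, -1, 0, ...
print([round(y, 6) for y in sine_samples(1, 0.25, 1.0, 5)])
```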
|
2018/02/10
|
[
"https://Stackoverflow.com/questions/48716989",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6058228/"
] |
How about something like this:
```
DELETE FROM inventory
WHERE updated NOT IN (
SELECT updated FROM (
SELECT MAX(updated) updated
FROM inventory
GROUP BY DATE(updated)
) i
)
```
This would work well if you have the `updated` column indexed (ordered).
Basically, the subquery gets all the max updated dates per day and excludes them (`NOT IN`) from the `DELETE` statement.
|
Get the most recent time in a subquery, join that with the table, and delete.
```
DELETE i1
FROM inventory AS i1
JOIN (SELECT DATE(updated) AS date, MAX(updated) AS latest
FROM inventory
WHERE itemname = '24T7351'
GROUP BY date) AS i2 ON DATE(i1.updated) = i2.date AND i1.updated != i2.latest
WHERE itemname = '24T7351'
```
|
48,716,989
|
I'm trying to create an infinite loop that will output the Y axis of a sine wave, and I want to use variables specifying the amplitude of the wave, the frequency, and the resolution, where frequency is the number of full sine waves per second, like electrical AC frequency.
I'm trying to do something like this:
```
#!/usr/bin/python
from time import sleep
from math import sin

amplitude = 100
frequency = 0.01
resolution = 0.01

while True:
    y = <Sine wave math>
    print str(y)
    sleep(resolution)
I need help with the math and getting the resolution part right.
|
2018/02/10
|
[
"https://Stackoverflow.com/questions/48716989",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6058228/"
] |
How about something like this:
```
DELETE FROM inventory
WHERE updated NOT IN (
SELECT updated FROM (
SELECT MAX(updated) updated
FROM inventory
GROUP BY DATE(updated)
) i
)
```
This would work well if you have the `updated` column indexed (ordered).
Basically, the subquery gets all the max updated dates per day and excludes them (`NOT IN`) from the `DELETE` statement.
|
If you want the latest record on each date for each item:
```
SELECT i.*
FROM inventory i
WHERE i.updated < (select max(i2.updated)
from inventory i2
where i2.itemname = i.itemname and
date(i2.updated) = date(i.updated)
)
ORDER BY updated DESC;
```
|
48,716,989
|
I'm trying to create an infinite loop that will output the Y axis of a sine wave, and I want to use variables specifying the amplitude of the wave, the frequency, and the resolution, where frequency is the number of full sine waves per second, like electrical AC frequency.
I'm trying to do something like this:
```
#!/usr/bin/python
from time import sleep
from math import sin

amplitude = 100
frequency = 0.01
resolution = 0.01

while True:
    y = <Sine wave math>
    print str(y)
    sleep(resolution)
I need help with the math and getting the resolution part right.
|
2018/02/10
|
[
"https://Stackoverflow.com/questions/48716989",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6058228/"
] |
How about something like this:
```
DELETE FROM inventory
WHERE updated NOT IN (
SELECT updated FROM (
SELECT MAX(updated) updated
FROM inventory
GROUP BY DATE(updated)
) i
)
```
This would work well if you have the `updated` column indexed (ordered).
Basically, the subquery gets all the max updated dates per day and excludes them (`NOT IN`) from the `DELETE` statement.
|
This solution assigns a rank to each of the rows; you can then delete based on rank. The latest record has rank 1.
```
SELECT
ItemName,
BLOCK_1_PRICE,
M,
S,
B,
P,
updated,
if (ItemName=@curItem,@curRank:= @curRank + 1, @curRank:=@reset) AS rank,
@curItem:=ItemName,
@curRank,
@reset,
@curItem
FROM
inventory, (select @curRank:=1, @curItem:='NA', @reset:= 1) as r
ORDER BY
ItemName,updated DESC
```
If you want all records apart from the latest record, use the query below. I would suggest creating a new table with the latest records rather than deleting from the old table if the inventory table is large. Link to SQLFiddle (<http://sqlfiddle.com/#!9/fb0a3d/20>)
```
select * from
(
SELECT
ItemName,
BLOCK_1_PRICE,
M,
S,
B,
P,
updated,
if (ItemName=@curItem,@curRank:= @curRank + 1, @curRank:=@reset) AS rank,
@curItem:=ItemName,
@curRank,
@reset,
@curItem
FROM
inventory, (select @curRank:=1, @curItem:='NA', @reset:= 1) as r
ORDER BY
ItemName,updated DESC
) as t
where rank > 1
```
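For intuition, the same "rank > 1" selection can be reproduced in plain Python, which is easier to reason about than MySQL user variables (sample rows are made up; field names follow the question):

```python
from itertools import groupby
from operator import itemgetter

rows = [
    {"ItemName": "a", "updated": "2020-01-01 09:00"},
    {"ItemName": "a", "updated": "2020-01-01 17:00"},
    {"ItemName": "b", "updated": "2020-01-02 08:00"},
]

# Sort by item then timestamp; within each item, everything except the
# last (latest) row corresponds to rank > 1 in the MySQL query.
rows.sort(key=itemgetter("ItemName", "updated"))
stale = []
for _, group in groupby(rows, key=itemgetter("ItemName")):
    group = list(group)
    stale.extend(group[:-1])
print(stale)
```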
|
62,892,652
|
I have a class that gets the data from the form, makes some changes and saves it to the database.
I want to have several methods inside.
* Get
* Post
* And some other method that will make some changes to the data from the form
I want the post method to save the data from the form to the database and pass the instance variable to the next method. The next method should make some changes, save them to the database and return a redirect.
But I get an error: `'Site' object has no attribute 'get'`
Here is my code:
```
class AddSiteView(View):
form_class = AddSiteForm
template_name = 'home.html'
def get(self, request, *args, **kwargs):
form = self.form_class()
return render(request, self.template_name, { 'form': form })
def post(self, request, *args, **kwargs):
form = self.form_class(request.POST)
if form.is_valid():
# Get website url from form
site_url = request.POST.get('url')
# Check if the site is in DB or it's a new site
try:
site_id = Site.objects.get(url=site_url)
except ObjectDoesNotExist:
site_instanse = form.save()
else:
site_instanse = site_id
return site_instanse
return render(request, self.template_name, { 'form': form })
def get_robots_link(self, *args, **kwargs):
# Set veriable to the Robot Model
robots = Robot.objects.get(site=site_instanse)
# Robobts Link
robots_url = Robots(site_url).get_url()
robots.url = robots_url
robots.save()
return redirect('checks:robots', robots.id, )
```
I need to pass site\_instanse from def post to def get\_robots\_link
Here is the traceback:
```
Internal Server Error: /add/
Traceback (most recent call last):
File "/home/atom/.local/lib/python3.8/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/home/atom/.local/lib/python3.8/site-packages/django/utils/deprecation.py", line 96, in __call__
response = self.process_response(request, response)
File "/home/atom/.local/lib/python3.8/site-packages/django/middleware/clickjacking.py", line 26, in process_response
if response.get('X-Frame-Options') is not None:
AttributeError: 'Site' object has no attribute 'get'
[14/Jul/2020 10:36:27] "POST /add/ HTTP/1.1" 500 61371
```
**Here is the place where the problem is:**
If I use redirect inside the post method, everything works fine, like so:
```
class AddSiteView(View):
form_class = AddSiteForm
template_name = 'home.html'
def get(self, request, *args, **kwargs):
form = self.form_class()
return render(request, self.template_name, { 'form': form })
def post(self, request, *args, **kwargs):
form = self.form_class(request.POST)
if form.is_valid():
# Get website url from form
site_url = request.POST.get('url')
# Check if the site is in DB or it's a new site
try:
site_id = Site.objects.get(url=site_url)
except ObjectDoesNotExist:
site_instanse = form.save()
else:
site_instanse = site_id
# Set veriable to the Robot Model
robots = Robot.objects.get(site=site_instanse)
# Robobts Link
robots_url = Robots(site_url).get_url()
robots.url = robots_url
robots.save()
return redirect('checks:robots', robots.id, ) ## HERE
return render(request, self.template_name, { 'form': form })
```
But if I remove the line `return redirect('checks:robots', robots.id, )` from the post method, put `return self.site_instance` there, and add `def get_robots_link`, it gives the error: `'Site' object has no attribute 'get'`
|
2020/07/14
|
[
"https://Stackoverflow.com/questions/62892652",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13197641/"
] |
```
class AddSiteView(View):
form_class = AddSiteForm
template_name = 'home.html'
def get(self, request, *args, **kwargs):
form = self.form_class()
return render(request, self.template_name, { 'form': form })
def post(self, request, *args, **kwargs):
form = self.form_class(request.POST)
if form.is_valid():
# Get website url from form
site_url = request.POST.get('url')
# Check if the site is in DB or it's a new site
try:
site_id = Site.objects.get(url=site_url)
except ObjectDoesNotExist:
site_instanse = form.save()
else:
site_instanse = site_id
self.site_instance = site_instanse # stash on the instance
return self.get_robots_link() # return the redirect, not the Site object
return render(request, self.template_name, { 'form': form })
def get_robots_link(self, *args, **kwargs):
# Get the Robot row for the stored site
robots = Robot.objects.get(site=self.site_instance)
# Robots link (site_url is not in scope here; use the stored site's url)
robots_url = Robots(self.site_instance.url).get_url()
robots.url = robots_url
robots.save()
return redirect('checks:robots', robots.id, )
```
Use `self` to store the data on the object: attributes set in `post` are readable from any later method call on the same instance.
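The pattern in miniature, outside Django (class and attribute names are illustrative): attributes set on `self` in one method are visible to later calls on the same instance.

```python
class Pipeline:
    def post(self, value):
        self.site_instance = value      # stash the intermediate result
        return self.get_robots_link()   # hand off to the next step

    def get_robots_link(self):
        # reads the attribute that post() set on this same instance
        return "robots link for " + self.site_instance

print(Pipeline().post("example.com"))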
|
You must return an HTTP response from the `post` method. This code instead returns a `Site` instance on the lines below. Whatever the intended behavior, either a `redirect` or a `render` should be returned.
```
try:
site_id = Site.objects.get(url=site_url)
except ObjectDoesNotExist:
site_instanse = form.save()
else:
site_instanse = site_id
return site_instanse
```
|
36,682,832
|
Exactly how should python models be exported for use in c++?
I'm trying to do something similar to this tutorial:
<https://www.tensorflow.org/versions/r0.8/tutorials/image_recognition/index.html>
I'm trying to import my own TF model in the C++ API instead of the Inception one. I adjusted the input size and the paths, but strange errors keep popping up. I spent all day reading Stack Overflow and other forums, but to no avail.
I've tried two methods for exporting the graph.
Method 1: metagraph.
```
...loading inputs, setting up the model, etc....
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
for i in range(num_steps):
x_batch, y_batch = batch(50)
if i%10 == 0:
train_accuracy = accuracy.eval(feed_dict={
x:x_batch, y_: y_batch, keep_prob: 1.0})
print("step %d, training accuracy %g"%(i, train_accuracy))
train_step.run(feed_dict={x: x_batch, y_: y_batch, keep_prob: 0.5})
print("test accuracy %g"%accuracy.eval(feed_dict={
x: features_test, y_: labels_test, keep_prob: 1.0}))
saver = tf.train.Saver(tf.all_variables())
checkpoint =
'/home/sander/tensorflow/tensorflow/examples/cat_face/data/model.ckpt'
saver.save(sess, checkpoint)
tf.train.export_meta_graph(filename=
'/home/sander/tensorflow/tensorflow/examples/cat_face/data/cat_graph.pb',
meta_info_def=None,
graph_def=sess.graph_def,
saver_def=saver.restore(sess, checkpoint),
collection_list=None, as_text=False)
```
Method 1 yields the following error when trying to run the program:
```
[libprotobuf ERROR
google/protobuf/src/google/protobuf/wire_format_lite.cc:532] String field
'tensorflow.NodeDef.op' contains invalid UTF-8 data when parsing a protocol
buffer. Use the 'bytes' type if you intend to send raw bytes.
E tensorflow/examples/cat_face/main.cc:281] Not found: Failed to load
compute graph at 'tensorflow/examples/cat_face/data/cat_graph.pb'
```
I also tried another method of exporting the graph:
Method 2: write\_graph:
```
tf.train.write_graph(sess.graph_def,
'/home/sander/tensorflow/tensorflow/examples/cat_face/data/',
'cat_graph.pb', as_text=False)
```
This version actually seems to load something, but I get an error about variables not being initialized:
```
Running model failed: Failed precondition: Attempting to use uninitialized
value weight1
[[Node: weight1/read = Identity[T=DT_FLOAT, _class=["loc:@weight1"],
_device="/job:localhost/replica:0/task:0/cpu:0"](weight1)]]
```
|
2016/04/17
|
[
"https://Stackoverflow.com/questions/36682832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5452997/"
] |
First, you need to write the graph definition to a file using the following command:
```
with tf.Session() as sess:
# Build network here
tf.train.write_graph(sess.graph.as_graph_def(), "C:\\output\\", "mymodel.pb")
```
Then, save your model weights using a saver:
```
saver = tf.train.Saver(tf.global_variables())
saver.save(sess, "C:\\output\\mymodel.ckpt")
```
You will then have two files in your output directory: mymodel.ckpt and mymodel.pb.
Download freeze\_graph.py from [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py) and run the following command in C:\output\. Change the output node name if it is different for you.
>
> python freeze\_graph.py --input\_graph mymodel.pb --input\_checkpoint mymodel.ckpt --output\_node\_names softmax/Reshape\_1 --output\_graph mymodelforc.pb
>
>
>
You can use mymodelforc.pb directly from C++.
You can use the following C++ code to load the proto file:
```
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/cc/ops/image_ops.h"
Session* session;
NewSession(SessionOptions(), &session);
GraphDef graph_def;
ReadBinaryProto(Env::Default(), "C:\\output\\mymodelforc.pb", &graph_def);
session->Create(graph_def);
```
Now you can use the session for inference.
You can feed the inference inputs as follows:
```
// Same dimension and type as input of your network
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({ 1, height, width, channel }));
std::vector<tensorflow::Tensor> finalOutput;
// Fill input tensor with your input data
std::string InputName = "input"; // Your input placeholder's name
std::string OutputName = "softmax/Reshape_1"; // Your output placeholder's name
session->Run({ { InputName, input_tensor } }, { OutputName }, {}, &finalOutput);
// finalOutput will contain the inference output that you search for
```
|
You can try this (modify the name of the output layer):
```
import os
import tensorflow as tf
from tensorflow.python.framework import graph_util
def load_graph_def(model_path, sess=None):
sess = sess if sess is not None else tf.get_default_session()
saver = tf.train.import_meta_graph(model_path + '.meta')
saver.restore(sess, model_path)
def freeze_graph(sess, output_layer_name, output_graph):
graph = tf.get_default_graph()
input_graph_def = graph.as_graph_def()
# Exporting the graph
print("Exporting graph...")
output_graph_def = graph_util.convert_variables_to_constants(
sess,
input_graph_def,
output_layer_name.split(","))
with tf.gfile.GFile(output_graph, "wb") as f:
f.write(output_graph_def.SerializeToString())
def freeze_from_checkpoint(checkpoint_file, output_layer_name):
model_folder = os.path.basename(checkpoint_file)
output_graph = os.path.join(model_folder, checkpoint_file + '.pb')
with tf.Session() as sess:
load_graph_def(checkpoint_file)
freeze_graph(sess, output_layer_name, output_graph)
if __name__ == '__main__':
freeze_from_checkpoint(
checkpoint_file='/home/sander/tensorflow/tensorflow/examples/cat_face/data/model.ckpt',
output_layer_name='???')
```
|
36,682,832
|
Exactly how should python models be exported for use in c++?
I'm trying to do something similar to this tutorial:
<https://www.tensorflow.org/versions/r0.8/tutorials/image_recognition/index.html>
I'm trying to import my own TF model in the C++ API instead of the Inception one. I adjusted the input size and the paths, but strange errors keep popping up. I spent all day reading Stack Overflow and other forums, but to no avail.
I've tried two methods for exporting the graph.
Method 1: metagraph.
```
...loading inputs, setting up the model, etc....
sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())
for i in range(num_steps):
x_batch, y_batch = batch(50)
if i%10 == 0:
train_accuracy = accuracy.eval(feed_dict={
x:x_batch, y_: y_batch, keep_prob: 1.0})
print("step %d, training accuracy %g"%(i, train_accuracy))
train_step.run(feed_dict={x: x_batch, y_: y_batch, keep_prob: 0.5})
print("test accuracy %g"%accuracy.eval(feed_dict={
x: features_test, y_: labels_test, keep_prob: 1.0}))
saver = tf.train.Saver(tf.all_variables())
checkpoint =
'/home/sander/tensorflow/tensorflow/examples/cat_face/data/model.ckpt'
saver.save(sess, checkpoint)
tf.train.export_meta_graph(filename=
'/home/sander/tensorflow/tensorflow/examples/cat_face/data/cat_graph.pb',
meta_info_def=None,
graph_def=sess.graph_def,
saver_def=saver.restore(sess, checkpoint),
collection_list=None, as_text=False)
```
Method 1 yields the following error when trying to run the program:
```
[libprotobuf ERROR
google/protobuf/src/google/protobuf/wire_format_lite.cc:532] String field
'tensorflow.NodeDef.op' contains invalid UTF-8 data when parsing a protocol
buffer. Use the 'bytes' type if you intend to send raw bytes.
E tensorflow/examples/cat_face/main.cc:281] Not found: Failed to load
compute graph at 'tensorflow/examples/cat_face/data/cat_graph.pb'
```
I also tried another method of exporting the graph:
Method 2: write\_graph:
```
tf.train.write_graph(sess.graph_def,
'/home/sander/tensorflow/tensorflow/examples/cat_face/data/',
'cat_graph.pb', as_text=False)
```
This version actually seems to load something, but I get an error about variables not being initialized:
```
Running model failed: Failed precondition: Attempting to use uninitialized
value weight1
[[Node: weight1/read = Identity[T=DT_FLOAT, _class=["loc:@weight1"],
_device="/job:localhost/replica:0/task:0/cpu:0"](weight1)]]
```
|
2016/04/17
|
[
"https://Stackoverflow.com/questions/36682832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5452997/"
] |
First, you need to write the graph definition to a file using the following command:
```
with tf.Session() as sess:
# Build network here
tf.train.write_graph(sess.graph.as_graph_def(), "C:\\output\\", "mymodel.pb")
```
Then, save your model weights using a saver:
```
saver = tf.train.Saver(tf.global_variables())
saver.save(sess, "C:\\output\\mymodel.ckpt")
```
You will then have two files in your output directory: mymodel.ckpt and mymodel.pb.
Download freeze\_graph.py from [here](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py) and run the following command in C:\output\. Change the output node name if it is different for you.
>
> python freeze\_graph.py --input\_graph mymodel.pb --input\_checkpoint mymodel.ckpt --output\_node\_names softmax/Reshape\_1 --output\_graph mymodelforc.pb
>
>
>
You can use mymodelforc.pb directly from C++.
You can use the following C++ code to load the proto file:
```
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/cc/ops/image_ops.h"
Session* session;
NewSession(SessionOptions(), &session);
GraphDef graph_def;
ReadBinaryProto(Env::Default(), "C:\\output\\mymodelforc.pb", &graph_def);
session->Create(graph_def);
```
Now you can use the session for inference.
You can feed the inference inputs as follows:
```
// Same dimension and type as input of your network
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({ 1, height, width, channel }));
std::vector<tensorflow::Tensor> finalOutput;
// Fill input tensor with your input data
std::string InputName = "input"; // Your input placeholder's name
std::string OutputName = "softmax/Reshape_1"; // Your output placeholder's name
session->Run({ { InputName, input_tensor } }, { OutputName }, {}, &finalOutput);
// finalOutput will contain the inference output that you search for
```
|
You can find very useful the [DNN](https://docs.opencv.org/master/d6/d0f/group__dnn.html) module of OpenCV.
It makes it simple to load and use pretrained models developed with Tensorflow (and other frameworks).
It can be used in a C++ program.
[Here](https://www.pyimagesearch.com/2017/08/21/deep-learning-with-opencv/) is a tutorial.
|
61,578,697
|
I am trying to wrap my head around python. Basically I am trying to remove a duplicate string (a date to be more precise) in some data. So for example:
```
2019-03-31
2019-06-30
2019-09-30
2019-12-31
2020-03-31
2020-03-31
```
notice 2020-03-31 is duplicated. I would like to find the duplicated date and rename it as `last quarter`
Conversely, if the data has no duplicates I want to leave it as is.
I currently have some working code that checks to see if there are duplicates. And that is it. I just need some guidance in the right direction as to how to rename the duplicate.
```
def checkForDuplicates(listOfElems):
if len(listOfElems) == len(set(listOfElems)):
return
else:
return print("You have a duplicate")
```
|
2020/05/03
|
[
"https://Stackoverflow.com/questions/61578697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13461709/"
] |
Use a [`set`](https://docs.python.org/3/library/stdtypes.html#set) to keep track of items you've seen. If any items are already in the set, append the desired string (use `= "last quarter"` if you want a full rename; it's unclear).
```
data = """2019-03-31
2019-06-30
2019-09-30
2019-12-31
2020-03-31
2020-03-31""".split("\n")
seen = set()
for i, e in enumerate(data):
if e in seen:
data[i] += " last quarter"
seen.add(e)
print(data)
```
Output:
```
['2019-03-31', '2019-06-30', '2019-09-30',
'2019-12-31', '2020-03-31', '2020-03-31 last quarter']
```
|
For your function, if you have list of elements then your function will be :
```
def checkForDuplicates(listOfElems):
check=[]
for i in range(len(listOfElems)):
if listOfElems[i] in check:
# it is a duplicate, so rename it
listOfElems[i]='last quarter'
else:
check.append(listOfElems[i])
return listOfElems
```
This will replace all duplicate entries with `last quarter`.
|
61,578,697
|
I am trying to wrap my head around python. Basically I am trying to remove a duplicate string (a date to be more precise) in some data. So for example:
```
2019-03-31
2019-06-30
2019-09-30
2019-12-31
2020-03-31
2020-03-31
```
notice 2020-03-31 is duplicated. I would like to find the duplicated date and rename it as `last quarter`
Conversely, if the data has no duplicates I want to leave it as is.
I currently have some working code that checks to see if there are duplicates. And that is it. I just need some guidance in the right direction as to how to rename the duplicate.
```
def checkForDuplicates(listOfElems):
if len(listOfElems) == len(set(listOfElems)):
return
else:
return print("You have a duplicate")
```
|
2020/05/03
|
[
"https://Stackoverflow.com/questions/61578697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13461709/"
] |
Use a [`set`](https://docs.python.org/3/library/stdtypes.html#set) to keep track of items you've seen. If any items are already in the set, append the desired string (use `= "last quarter"` if you want a full rename; it's unclear).
```
data = """2019-03-31
2019-06-30
2019-09-30
2019-12-31
2020-03-31
2020-03-31""".split("\n")
seen = set()
for i, e in enumerate(data):
if e in seen:
data[i] += " last quarter"
seen.add(e)
print(data)
```
Output:
```
['2019-03-31', '2019-06-30', '2019-09-30',
'2019-12-31', '2020-03-31', '2020-03-31 last quarter']
```
|
If your list is sorted, you can try this.
```
def checkForDuplicates(listOfElems):
for i in range(len(listOfElems)-1):
if listOfElems[i]==listOfElems[i+1]:
listOfElems[i+1] = "last quarter"
return listOfElems
```
|
61,578,697
|
I am trying to wrap my head around python. Basically I am trying to remove a duplicate string (a date to be more precise) in some data. So for example:
```
2019-03-31
2019-06-30
2019-09-30
2019-12-31
2020-03-31
2020-03-31
```
notice 2020-03-31 is duplicated. I would like to find the duplicated date and rename it as `last quarter`
Conversely, if the data has no duplicates I want to leave it as is.
I currently have some working code that checks to see if there are duplicates. And that is it. I just need some guidance in the right direction as to how to rename the duplicate.
```
def checkForDuplicates(listOfElems):
if len(listOfElems) == len(set(listOfElems)):
return
else:
return print("You have a duplicate")
```
|
2020/05/03
|
[
"https://Stackoverflow.com/questions/61578697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13461709/"
] |
For your function, if you have list of elements then your function will be :
```
def checkForDuplicates(listOfElems):
check=[]
for i in range(len(listOfElems)):
if listOfElems[i] in check:
# it is a duplicate, so rename it
listOfElems[i]='last quarter'
else:
check.append(listOfElems[i])
return listOfElems
```
This will replace all duplicate entries with `last quarter`.
|
If your list is sorted, you can try this.
```
def checkForDuplicates(listOfElems):
for i in range(len(listOfElems)-1):
if listOfElems[i]==listOfElems[i+1]:
listOfElems[i+1] = "last quarter"
return listOfElems
```
|
44,610,150
|
I downloaded Python 3.6 from Python's website (from the download page for Windows) and it seems only the interpreter is available. I don't see anything else (Standard Library or something) in my system. Is it included in the interpreter and hidden or something?
I tried to install ibm\_db 2.0.7 as an extension of Python DB API, but the installation instructions seem too old. The paths defined don't exist in my Win10.
By the way, I installed the latest Platform SDK as instructed (which is the predecessor of the Windows 10 SDK, so I had to install the Windows 10 SDK instead). I also installed .NET SDK v1.1, which is said to include Visual C++ 2003 (Visual C++ 2003 is not available on its own today). I considered installing VS2017, but it was too big (12-some GB), so I passed on that option.
I am stuck and can't proceed, since I don't know what has changed since the installation instructions were written or what else I need to do. How can I install Python with the ibm\_db package on Windows 10?
|
2017/06/17
|
[
"https://Stackoverflow.com/questions/44610150",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8176681/"
] |
Timeout is for raising a timeout error if an event isn't emitted within a certain time period. You probably want Observable.interval:
```
return Observable.interval(1000).mergeMap(t => this.http.get(matchUrl))
.toPromise()
.then(response => response.json().participants as Match[]);
```
If you want to serialize the requests so that each runs 1 second after the other, use `concatMap` instead.
```
return Observable.interval(1000).concatMap(t => this.http.get(matchUrl))
.toPromise()
.then(response => response.json().participants as Match[]);
```
|
Use `debounceTime` operator as below
```
getMatch(matchId: number): Promise<Match[]> {
let matchUrl: string = 'https://br1.api.riotgames.com/lol/match/v3/matches/'+ matchId +'?api_key=';
return this.http.get(matchUrl)
.debounceTime(1000)
.toPromise()
.then(response => response.json().participants as Match[]);
}
```
|
5,395,782
|
In a python/google app engine app, I've got a choice between storing some static data (couple KB in size) in a local json/xml file or putting it into the datastore and querying it from there. The data is created by me, so there's no issues with badly formed data. In specific terms such as saving quota, less resource usage and app speed, which is the better method for this case?
I'd guess using simplejson to read from a JSON file would be better, since this method doesn't require a datastore query while still being reasonably quick.
Taking this further, the app doesn't require a large data store (currently ~400KB), so would it be worthwhile moving all the data to a json file to get around the quota restrictions?
|
2011/03/22
|
[
"https://Stackoverflow.com/questions/5395782",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/614453/"
] |
If your data is small, static and infrequently changed, you'll get the best performance by just writing your data as a `dict` in its own module and importing it where you need it. This takes advantage of the fact that Python caches modules on import.
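A sketch of why this is cheap (the data module is faked in-process here; in a real app it would just be an `appdata.py` file deployed with the code, and that module name is made up): Python caches imported modules in `sys.modules`, so the dict is built only once per process.

```python
import sys
import types

# Stand-in for a real appdata.py shipped with the app (hypothetical name)
appdata = types.ModuleType("appdata")
appdata.PRICES = {"widget": 3, "gadget": 5}
sys.modules["appdata"] = appdata

import appdata as first   # first "import" ...
import appdata as second  # ... and every later one hit the sys.modules cache

print(first is second)    # same module object: loaded only once
print(second.PRICES["widget"])
```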
|
It is faster to keep your data in a static file instead of the datastore. As you said, this saves on datastore quota, and also saves time in round-trips to the datastore.
However, any **data you store in static files is static and cannot be changed by your application** (see the ["Sandbox" section here](http://code.google.com/appengine/docs/whatisgoogleappengine.html)). You would need to re-deploy the file each time you wanted to make changes (i.e. none of the changes could be made through a web interface). If this is acceptable, then use this method, since it's simple and fast.
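Reading such a static file at runtime is a couple of stdlib calls (the file path and contents below are illustrative, written to a temp directory so the sketch is self-contained):

```python
import json
import os
import tempfile

# Stand-in for a JSON file you would normally deploy alongside the app
path = os.path.join(tempfile.mkdtemp(), "static_data.json")
with open(path, "w") as f:
    json.dump({"categories": ["books", "music"]}, f)

# At request time, loading it is one call and touches no datastore quota
with open(path) as f:
    DATA = json.load(f)
print(DATA["categories"])
```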
|
5,395,782
|
In a python/google app engine app, I've got a choice between storing some static data (couple KB in size) in a local json/xml file or putting it into the datastore and querying it from there. The data is created by me, so there's no issues with badly formed data. In specific terms such as saving quota, less resource usage and app speed, which is the better method for this case?
I'd guess using simplejson to read from a JSON file would be better, since this method doesn't require a datastore query while still being reasonably quick.
Taking this further, the app doesn't require a large data store (currently ~400KB), so would it be worthwhile moving all the data to a json file to get around the quota restrictions?
|
2011/03/22
|
[
"https://Stackoverflow.com/questions/5395782",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/614453/"
] |
For superior app performance, as Chris and others pointed out, python dict is the best.
But if you are ok with the minimal performance hit caused by datastore queries, I think that is the way to go purely from a design and maintenance perspective. Simplicity takes precedence over performance if you are not approaching the quota limits.
I assume yours is a simple app as of today, but it can gradually grow in complexity. As you add more features, having hard-coded data somewhere will get in the way of design flexibility even in the short term, and you might end up rewriting those areas in the future.
|
It is faster to keep your data in a static file instead of the datastore. As you said, this saves on datastore quota, and also saves time in round-trips to the datastore.
However, any **data you store in static files is static and cannot be changed by your application** (see the ["Sandbox" section here](http://code.google.com/appengine/docs/whatisgoogleappengine.html)). You would need to re-deploy the file each time you wanted to make changes (i.e. none of the changes could be made through a web interface). If this is acceptable, then use this method, since it's simple and fast.
|
5,395,782
|
In a python/google app engine app, I've got a choice between storing some static data (couple KB in size) in a local json/xml file or putting it into the datastore and querying it from there. The data is created by me, so there's no issues with badly formed data. In specific terms such as saving quota, less resource usage and app speed, which is the better method for this case?
I'd guess using simplejson to read from a JSON file would be better, since this method doesn't require a datastore query while still being reasonably quick.
Taking this further, the app doesn't require a large data store (currently ~400KB), so would it be worthwhile moving all the data to a json file to get around the quota restrictions?
|
2011/03/22
|
[
"https://Stackoverflow.com/questions/5395782",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/614453/"
] |
If your data is small, static and infrequently changed, you'll get the best performance by just writing your data as a `dict` in its own module and importing it where you need it. This takes advantage of the fact that Python caches modules on import.
|
For superior app performance, as Chris and others pointed out, python dict is the best.
But if you are ok with the minimal performance hit caused by datastore queries, I think that is the way to go purely from a design and maintenance perspective. Simplicity takes precedence over performance if you are not approaching the quota limits.
I assume yours is a simple app as of today. But it can gradually grow in complexity. As you add more features, having a hard-coded data somewhere will come in the way of design flexibility even in the short term. And you might end up rewriting those areas in future.
|
32,977,076
|
Related: [ImportError: No module named bootstrap3 even while using virtualenv](https://stackoverflow.com/questions/29781872/importerror-no-module-named-bootstrap3-even-while-using-virtualenv)
Every time I attempt to use manage.py (startapp, shell, etc) or load my page (using Apache), I get the error below. I'm running Django 1.8 inside a virtual environment, and already installed django-bootstrap-toolkit (and tried with django-bootstrap as well although I'm not sure what the difference is). The instructions on github said to add 'bootstrap3' to INSTALLED\_APPS, which I did, and now get the following error:
```
...
File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named bootstrap3
```
My Versions:
```
Django 1.8
Python 2.7.5
django-bootstrap-toolkit 2.15.0
```
from settings.py:
```
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.admin',
'bootstrap3',
'django_cron',
```
What am I missing here? Thanks for your time.
Edit: Full pip freeze output:
```
(venv)[root@ myhost path]# pip freeze
IPy==0.75
PyOpenGL==3.0.1
SSSDConfig==1.12.2
backports.ssl-match-hostname==3.4.0.2
blivet==0.61.0.27
chardet==2.2.1
configobj==4.7.2
configshell-fb==1.1.14
coverage==3.6b3
cupshelpers==1.0
decorator==3.4.0
di==0.3
django-bootstrap-toolkit==2.15.0
dnspython==1.11.1
ethtool==0.8
firstboot==19.5
freeipa==2.0.0.alpha.0
fros==1.0
glusterfs-api==3.6.0.29
iniparse==0.4
initial-setup==0.3.9.23
iotop==0.6
ipaplatform==4.1.0
ipapython==4.1.0
javapackages==1.0.0
kerberos==1.1
kitchen==1.1.1
kmod==0.1
langtable==0.0.13
lxml==3.2.1
meld==3.11.0
netaddr==0.7.5
nose==1.3.0
numpy==1.7.1
pcp==1.0
policycoreutils-default-encoding==0.1
psutil==1.2.1
psycopg2==2.5.1
pyOpenSSL==0.13.1
pyasn1==0.1.6
pycups==1.9.63
pycurl==7.19.0
pygobject==3.8.2
pygpgme==0.3
pyinotify==0.9.4
pykickstart==1.99.43.17
pyliblzma==0.5.3
pyodbc==3.0.0-unsupported
pyparsing==1.5.6
pyparted==3.9
python-augeas==0.4.1
python-dateutil==1.5
python-default-encoding==0.1
python-dmidecode==3.10.13
python-ldap==2.4.15
python-meh==0.25.2
python-nss==0.16.0
python-yubico==1.2.1
pytz==2012d
pyudev==0.15
pyusb==1.0.0b1
pyxattr==0.5.1
qrcode==5.0.1
rtslib-fb==2.1.50
scdate==1.10.6
seobject==0.1
sepolicy==1.1
setroubleshoot==1.1
six==1.3.0
slip==0.4.0
slip.dbus==0.4.0
stevedore==0.14
targetcli-fb==2.1.fb37
urlgrabber==3.10
urwid==1.1.1
virtualenv==1.10.1
virtualenv-clone==0.2.4
virtualenvwrapper==4.3.1
wsgiref==0.1.2
yum-langpacks==0.4.2
yum-metadata-parser==1.1.4
```
|
2015/10/06
|
[
"https://Stackoverflow.com/questions/32977076",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1303827/"
] |
As the docs say, you need the `django-bootstrap3` package to use bootstrap3. Here is the [link](https://github.com/dyve/django-bootstrap3).
|
I defer to the answer above. Just pointing out that the django-bootstrap-toolkit, which is for v2 of Bootstrap, could be removed. Thanks for using these libraries!
|
5,880,781
|
Can anybody tell me what is wrong in this program? I face
```
syntaxerror unexpected character after line continuation character
```
when I run this program:
```
f = open(D\\python\\HW\\2_1 - Copy.cp,"r");
lines = f.readlines();
for i in lines:
thisline = i.split(" ");
```
|
2011/05/04
|
[
"https://Stackoverflow.com/questions/5880781",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/642564/"
] |
You need to quote that filename:
```
f = open("D\\python\\HW\\2_1 - Copy.cp", "r")
```
Otherwise the bare backslash after the D is interpreted as a line-continuation character, and should be followed by a newline. This is used to extend long expressions over multiple lines, for readability:
```
print "This is a long",\
"line of text",\
"that I'm printing."
```
Also, you shouldn't have semicolons (`;`) at the end of your statements in Python.
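As a side note, raw strings or forward slashes sidestep the backslash-escaping problem entirely. A small sketch (the path is just the asker's example):

```python
# Escaped backslashes, a raw string, and forward slashes all name the
# same Windows path without tripping the parser:
p1 = "D:\\python\\HW\\2_1 - Copy.cp"   # backslashes escaped by hand
p2 = r"D:\python\HW\2_1 - Copy.cp"     # raw string: backslashes taken literally
p3 = "D:/python/HW/2_1 - Copy.cp"      # forward slashes also work on Windows
print(p1 == p2)  # True
```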
|
Replace
`f = open(D\\python\\HW\\2_1 - Copy.cp,"r");`
by
`f = open("D:\\python\\HW\\2_1 - Copy.cp", "r")`
1. File path needs to be a string (constant)
2. need colon in Windows file path
3. space after comma for better style
4. ; after statement is allowed but fugly.
What tutorial are you using?
|
5,880,781
|
Can anybody tell me what is wrong in this program? I face
```
syntaxerror unexpected character after line continuation character
```
when I run this program:
```
f = open(D\\python\\HW\\2_1 - Copy.cp,"r");
lines = f.readlines();
for i in lines:
thisline = i.split(" ");
```
|
2011/05/04
|
[
"https://Stackoverflow.com/questions/5880781",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/642564/"
] |
You need to quote that filename:
```
f = open("D\\python\\HW\\2_1 - Copy.cp", "r")
```
Otherwise the bare backslash after the D is interpreted as a line-continuation character, and should be followed by a newline. This is used to extend long expressions over multiple lines, for readability:
```
print "This is a long",\
"line of text",\
"that I'm printing."
```
Also, you shouldn't have semicolons (`;`) at the end of your statements in Python.
|
The filename should be a string. In other words, it should be within quotes.
```py
f = open("D\\python\\HW\\2_1 - Copy.cp", "r")
lines = f.readlines()
for i in lines:
    thisline = i.split(" ")
```
You can also open the file using `with`
```py
with open("D\\python\\HW\\2_1 - Copy.cp", "r") as f:
    lines = f.readlines()
    for i in lines:
        thisline = i.split(" ")
```
There is no need to add semicolons (`;`) in Python; they are just clutter.
|
5,880,781
|
Can anybody tell me what is wrong in this program? I face
```
syntaxerror unexpected character after line continuation character
```
when I run this program:
```
f = open(D\\python\\HW\\2_1 - Copy.cp,"r");
lines = f.readlines();
for i in lines:
thisline = i.split(" ");
```
|
2011/05/04
|
[
"https://Stackoverflow.com/questions/5880781",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/642564/"
] |
Replace
`f = open(D\\python\\HW\\2_1 - Copy.cp,"r");`
by
`f = open("D:\\python\\HW\\2_1 - Copy.cp", "r")`
1. File path needs to be a string (constant)
2. need colon in Windows file path
3. space after comma for better style
4. ; after statement is allowed but fugly.
What tutorial are you using?
|
The filename should be a string. In other words, it should be within quotes.
```py
f = open("D\\python\\HW\\2_1 - Copy.cp", "r")
lines = f.readlines()
for i in lines:
    thisline = i.split(" ")
```
You can also open the file using `with`
```py
with open("D\\python\\HW\\2_1 - Copy.cp", "r") as f:
    lines = f.readlines()
    for i in lines:
        thisline = i.split(" ")
```
There is no need to add semicolons (`;`) in Python; they are just clutter.
|
16,297,892
|
Upgrading to 13.04 has totally messed up my system.
I am having this issue when running
```
./manage.py runserver
Traceback (most recent call last):
File "./manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
File "/home/rats/rats/local/lib/python2.7/site-packages/django/core/management
/__init__.py", line 4, in <module>
from optparse import OptionParser, NO_DEFAULT
File "/usr/lib/python2.7/optparse.py", line 77, in <module>
import textwrap
File "/usr/lib/python2.7/textwrap.py", line 10, in <module>
import string, re
File "/usr/lib/python2.7/string.py", line 83, in <module>
import re as _re
File "/home/rats/rats/lib/python2.7/re.py", line 105, in <module>
import sre_compile
File "/home/rats/rats/lib/python2.7/sre_compile.py", line 14, in <module>
import sre_parse
File "/home/rats/rats/lib/python2.7/sre_parse.py", line 17, in <module>
from sre_constants import *
File "/home/rats/rats/lib/python2.7/sre_constants.py", line 18, in <module>
from _sre import MAXREPEAT
ImportError: cannot import name MAXREPEAT
```
This is happening in both the real environment and the virtual environment.
I tried removing Python with
```
sudo apt-get remove python
```
and sadly it removed everything.
Now Google Chrome does not show any fonts.
I am looking to get things back to work.
Help is needed to configure it properly again.
|
2013/04/30
|
[
"https://Stackoverflow.com/questions/16297892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1080407/"
] |
If you are using virtualenvwrapper then you can recreate the virtualenv on top of the existing one (with no environment currently active):
`mkvirtualenv <existing name>`
which should pull in the latest (upgraded) python version from the system and fix any mismatch errors.
|
I have just solved this problem on my machine.
The problem was that Ubuntu 13.04 uses Python 2.7.4, which conflicts with the Python version baked into the `virtualenv`.
What I did was re-create the `virtualenv` with the new version of Python. I think it's the simplest way, but you could also try to upgrade the Python version in place without re-creating the whole `virtualenv`.
|
16,297,892
|
Upgrading to 13.04 has totally messed up my system.
I am having this issue when running
```
./manage.py runserver
Traceback (most recent call last):
File "./manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
File "/home/rats/rats/local/lib/python2.7/site-packages/django/core/management
/__init__.py", line 4, in <module>
from optparse import OptionParser, NO_DEFAULT
File "/usr/lib/python2.7/optparse.py", line 77, in <module>
import textwrap
File "/usr/lib/python2.7/textwrap.py", line 10, in <module>
import string, re
File "/usr/lib/python2.7/string.py", line 83, in <module>
import re as _re
File "/home/rats/rats/lib/python2.7/re.py", line 105, in <module>
import sre_compile
File "/home/rats/rats/lib/python2.7/sre_compile.py", line 14, in <module>
import sre_parse
File "/home/rats/rats/lib/python2.7/sre_parse.py", line 17, in <module>
from sre_constants import *
File "/home/rats/rats/lib/python2.7/sre_constants.py", line 18, in <module>
from _sre import MAXREPEAT
ImportError: cannot import name MAXREPEAT
```
This is happening in both the real environment and the virtual environment.
I tried removing Python with
```
sudo apt-get remove python
```
and sadly it removed everything.
Now Google Chrome does not show any fonts.
I am looking to get things back to work.
Help is needed to configure it properly again.
|
2013/04/30
|
[
"https://Stackoverflow.com/questions/16297892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1080407/"
] |
If you are using virtualenvwrapper then you can recreate the virtualenv on top of the existing one (with no environment currently active):
`mkvirtualenv <existing name>`
which should pull in the latest (upgraded) python version from the system and fix any mismatch errors.
|
You don't need to recreate the environment.
You can upgrade the virtualenv by running this command:
>
> virtualenv /PATH/TO/YOUR\_OLD\_ENV
>
>
>
The `YOUR_OLD_ENV` folder will be properly upgraded to Python 2.7.4.
|
16,297,892
|
Upgrading to 13.04 has totally messed up my system.
I am having this issue when running
```
./manage.py runserver
Traceback (most recent call last):
File "./manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
File "/home/rats/rats/local/lib/python2.7/site-packages/django/core/management
/__init__.py", line 4, in <module>
from optparse import OptionParser, NO_DEFAULT
File "/usr/lib/python2.7/optparse.py", line 77, in <module>
import textwrap
File "/usr/lib/python2.7/textwrap.py", line 10, in <module>
import string, re
File "/usr/lib/python2.7/string.py", line 83, in <module>
import re as _re
File "/home/rats/rats/lib/python2.7/re.py", line 105, in <module>
import sre_compile
File "/home/rats/rats/lib/python2.7/sre_compile.py", line 14, in <module>
import sre_parse
File "/home/rats/rats/lib/python2.7/sre_parse.py", line 17, in <module>
from sre_constants import *
File "/home/rats/rats/lib/python2.7/sre_constants.py", line 18, in <module>
from _sre import MAXREPEAT
ImportError: cannot import name MAXREPEAT
```
This is happening in both the real environment and the virtual environment.
I tried removing Python with
```
sudo apt-get remove python
```
and sadly it removed everything.
Now Google Chrome does not show any fonts.
I am looking to get things back to work.
Help is needed to configure it properly again.
|
2013/04/30
|
[
"https://Stackoverflow.com/questions/16297892",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1080407/"
] |
You don't need to recreate the environment.
You can upgrade the virtualenv by running this command:
>
> virtualenv /PATH/TO/YOUR\_OLD\_ENV
>
>
>
The `YOUR_OLD_ENV` folder will be properly upgraded to Python 2.7.4.
|
I have just solved this problem on my machine.
The problem was that Ubuntu 13.04 uses Python 2.7.4, which conflicts with the Python version baked into the `virtualenv`.
What I did was re-create the `virtualenv` with the new version of Python. I think it's the simplest way, but you could also try to upgrade the Python version in place without re-creating the whole `virtualenv`.
|
51,531,429
|
I have an ndarray of N 1x3 arrays I'd like to dot-multiply with a 3x3 matrix. I can't seem to figure out an efficient way to do this, as all the multi\_dot, tensordot, etc. methods seem to recursively sum or multiply the results of each operation. I simply want to apply a dot product to each row, the same way you can apply a scalar. I can do this with a for loop or list comprehension, but that is much too slow for my application.
```
N = np.asarray([[1, 2, 3], [4, 5, 6], [7, 8, 9], ...])
m = np.asarray([[10, 20, 30], [40, 50, 60], [70, 80, 90]])
```
I'd like to perform something such as this but without any python loops:
```
np.asarray([np.dot(m, a) for a in N])
```
so that it simply returns `[m * N[0], m * N[1], m * N[2], ...]`
What's the most efficient way to do this? And is there a way to do this so that if N is just a single 1x3 matrix, it will just output the same as np.dot(m, N)?
|
2018/07/26
|
[
"https://Stackoverflow.com/questions/51531429",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6587755/"
] |
Try this:
```
import numpy as np
N = np.asarray([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 2, 3], [4, 5, 6]])
m = np.asarray([[10, 20, 30], [40, 50, 60], [70, 80, 90]])
re0 = np.asarray([np.dot(m, a) for a in N]) # original
re1 = np.dot(m, N.T).T # efficient
print("result0:\n{}".format(re0))
print("result1:\n{}".format(re1))
print("Is result0 == result1? {}".format(np.array_equal(re0, re1)))
```
Output:
```
result0:
[[ 140 320 500]
[ 320 770 1220]
[ 500 1220 1940]
[ 140 320 500]
[ 320 770 1220]]
result1:
[[ 140 320 500]
[ 320 770 1220]
[ 500 1220 1940]
[ 140 320 500]
[ 320 770 1220]]
Is result0 == result1? True
```
Time cost:
```
import timeit
setup = '''
import numpy as np
N = np.random.random((1, 3))
m = np.asarray([[10, 20, 30], [40, 50, 60], [70, 80, 790]])
'''
>> timeit.timeit("np.asarray([np.dot(m, a) for a in N])", setup=setup, number=100000)
0.295798063278
>> timeit.timeit("np.dot(m, N.T).T", setup=setup, number=100000)
0.10135102272
# N = np.random.random((10, 3))
>> timeit.timeit("np.asarray([np.dot(m, a) for a in N])", setup=setup, number=100000)
1.7417007659969386
>> timeit.timeit("np.dot(m, N.T).T", setup=setup, number=100000)
0.1587108800013084
# N = np.random.random((100, 3))
>> timeit.timeit("np.asarray([np.dot(m, a) for a in N])", setup=setup, number=100000)
11.6454949379
>> timeit.timeit("np.dot(m, N.T).T", setup=setup, number=100000)
0.180465936661
```
|
First, regarding your last question. There's a difference between a (3,) `N` and (1,3):
```
In [171]: np.dot(m,[1,2,3])
Out[171]: array([140, 320, 500]) # (3,) result
In [172]: np.dot(m,[[1,2,3]])
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-172-e8006b318a32> in <module>()
----> 1 np.dot(m,[[1,2,3]])
ValueError: shapes (3,3) and (1,3) not aligned: 3 (dim 1) != 1 (dim 0)
```
Your iterative version produces a (1,3) result:
```
In [174]: np.array([np.dot(m,a) for a in [[1,2,3]]])
Out[174]: array([[140, 320, 500]])
```
Make `N` a (4,3) array (this helps keep the first dim of N distinct):
```
In [176]: N = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10,11,12]])
In [177]: N.shape
Out[177]: (4, 3)
In [178]: np.array([np.dot(m,a) for a in N])
Out[178]:
array([[ 140, 320, 500],
[ 320, 770, 1220],
[ 500, 1220, 1940],
[ 680, 1670, 2660]])
```
Result is (4,3).
A simple `dot` doesn't work (same as in the (1,3) case):
```
In [179]: np.dot(m,N)
...
ValueError: shapes (3,3) and (4,3) not aligned: 3 (dim 1) != 4 (dim 0)
In [180]: np.dot(m,N.T) # (3,3) dot with (3,4) -> (3,4)
Out[180]:
array([[ 140, 320, 500, 680],
[ 320, 770, 1220, 1670],
[ 500, 1220, 1940, 2660]])
```
So this needs another transpose to match your iterative result.
The explicit indices of `einsum` can also take care of this transpose:
```
In [181]: np.einsum('ij,kj->ki',m,N)
Out[181]:
array([[ 140, 320, 500],
[ 320, 770, 1220],
[ 500, 1220, 1940],
[ 680, 1670, 2660]])
```
Also works with the (1,3) case (but not with the (3,) case):
```
In [182]: np.einsum('ij,kj->ki',m,[[1,2,3]])
Out[182]: array([[140, 320, 500]])
```
`matmul`, `@` is also designed to calculate repeated dots - if the inputs are 3d (or broadcastable to that):
```
In [184]: (m@N[:,:,None]).shape
Out[184]: (4, 3, 1)
In [185]: (m@N[:,:,None])[:,:,0] # to squeeze out that last dimension
Out[185]:
array([[ 140, 320, 500],
[ 320, 770, 1220],
[ 500, 1220, 1940],
[ 680, 1670, 2660]])
```
The docs for `dot` and `matmul` describe what happens with 1d, 2d and 3d inputs. It can take some time, and experimentation, to get a feel for what is happening. The basic rule is: the last axis of A pairs with the second-to-last axis of B.
Your `N` is actually (n,3), i.e. `n` `(3,)` arrays. Here's what 4 (1,3) arrays look like:
```
In [186]: N1 = N[:,None,:]
In [187]: N1.shape
Out[187]: (4, 1, 3)
In [188]: N1
Out[188]:
array([[[ 1, 2, 3]],
[[ 4, 5, 6]],
[[ 7, 8, 9]],
[[10, 11, 12]]])
```
and the dot as before (4,1,3) dot (3,3).T -> (4,1,3) -> (4,3)
```
In [190]: N1.dot(m.T).squeeze()
Out[190]:
array([[ 140, 320, 500],
[ 320, 770, 1220],
[ 500, 1220, 1940],
[ 680, 1670, 2660]])
```
and n of those:
```
In [191]: np.array([np.dot(a,m.T).squeeze() for a in N1])
Out[191]:
array([[ 140, 320, 500],
[ 320, 770, 1220],
[ 500, 1220, 1940],
[ 680, 1670, 2660]])
```
|
25,570,507
|
Still working with LDAP...
The problem I submit today is this: I'm creating a posixGroup on an LDAP server using a custom method developed in Python with the Django framework. I attach the method code below.
The main issue is that the **gidNumber attribute is compulsory for the posixGroup class**, but it is usually not required when using a graphical LDAP client like phpLDAPadmin, since such clients fill this field automatically, like an auto-increment integer.
Here is the question: **is gidNumber an auto-increment attribute by default, or only when using clients like the one quoted above? Must I specify it during the posixGroup entry creation?**
```
def ldap_cn_entry(self):
import ldap.modlist as modlist
dn = u"cn=myGroupName,ou=plant,dc=ldap,dc=dem2m,dc=it"
# A dict to help build the "body" of the object
attrs = {}
attrs['objectclass'] = ['posixGroup']
attrs['cn'] = 'myGroupName'
attrs['gidNumber'] = '508'
# Convert our dict to nice syntax for the add-function using modlist-module
ldif = modlist.addModlist(attrs)
# Do the actual synchronous add-operation to the ldapserver
self.connector.add_s(dn, ldif)
```
* `connector` is first instantiated in the constructor of the class this method belongs to. The constructor also handles LDAP initialization and binding. The connection is then closed by the destructor.
* To use the method I begin by instantiating the class it belongs to, which also connects to the LDAP server. Then I use the method and finally destroy the object to close the connection. **Everything works, indeed, if I use this procedure to create a different kind of entry, or if I specify the gidNumber manually**.
The fact is I CAN'T specify the gidNumber every time I want to create a group. I should leave it to fill in automatically (if that's possible) or think about another way to complete it.
I'm not posting more code from the class I made, so as not to throng the page.
I'll provide more information if needed. Thank you.
|
2014/08/29
|
[
"https://Stackoverflow.com/questions/25570507",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3963403/"
] |
See the paragraph "A Note on Minification" in <https://docs.angularjs.org/tutorial/step_05>
The string array keeps references to your injected dependencies intact after minification:
>
> Since Angular infers the controller's dependencies from the names of
> arguments to the controller's constructor function, if you were to
> minify the JavaScript code for PhoneListCtrl controller, all of its
> function arguments would be minified as well, and the dependency
> injector would not be able to identify services correctly.
>
>
>
|
Controllers are callables, and their arguments must be injected with existing/valid/registered dependencies. Angular supports three ways:
1. If the passed controller (this also applies to providers) is an array, the last item is the controller, and the preceding items are expected to be strings naming the dependencies to inject. The number of names and the function's arity must match.
```
//parameter names are what I want.
c = mymodule.controller('MyController', ['$scope', '$http', function($s, $h) {}]);
```
2. Otherwise, if the passed controller has an `$inject` property, it is expected that such property is an array of strings being the names of the dependencies. The length of the array and the arity must match.
```
con = function($s, $h) {};
con.$inject = ['$scope', '$http'];
c = mymodule.controller('MyController', con);
```
3. Otherwise, the array of names to inject is taken from the parameter list, so they **must be named accordingly**.
```
c = mymodule.controller('MyController', function($scope, $http) {});
//one type, one minification, and you're screwed
```
You should **never** expect the controller to work if you don't explicitly set the names of the dependencies to inject. Relying on parameter names is a **bad** practice because:
1. The parameter names **will** change if you minify (and you **will** -some day- minify your script).
2. A typo in your parameter name, and you'll be hours looking for the error.
Suggestion: **always** use an explicit notation (way 1 or 2).
|
8,127,648
|
I have two threads in python (2.7).
I start them at the beginning of my program. While they execute, my program reaches the end and exits, killing both of my threads before waiting for resolution.
I'm trying to figure out how to wait for both threads to finish before exiting.
```
def connect_cam(ip, execute_lock):
try:
conn = TelnetConnection.TelnetClient(ip)
execute_lock.acquire()
ExecuteUpdate(conn, ip)
execute_lock.release()
except ValueError:
pass
execute_lock = thread.allocate_lock()
thread.start_new_thread(connect_cam, ( headset_ip, execute_lock ) )
thread.start_new_thread(connect_cam, ( handcam_ip, execute_lock ) )
```
In .NET I would use something like WaitAll() but I haven't found the equivalent in python. In my scenario, TelnetClient is a long operation which may result in a failure after a timeout.
|
2011/11/14
|
[
"https://Stackoverflow.com/questions/8127648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6367/"
] |
The low-level `thread` module is meant as a primitive interface to Python's threading machinery - use [`threading`](http://docs.python.org/library/threading.html#thread-objects) instead. Then, you can use `Thread.join()` to synchronize threads.
>
> Other threads can call a thread’s join() method. This blocks the
> calling thread until the thread whose join() method is called is
> terminated.
>
>
>
|
First, you ought to be using the [threading](http://docs.python.org/library/threading.html#module-threading) module, not the thread module. Next, have your main thread [join()](http://docs.python.org/library/threading.html#threading.Thread.join) the other threads.
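Concretely, the question's setup rewritten with `threading` (the worker body is stubbed out here; the IP strings are placeholders for the asker's `headset_ip`/`handcam_ip`):

```python
import threading

def connect_cam(ip, execute_lock):
    # stand-in for the question's telnet/update work
    with execute_lock:
        pass

execute_lock = threading.Lock()
t1 = threading.Thread(target=connect_cam, args=("headset-ip", execute_lock))
t2 = threading.Thread(target=connect_cam, args=("handcam-ip", execute_lock))
t1.start()
t2.start()
t1.join()   # blocks until t1 finishes
t2.join()   # blocks until t2 finishes
```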
|
8,127,648
|
I have two threads in python (2.7).
I start them at the beginning of my program. While they execute, my program reaches the end and exits, killing both of my threads before waiting for resolution.
I'm trying to figure out how to wait for both threads to finish before exiting.
```
def connect_cam(ip, execute_lock):
try:
conn = TelnetConnection.TelnetClient(ip)
execute_lock.acquire()
ExecuteUpdate(conn, ip)
execute_lock.release()
except ValueError:
pass
execute_lock = thread.allocate_lock()
thread.start_new_thread(connect_cam, ( headset_ip, execute_lock ) )
thread.start_new_thread(connect_cam, ( handcam_ip, execute_lock ) )
```
In .NET I would use something like WaitAll() but I haven't found the equivalent in python. In my scenario, TelnetClient is a long operation which may result in a failure after a timeout.
|
2011/11/14
|
[
"https://Stackoverflow.com/questions/8127648",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6367/"
] |
The low-level `thread` module is meant as a primitive interface to Python's threading machinery - use [`threading`](http://docs.python.org/library/threading.html#thread-objects) instead. Then, you can use `Thread.join()` to synchronize threads.
>
> Other threads can call a thread’s join() method. This blocks the
> calling thread until the thread whose join() method is called is
> terminated.
>
>
>
|
You can do something like this:
```
import threading
class connect_cam(threading.Thread):
def __init__(self, ip, execute_lock):
threading.Thread.__init__(self)
self.ip = ip
self.execute_lock = execute_lock
def run(self):
try:
conn = TelnetConnection.TelnetClient(self.ip)
self.execute_lock.acquire()
ExecuteUpdate(conn, self.ip)
self.execute_lock.release()
except ValueError:
pass
execute_lock = threading.Lock()
tr1 = connect_cam(headset_ip, execute_lock)
tr2 = connect_cam(handcam_ip, execute_lock)
tr1.start()
tr2.start()
tr1.join()
tr2.join()
```
With the `.join()` calls, the main thread waits for both threads (tr1 and tr2) to finish.
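For completeness: on Python 3 (or Python 2.7 with the `futures` backport), `concurrent.futures` gives a WaitAll-style one-liner. A sketch with a stubbed worker:

```python
from concurrent.futures import ThreadPoolExecutor, wait

def connect_cam(ip):
    return ip  # stand-in for the real telnet work

with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(connect_cam, ip) for ip in ("headset-ip", "handcam-ip")]
    done, not_done = wait(futures)     # closest analogue to .NET's WaitAll()

results = sorted(f.result() for f in done)
print(results)
```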
|
43,603,199
|
I am using Docker on a Python Flask webapp, but am getting an error when I try and run it.
```
$ sudo docker run -t imgcomparer6
unable to load configuration from app.py
```
**Python**
In my app.py file, the only call to `app.run()` in the webapp is inside the `if __name__ == '__main__':` block (seen [here](https://stackoverflow.com/questions/34615743/unable-to-load-configuration-from-uwsgi))
```
if __name__ == '__main__':
app.run(host="127.0.0.1", port=int("8000"), debug=True)
```
**Dockerfile**
```
FROM ubuntu:latest
#Update OS
RUN sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list
RUN apt-get update
RUN apt-get -y upgrade
# Install Python
RUN apt-get install -y python-dev python-pip
RUN mkdir /webapp/
# Add requirements.txt
ADD requirements.txt /webapp/
ADD requirements.txt .
# Install uwsgi Python web server
RUN pip install uwsgi
# Install app requirements
RUN pip install -r requirements.txt
# Create app directory
ADD . /webapp/
# Set the default directory for our environment
ENV HOME /webapp/
WORKDIR /webapp/
# Expose port 8000 for uwsgi
EXPOSE 8000
ENTRYPOINT ["uwsgi", "--http", "127.0.0.1:8000", "--module", "app:app", "--processes", "1", "--threads", "8"]
#ENTRYPOINT ["python"]
CMD ["app.py"]
```
**Directory Structure**
```
app.py
image_data.db
README.txt
requirements.txt
Dockerfile
templates
- index.html
static/
- image.js
- main.css
img/
- camera.png
images/
- empty
```
EDIT:
Docker images
```
castro@Ezri:~/Desktop/brian_castro_programming_test$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
imgcomparer6 latest d2af1b18ec87 59 minutes ago 430 MB
imgcomparer5 latest 305fa5062b41 About an hour ago 430 MB
<none> <none> e982e54b011a About an hour ago 430 MB
imgcomparer2 latest c7e3ad57be55 About an hour ago 430 MB
imgcomparer latest a1402ec1efb1 About an hour ago 430 MB
<none> <none> 8f5126108354 14 hours ago 425 MB
flask-sample-one latest 9bdc51fa4d7c 23 hours ago 453 MB
ubuntu latest 6a2f32de169d 12 days ago 117 MB
```
Image Log (gives error)
```
sudo docker logs imgcomparer6
Error: No such container: imgcomparer6
```
Tried running this, as suggested below:
'$ sudo docker run -t imgcomparer6 ; sudo docker logs $(docker ps -lq)'
```
unable to load configuration from app.py
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.27/containers/json?limit=1: dial unix /var/run/docker.sock: connect: permission denied
"docker logs" requires exactly 1 argument(s).
See 'docker logs --help'.
Usage: docker logs [OPTIONS] CONTAINER
Fetch the logs of a container
```
ps -a
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e80bfd0a3a11 imgcomparer6 "uwsgi --http 127...." 54 minutes ago Exited (1) 54 minutes ago musing_fermat
29c188ede9ba imgcomparer6 "uwsgi --http 127...." 54 minutes ago Exited (1) 54 minutes ago kind_jepsen
a58945d9cd86 imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago musing_wright
ca70b624df5e imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago brave_hugle
964a1366b105 imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago clever_almeida
155c296a3dce imgcomparer6 "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago jovial_heisenberg
0a6a3bb55b55 imgcomparer5 "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago sharp_mclean
76d4f40c4b82 e982e54b011a "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago kind_hodgkin
918954bf416a d73c44a6c215 "/bin/sh -c 'mkdir..." 2 hours ago Exited (1) 2 hours ago amazing_bassi
205276ba1ab2 d73c44a6c215 "/bin/sh -c 'mkdir..." 2 hours ago Exited (1) 2 hours ago distracted_joliot
86a180f071c6 d73c44a6c215 "/bin/sh -c '#(nop..." 2 hours ago Created goofy_torvalds
fc1ec345c236 imgcomparer2 "uwsgi --http 127...." 2 hours ago Created wizardly_boyd
b051d4cdf0c6 imgcomparer "uwsgi --http 127...." 2 hours ago Created jovial_mclean
ed78e965755c d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created elated_shirley
a65978d30c8f d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created vigilant_wright
760ac5a0281b d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created xenodochial_heyrovsky
9d7d8bcb2226 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created sleepy_noyce
36012d4c6115 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created adoring_hypatia
deacab89f416 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created objective_franklin
43e894f8fb9c d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created sleepy_hodgkin
2d190d0fc6e5 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created modest_hoover
b1640a039c31 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created affectionate_galileo
baf94cf2dc6e d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created affectionate_elion
2b54996907b6 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created elastic_wiles
663d4e096938 8f5126108354 "/bin/sh -c 'pip i..." 15 hours ago Exited (1) 15 hours ago admiring_agnesi
```
|
2017/04/25
|
[
"https://Stackoverflow.com/questions/43603199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1258509/"
] |
I think your code is OK.
You just need to close each ResultSet, Statement, and the Connection in a `finally` block (or use try-with-resources) so they are released even when a query fails.
|
Though your code works, I strongly suggest **refactoring it for better maintainability and readability** as shown below. Also, ensure that the resources are closed properly:
```
public void dashboardReports() {
handleTotalStocks();
handleTotalSales();
handleTotalPurchages();
//Add others
}
```
**handleTotalStocks() method:**
```
private void handleTotalStocks() {
    String total_stock_value = "select sum(price*closingstock) as tsv from purchase_table";
    try (Statement ps_tsv = connection.createStatement();
         ResultSet set_tsv = ps_tsv.executeQuery(total_stock_value)) {
        if (set_tsv.next()) {
            total_stock.setText(set_tsv.getString("tsv"));
        }
    } catch (SQLException e) {
        e.printStackTrace();   // or log/rethrow as appropriate
    }
}
//add other methods
```
|
43,603,199
|
I am using Docker on a Python Flask webapp, but am getting an error when I try and run it.
```
$ sudo docker run -t imgcomparer6
unable to load configuration from app.py
```
**Python**
In my app.py file, the only call to `app.run()` in the webapp is inside the `if __name__ == '__main__':` block (seen [here](https://stackoverflow.com/questions/34615743/unable-to-load-configuration-from-uwsgi))
```
if __name__ == '__main__':
app.run(host="127.0.0.1", port=int("8000"), debug=True)
```
**Dockerfile**
```
FROM ubuntu:latest
#Update OS
RUN sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list
RUN apt-get update
RUN apt-get -y upgrade
# Install Python
RUN apt-get install -y python-dev python-pip
RUN mkdir /webapp/
# Add requirements.txt
ADD requirements.txt /webapp/
ADD requirements.txt .
# Install uwsgi Python web server
RUN pip install uwsgi
# Install app requirements
RUN pip install -r requirements.txt
# Create app directory
ADD . /webapp/
# Set the default directory for our environment
ENV HOME /webapp/
WORKDIR /webapp/
# Expose port 8000 for uwsgi
EXPOSE 8000
ENTRYPOINT ["uwsgi", "--http", "127.0.0.1:8000", "--module", "app:app", "--processes", "1", "--threads", "8"]
#ENTRYPOINT ["python"]
CMD ["app.py"]
```
**Directory Structure**
```
app.py
image_data.db
README.txt
requirements.txt
Dockerfile
templates
- index.html
static/
- image.js
- main.css
img/
- camera.png
images/
- empty
```
EDIT:
Docker images
```
castro@Ezri:~/Desktop/brian_castro_programming_test$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
imgcomparer6 latest d2af1b18ec87 59 minutes ago 430 MB
imgcomparer5 latest 305fa5062b41 About an hour ago 430 MB
<none> <none> e982e54b011a About an hour ago 430 MB
imgcomparer2 latest c7e3ad57be55 About an hour ago 430 MB
imgcomparer latest a1402ec1efb1 About an hour ago 430 MB
|
2017/04/25
|
[
"https://Stackoverflow.com/questions/43603199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1258509/"
] |
Since you are only doing `SELECT` operations here, there is no real need for an explicit transaction: you are not changing the state of the database, so there is nothing to roll back. There is nothing wrong with grouping all `SELECT` statements inside a single `try` block. However, there is a potential drawback, namely that if one `SELECT` fails, your code will exit that `try` block and all subsequent queries will not run. If you can tolerate this, then you can leave your code as is. An analogy would be a set of lightbulbs wired in series; if one breaks, they all go out.
An alternative to what you have would be to use separate `try` blocks for each query. Then, even if an exception were to happen in one of them, it is possible that the others could complete successfully. The analogy here would be a series of lightbulbs in a parallel circuit.
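As a sketch of that "parallel circuit" approach, each query gets its own try block so one failure does not stop the rest. Plain `Runnable`s stand in for the JDBC `SELECT`s here so the example stays self-contained:

```java
import java.util.ArrayList;
import java.util.List;

public class IndependentQueries {
    // Each task runs in its own try block, so one failure does not
    // prevent the remaining tasks from running.
    static int runAll(List<Runnable> tasks) {
        int failures = 0;
        for (Runnable task : tasks) {
            try {
                task.run();
            } catch (RuntimeException e) {
                failures++; // log and continue with the next query
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        List<Runnable> tasks = new ArrayList<>();
        tasks.add(() -> System.out.println("query 1 ok"));
        tasks.add(() -> { throw new RuntimeException("query 2 failed"); });
        tasks.add(() -> System.out.println("query 3 ok"));
        System.out.println("failures: " + runAll(tasks));
    }
}
```

With real JDBC code, each per-query try block would be a try-with-resources over its own `Statement` and `ResultSet`.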
|
Though your code works, I strongly suggest you **refactor it for better maintainability and readability**, as shown below. Also, ensure that the resources are closed properly:
```
public void dashboardReports() {
handleTotalStocks();
handleTotalSales();
handleTotalPurchages();
//Add others
}
```
**handleTotalStocks() method:**
```
private void handleTotalStocks() {
    String total_stock_value = "select sum(price*closingstock) as tsv from purchase_table";
    try (Statement ps_tsv = connection.createStatement();
         ResultSet set_tsv = ps_tsv.executeQuery(total_stock_value)) {
        if (set_tsv.next()) {
            total_stock.setText(set_tsv.getString("tsv"));
        }
    } catch (SQLException e) {
        e.printStackTrace(); // or log/rethrow as appropriate
    }
}
//add other methods
```
|
43,603,199
|
I am using Docker on a Python Flask webapp, but I get an error when I try to run it.
```
$ sudo docker run -t imgcomparer6
unable to load configuration from app.py
```
**Python**
In my app.py file, the only call to `app.run()` in the webapp is inside the `if __name__ == '__main__':` block (see [here](https://stackoverflow.com/questions/34615743/unable-to-load-configuration-from-uwsgi))
```
if __name__ == '__main__':
app.run(host="127.0.0.1", port=int("8000"), debug=True)
```
**Dockerfile**
```
FROM ubuntu:latest
#Update OS
RUN sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list
RUN apt-get update
RUN apt-get -y upgrade
# Install Python
RUN apt-get install -y python-dev python-pip
RUN mkdir /webapp/
# Add requirements.txt
ADD requirements.txt /webapp/
ADD requirements.txt .
# Install uwsgi Python web server
RUN pip install uwsgi
# Install app requirements
RUN pip install -r requirements.txt
# Create app directory
ADD . /webapp/
# Set the default directory for our environment
ENV HOME /webapp/
WORKDIR /webapp/
# Expose port 8000 for uwsgi
EXPOSE 8000
ENTRYPOINT ["uwsgi", "--http", "127.0.0.1:8000", "--module", "app:app", "--processes", "1", "--threads", "8"]
#ENTRYPOINT ["python"]
CMD ["app.py"]
```
**Directory Structure**
```
app.py
image_data.db
README.txt
requirements.txt
Dockerfile
templates
- index.html
static/
- image.js
- main.css
img/
- camera.png
images/
- empty
```
EDIT:
Docker images
```
castro@Ezri:~/Desktop/brian_castro_programming_test$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
imgcomparer6 latest d2af1b18ec87 59 minutes ago 430 MB
imgcomparer5 latest 305fa5062b41 About an hour ago 430 MB
<none> <none> e982e54b011a About an hour ago 430 MB
imgcomparer2 latest c7e3ad57be55 About an hour ago 430 MB
imgcomparer latest a1402ec1efb1 About an hour ago 430 MB
<none> <none> 8f5126108354 14 hours ago 425 MB
flask-sample-one latest 9bdc51fa4d7c 23 hours ago 453 MB
ubuntu latest 6a2f32de169d 12 days ago 117 MB
```
Image Log (gives error)
```
sudo docker logs imgcomparer6
Error: No such container: imgcomparer6
```
Tried running this, as suggested below:
`$ sudo docker run -t imgcomparer6 ; sudo docker logs $(docker ps -lq)`
```
unable to load configuration from app.py
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.27/containers/json?limit=1: dial unix /var/run/docker.sock: connect: permission denied
"docker logs" requires exactly 1 argument(s).
See 'docker logs --help'.
Usage: docker logs [OPTIONS] CONTAINER
Fetch the logs of a container
```
`docker ps -a`
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e80bfd0a3a11 imgcomparer6 "uwsgi --http 127...." 54 minutes ago Exited (1) 54 minutes ago musing_fermat
29c188ede9ba imgcomparer6 "uwsgi --http 127...." 54 minutes ago Exited (1) 54 minutes ago kind_jepsen
a58945d9cd86 imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago musing_wright
ca70b624df5e imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago brave_hugle
964a1366b105 imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago clever_almeida
155c296a3dce imgcomparer6 "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago jovial_heisenberg
0a6a3bb55b55 imgcomparer5 "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago sharp_mclean
76d4f40c4b82 e982e54b011a "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago kind_hodgkin
918954bf416a d73c44a6c215 "/bin/sh -c 'mkdir..." 2 hours ago Exited (1) 2 hours ago amazing_bassi
205276ba1ab2 d73c44a6c215 "/bin/sh -c 'mkdir..." 2 hours ago Exited (1) 2 hours ago distracted_joliot
86a180f071c6 d73c44a6c215 "/bin/sh -c '#(nop..." 2 hours ago Created goofy_torvalds
fc1ec345c236 imgcomparer2 "uwsgi --http 127...." 2 hours ago Created wizardly_boyd
b051d4cdf0c6 imgcomparer "uwsgi --http 127...." 2 hours ago Created jovial_mclean
ed78e965755c d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created elated_shirley
a65978d30c8f d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created vigilant_wright
760ac5a0281b d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created xenodochial_heyrovsky
9d7d8bcb2226 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created sleepy_noyce
36012d4c6115 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created adoring_hypatia
deacab89f416 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created objective_franklin
43e894f8fb9c d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created sleepy_hodgkin
2d190d0fc6e5 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created modest_hoover
b1640a039c31 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created affectionate_galileo
baf94cf2dc6e d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created affectionate_elion
2b54996907b6 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created elastic_wiles
663d4e096938 8f5126108354 "/bin/sh -c 'pip i..." 15 hours ago Exited (1) 15 hours ago admiring_agnesi
```
|
2017/04/25
|
[
"https://Stackoverflow.com/questions/43603199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1258509/"
] |
If you are happy with all subsequent `SELECT`s failing when one fails, then I would change the method to throw an exception:
```
public void dashboardReports() throws SQLException
{
....
}
```
and then catch the `SQLException` from the calling method.
**Note:** I think it is better to throw/catch an `SQLException` rather than a generic `Exception`.
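A sketch of the calling side might look like the following; the method body is a simulated failure rather than the real `SELECT`s, since those need a live connection:

```java
import java.sql.SQLException;

public class DashboardCaller {
    // Stand-in for the real reporting method; it propagates SQLException
    // to the caller instead of swallowing it.
    static void dashboardReports() throws SQLException {
        throw new SQLException("simulated query failure");
    }

    public static void main(String[] args) {
        try {
            dashboardReports();
        } catch (SQLException e) {
            // One place to handle (or log) a failure in any report query.
            System.out.println("report failed: " + e.getMessage());
        }
    }
}
```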
|
Just make sure you close your statements and result sets:
```
try {
String total_stock_value="select sum(price*closingstock)as tsv from purchase_table";
try (Statement ps_tsv=connection.createStatement();
ResultSet set_tsv=ps_tsv.executeQuery(total_stock_value)) {
if(set_tsv.next())
{
total_stock.setText(set_tsv.getString("tsv"));
}
}
String tota_sales="select sum(INVOICE_VALUE) as iv from PARTYWISE_ACCOUNTS_LEDGER";
try (Statement st_total_sales=connection.createStatement();
ResultSet set_total_sales=st_total_sales.executeQuery(tota_sales)) {
if(set_total_sales.next())
{
total_sales.setText(set_total_sales.getString("iv"));
}
}
String total_purchases="select sum(CP_INVOICEVALUE)as cpi from COMPANY_PAYMENTS";
try (Statement st_tps=connection.createStatement();
ResultSet set_tps=st_tps.executeQuery(total_purchases)) {
if(set_tps.next())
{
total_purchases_label.setText(set_tps.getString("cpi"));
}
}
String total_collectionss="select sum(PAYMENT_REC) as payrec from PARTYWISE_ACCOUNTS_LEDGER";
try (Statement ps_toco=connection.createStatement();
ResultSet set_toco=ps_toco.executeQuery(total_collectionss)) {
if(set_toco.next())
{
total_collections.setText(set_toco.getString("payrec"));
}
}
String total_payments="select sum(CP_PAYMENTREC) as paid from COMPANY_PAYMENTS";
try (Statement ps_topa=connection.createStatement();
ResultSet set_topa=ps_topa.executeQuery(total_payments)) {
if(set_topa.next())
{
total_payments_label.setText(set_topa.getString("paid"));
}
}
    } catch (SQLException e) {
        // TODO: handle the exception
    }
```
|
43,603,199
|
|
2017/04/25
|
[
"https://Stackoverflow.com/questions/43603199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1258509/"
] |
This violates the [Single Responsibility](https://en.wikipedia.org/wiki/Single_responsibility_principle) and the [Single Layer of Abstraction](http://principles-wiki.net/principles:single_level_of_abstraction) principles.
So although this code is technically valid, you should not focus only on its **correctness**, but also on its readability. And testability. And I think neither of those is "great" in the code you are showing.
Thus; coming from a clean code (quality) perspective; I would rather advise to go for something along the lines of:
```
outer method ...
try {
helperMethod1();
helperMethod2();
} catch( ...
```
with a small helper for each of the different cases you have there. And of course, you wouldn't stop there; you would try to isolate *common* aspects of those helpers, and maybe find a way to use a single, more generic helper.
And of course: avoid catching **Exception** if possible. Instead, catch the **most specific** exception possible!
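A minimal sketch of that shape, with the helper bodies elided since the real `SELECT`s need a live connection:

```java
import java.sql.SQLException;

public class OuterMethod {
    // Each helper owns exactly one query (bodies elided here).
    static void helperMethod1() throws SQLException { /* first SELECT */ }
    static void helperMethod2() throws SQLException { /* second SELECT */ }

    static void outerMethod() {
        try {
            helperMethod1();
            helperMethod2();
        } catch (SQLException e) { // most specific exception, not Exception
            System.out.println("db error: " + e.getMessage());
        }
    }

    public static void main(String[] args) {
        outerMethod();
        System.out.println("done");
    }
}
```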
|
|
43,603,199
|
|
2017/04/25
|
[
"https://Stackoverflow.com/questions/43603199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1258509/"
] |
|
43,603,199
|
|
2017/04/25
|
[
"https://Stackoverflow.com/questions/43603199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1258509/"
] |
I think your code is OK.
You just need to close the `ResultSet`, `Statement`, and `Connection` in a `finally` block.
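The close-in-finally pattern, sketched with stub resources standing in for the real JDBC objects so it runs without a database:

```java
public class FinallyClose {
    // Stub standing in for ResultSet/Statement/Connection.
    static class Resource implements AutoCloseable {
        final String name;
        Resource(String name) { this.name = name; }
        @Override public void close() { System.out.println("closed " + name); }
    }

    public static void main(String[] args) {
        Resource conn = null, st = null, rs = null;
        try {
            conn = new Resource("connection");
            st = new Resource("statement");
            rs = new Resource("resultset");
            // ... run queries and read results here ...
        } finally {
            // Close in reverse order of acquisition, guarding against nulls
            // in case an earlier acquisition failed.
            if (rs != null) rs.close();
            if (st != null) st.close();
            if (conn != null) conn.close();
        }
    }
}
```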
|
1. You can add a method like `executeQuery(Connection conn, Statement st, String sql)` to encapsulate the boilerplate and reduce your line count.
2. Don't rely on the generic `Exception`; catch the SQL-specific exception classes too.
3. I don't see a `finally` block there to properly close resources, unless you are doing that somewhere else. Alternatively, you can use the [try-with-resources](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) syntax to eliminate the need for a `finally` block.
4. Decide what you need to do in the `catch` block: do you need to propagate the exception up the chain, or fail the program right there?
5. In my opinion, a `ResultSet` and a `Statement` should live as short a time as possible, so close each one as soon as you can rather than all in a single chunk. Point #1 will help with this.
From a technical correctness and validity perspective, there is no harm in writing the code the way you did, using a single try-catch for all the SQL and swallowing any exceptions (since I see only `SELECT` statements there), but there is also clean, readable, and maintainable code, and on that front your code looks poor.
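The encapsulation in point #1 could be sketched as below. The helper name `executeScalar` and the `Function` standing in for the Connection/Statement plumbing are hypothetical, chosen only so the sketch runs without a database:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ScalarQuery {
    // One method owns running a query and returning its single value;
    // callers never touch Statement or ResultSet directly.
    static String executeScalar(Function<String, String> db, String sql) {
        return db.apply(sql);
    }

    public static void main(String[] args) {
        // A Map plays the role of the database for this sketch.
        Map<String, String> fakeDb = new HashMap<>();
        String sql = "select sum(INVOICE_VALUE) as iv from PARTYWISE_ACCOUNTS_LEDGER";
        fakeDb.put(sql, "1200");
        System.out.println("total sales: " + executeScalar(fakeDb::get, sql));
    }
}
```

In the real code, the body of `executeScalar` would be a try-with-resources over a `Statement` and `ResultSet`, returning the first column of the first row.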
|
43,603,199
|
|
2017/04/25
|
[
"https://Stackoverflow.com/questions/43603199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1258509/"
] |
If you are happy with all subsequent `SELECT`s failing if one fails, then I would change the method to throw an exception
```
public void dashboardReports() throws SQLException
{
....
}
```
and then catch the SQLException from the calling method.
**Note** I think it is better to throw/catch a `SQLException` rather than an `Exception`.
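For illustration, here is a minimal, runnable sketch of that shape. The `dashboardReports` body below is a hypothetical stand-in (it just throws, since we have no database here); the point is that the method declares `throws SQLException` and the caller decides what a failure means:

```java
import java.sql.SQLException;

public class ReportRunner {
    // Hypothetical stand-in for the real dashboardReports(); a real version
    // would run its SELECTs here and let any SQLException propagate.
    static void dashboardReports() throws SQLException {
        throw new SQLException("Unknown column 'start_times' in 'field list'");
    }

    // The calling method catches the specific SQLException, not Exception.
    static String runReport() {
        try {
            dashboardReports();
            return "report ok";
        } catch (SQLException e) {
            return "report failed: " + e.getMessage();
        }
    }
}
```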
|
1. You can have a method like `executeQuery(Connection conn, Statement st, String sql)` to encapsulate and reduce your lines of code.
2. Don't rely on the generic `Exception`; catch SQL-specific exception classes too.
3. I don't see a `finally` block there to properly close resources, unless you are doing that somewhere else. Alternatively, you can use the [try-with-resources](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) syntax to eliminate the need for a `finally` block.
4. Decide what you need to do in the `catch` block: do you need to propagate the exception up the chain, or fail the program right there?
5. In my opinion, a `ResultSet` and a `Statement` should live for as short a time as possible, so try closing them as soon as you can rather than waiting to close them all in a single chunk. Point 1 will help in achieving this.

From a technical correctness and validity perspective, there is no harm in writing the code the way you did (using a single try/catch for all the SQL and swallowing any exceptions, since I see only `SELECT`s there), but clean, readable, and maintainable code matters too, and on that front your code looks poor.
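To make point 3 concrete, here is a small runnable sketch of try-with-resources. `FakeStatement` is an illustrative stand-in (we have no database here); real JDBC `Connection`, `Statement`, and `ResultSet` implement `AutoCloseable` in the same way, so the same pattern applies:

```java
import java.util.ArrayList;
import java.util.List;

public class TryWithResourcesDemo {
    static final List<String> events = new ArrayList<>();

    // Minimal AutoCloseable standing in for a JDBC Statement.
    static class FakeStatement implements AutoCloseable {
        void executeQuery(String sql) {
            events.add("ran: " + sql);
        }
        @Override
        public void close() {
            events.add("closed");
        }
    }

    static void runQuery(String sql) {
        // try-with-resources closes the statement even if executeQuery throws,
        // with no explicit finally block needed
        try (FakeStatement st = new FakeStatement()) {
            st.executeQuery(sql);
        }
    }
}
```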
|
43,603,199
|
I am using Docker on a Python Flask webapp, but am getting an error when I try and run it.
```
$ sudo docker run -t imgcomparer6
unable to load configuration from app.py
```
**Python**
In my app.py file, the only call to `app.run()` in the webapp is inside the `if __name__ == '__main__':` block (seen [here](https://stackoverflow.com/questions/34615743/unable-to-load-configuration-from-uwsgi))
```
if __name__ == '__main__':
app.run(host="127.0.0.1", port=int("8000"), debug=True)
```
**Dockerfile**
```
FROM ubuntu:latest
#Update OS
RUN sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list
RUN apt-get update
RUN apt-get -y upgrade
# Install Python
RUN apt-get install -y python-dev python-pip
RUN mkdir /webapp/
# Add requirements.txt
ADD requirements.txt /webapp/
ADD requirements.txt .
# Install uwsgi Python web server
RUN pip install uwsgi
# Install app requirements
RUN pip install -r requirements.txt
# Create app directory
ADD . /webapp/
# Set the default directory for our environment
ENV HOME /webapp/
WORKDIR /webapp/
# Expose port 8000 for uwsgi
EXPOSE 8000
ENTRYPOINT ["uwsgi", "--http", "127.0.0.1:8000", "--module", "app:app", "--processes", "1", "--threads", "8"]
#ENTRYPOINT ["python"]
CMD ["app.py"]
```
**Directory Structure**
```
app.py
image_data.db
README.txt
requirements.txt
Dockerfile
templates
- index.html
static/
- image.js
- main.css
img/
- camera.png
images/
- empty
```
EDIT:
Docker images
```
castro@Ezri:~/Desktop/brian_castro_programming_test$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
imgcomparer6 latest d2af1b18ec87 59 minutes ago 430 MB
imgcomparer5 latest 305fa5062b41 About an hour ago 430 MB
<none> <none> e982e54b011a About an hour ago 430 MB
imgcomparer2 latest c7e3ad57be55 About an hour ago 430 MB
imgcomparer latest a1402ec1efb1 About an hour ago 430 MB
<none> <none> 8f5126108354 14 hours ago 425 MB
flask-sample-one latest 9bdc51fa4d7c 23 hours ago 453 MB
ubuntu latest 6a2f32de169d 12 days ago 117 MB
```
Image Log (gives error)
```
sudo docker logs imgcomparer6
Error: No such container: imgcomparer6
```
Tried running this, as suggested below:
`$ sudo docker run -t imgcomparer6 ; sudo docker logs $(docker ps -lq)`
```
unable to load configuration from app.py
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.27/containers/json?limit=1: dial unix /var/run/docker.sock: connect: permission denied
"docker logs" requires exactly 1 argument(s).
See 'docker logs --help'.
Usage: docker logs [OPTIONS] CONTAINER
Fetch the logs of a container
```
`docker ps -a`
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e80bfd0a3a11 imgcomparer6 "uwsgi --http 127...." 54 minutes ago Exited (1) 54 minutes ago musing_fermat
29c188ede9ba imgcomparer6 "uwsgi --http 127...." 54 minutes ago Exited (1) 54 minutes ago kind_jepsen
a58945d9cd86 imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago musing_wright
ca70b624df5e imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago brave_hugle
964a1366b105 imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago clever_almeida
155c296a3dce imgcomparer6 "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago jovial_heisenberg
0a6a3bb55b55 imgcomparer5 "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago sharp_mclean
76d4f40c4b82 e982e54b011a "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago kind_hodgkin
918954bf416a d73c44a6c215 "/bin/sh -c 'mkdir..." 2 hours ago Exited (1) 2 hours ago amazing_bassi
205276ba1ab2 d73c44a6c215 "/bin/sh -c 'mkdir..." 2 hours ago Exited (1) 2 hours ago distracted_joliot
86a180f071c6 d73c44a6c215 "/bin/sh -c '#(nop..." 2 hours ago Created goofy_torvalds
fc1ec345c236 imgcomparer2 "uwsgi --http 127...." 2 hours ago Created wizardly_boyd
b051d4cdf0c6 imgcomparer "uwsgi --http 127...." 2 hours ago Created jovial_mclean
ed78e965755c d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created elated_shirley
a65978d30c8f d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created vigilant_wright
760ac5a0281b d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created xenodochial_heyrovsky
9d7d8bcb2226 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created sleepy_noyce
36012d4c6115 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created adoring_hypatia
deacab89f416 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created objective_franklin
43e894f8fb9c d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created sleepy_hodgkin
2d190d0fc6e5 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created modest_hoover
b1640a039c31 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created affectionate_galileo
baf94cf2dc6e d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created affectionate_elion
2b54996907b6 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created elastic_wiles
663d4e096938 8f5126108354 "/bin/sh -c 'pip i..." 15 hours ago Exited (1) 15 hours ago admiring_agnesi
```
|
2017/04/25
|
[
"https://Stackoverflow.com/questions/43603199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1258509/"
] |
Since you are only doing `SELECT` operations here, there is no real need for an explicit transaction, because you are not changing the state of the database and there is nothing to roll back. There is nothing wrong with grouping all the `SELECT` statements inside a single `try` block. However, there is a potential drawback: if one `SELECT` fails, your code will exit that `try` block and none of the subsequent queries will run. If you can tolerate this, then you can leave your code as is. An analogy would be a string of lightbulbs wired in series; if one breaks, they all go out.
An alternative would be to use a separate `try` block for each query. Then, even if an exception occurs in one of them, the others can still complete successfully. The analogy here would be lightbulbs wired in a parallel circuit.
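A runnable sketch of the "parallel circuit" version follows. The `select` method is a hypothetical stand-in for executing one `SELECT` (it throws for a deliberately bad column name, since there is no real database here); the structure to note is the one-`try`-per-query loop:

```java
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class PerQueryTry {
    // Hypothetical stand-in for running one SELECT against the database.
    static void select(String sql) throws SQLException {
        if (sql.contains("bad_column")) {
            throw new SQLException("Unknown column 'bad_column' in 'field list'");
        }
    }

    static List<String> runAll(String... queries) {
        List<String> results = new ArrayList<>();
        for (String q : queries) {
            // one try block per query: a failure here does not stop the others
            try {
                select(q);
                results.add(q + ": ok");
            } catch (SQLException e) {
                results.add(q + ": failed");
            }
        }
        return results;
    }
}
```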
|
1. You can have a method like `executeQuery(Connection conn, Statement st, String sql)` to encapsulate and reduce your lines of code.
2. Don't rely on the generic `Exception`; catch SQL-specific exception classes too.
3. I don't see a `finally` block there to properly close resources, unless you are doing that somewhere else. Alternatively, you can use the [try-with-resources](https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html) syntax to eliminate the need for a `finally` block.
4. Decide what you need to do in the `catch` block: do you need to propagate the exception up the chain, or fail the program right there?
5. In my opinion, a `ResultSet` and a `Statement` should live for as short a time as possible, so try closing them as soon as you can rather than waiting to close them all in a single chunk. Point 1 will help in achieving this.

From a technical correctness and validity perspective, there is no harm in writing the code the way you did (using a single try/catch for all the SQL and swallowing any exceptions, since I see only `SELECT`s there), but clean, readable, and maintainable code matters too, and on that front your code looks poor.
|
43,603,199
|
I am using Docker on a Python Flask webapp, but am getting an error when I try and run it.
```
$ sudo docker run -t imgcomparer6
unable to load configuration from app.py
```
**Python**
In my app.py file, the only call to `app.run()` in the webapp is inside the `if __name__ == '__main__':` block (seen [here](https://stackoverflow.com/questions/34615743/unable-to-load-configuration-from-uwsgi))
```
if __name__ == '__main__':
app.run(host="127.0.0.1", port=int("8000"), debug=True)
```
**Dockerfile**
```
FROM ubuntu:latest
#Update OS
RUN sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list
RUN apt-get update
RUN apt-get -y upgrade
# Install Python
RUN apt-get install -y python-dev python-pip
RUN mkdir /webapp/
# Add requirements.txt
ADD requirements.txt /webapp/
ADD requirements.txt .
# Install uwsgi Python web server
RUN pip install uwsgi
# Install app requirements
RUN pip install -r requirements.txt
# Create app directory
ADD . /webapp/
# Set the default directory for our environment
ENV HOME /webapp/
WORKDIR /webapp/
# Expose port 8000 for uwsgi
EXPOSE 8000
ENTRYPOINT ["uwsgi", "--http", "127.0.0.1:8000", "--module", "app:app", "--processes", "1", "--threads", "8"]
#ENTRYPOINT ["python"]
CMD ["app.py"]
```
**Directory Structure**
```
app.py
image_data.db
README.txt
requirements.txt
Dockerfile
templates
- index.html
static/
- image.js
- main.css
img/
- camera.png
images/
- empty
```
EDIT:
Docker images
```
castro@Ezri:~/Desktop/brian_castro_programming_test$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
imgcomparer6 latest d2af1b18ec87 59 minutes ago 430 MB
imgcomparer5 latest 305fa5062b41 About an hour ago 430 MB
<none> <none> e982e54b011a About an hour ago 430 MB
imgcomparer2 latest c7e3ad57be55 About an hour ago 430 MB
imgcomparer latest a1402ec1efb1 About an hour ago 430 MB
<none> <none> 8f5126108354 14 hours ago 425 MB
flask-sample-one latest 9bdc51fa4d7c 23 hours ago 453 MB
ubuntu latest 6a2f32de169d 12 days ago 117 MB
```
Image Log (gives error)
```
sudo docker logs imgcomparer6
Error: No such container: imgcomparer6
```
Tried running this, as suggested below:
`$ sudo docker run -t imgcomparer6 ; sudo docker logs $(docker ps -lq)`
```
unable to load configuration from app.py
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.27/containers/json?limit=1: dial unix /var/run/docker.sock: connect: permission denied
"docker logs" requires exactly 1 argument(s).
See 'docker logs --help'.
Usage: docker logs [OPTIONS] CONTAINER
Fetch the logs of a container
```
`docker ps -a`
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e80bfd0a3a11 imgcomparer6 "uwsgi --http 127...." 54 minutes ago Exited (1) 54 minutes ago musing_fermat
29c188ede9ba imgcomparer6 "uwsgi --http 127...." 54 minutes ago Exited (1) 54 minutes ago kind_jepsen
a58945d9cd86 imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago musing_wright
ca70b624df5e imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago brave_hugle
964a1366b105 imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago clever_almeida
155c296a3dce imgcomparer6 "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago jovial_heisenberg
0a6a3bb55b55 imgcomparer5 "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago sharp_mclean
76d4f40c4b82 e982e54b011a "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago kind_hodgkin
918954bf416a d73c44a6c215 "/bin/sh -c 'mkdir..." 2 hours ago Exited (1) 2 hours ago amazing_bassi
205276ba1ab2 d73c44a6c215 "/bin/sh -c 'mkdir..." 2 hours ago Exited (1) 2 hours ago distracted_joliot
86a180f071c6 d73c44a6c215 "/bin/sh -c '#(nop..." 2 hours ago Created goofy_torvalds
fc1ec345c236 imgcomparer2 "uwsgi --http 127...." 2 hours ago Created wizardly_boyd
b051d4cdf0c6 imgcomparer "uwsgi --http 127...." 2 hours ago Created jovial_mclean
ed78e965755c d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created elated_shirley
a65978d30c8f d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created vigilant_wright
760ac5a0281b d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created xenodochial_heyrovsky
9d7d8bcb2226 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created sleepy_noyce
36012d4c6115 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created adoring_hypatia
deacab89f416 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created objective_franklin
43e894f8fb9c d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created sleepy_hodgkin
2d190d0fc6e5 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created modest_hoover
b1640a039c31 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created affectionate_galileo
baf94cf2dc6e d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created affectionate_elion
2b54996907b6 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created elastic_wiles
663d4e096938 8f5126108354 "/bin/sh -c 'pip i..." 15 hours ago Exited (1) 15 hours ago admiring_agnesi
```
|
2017/04/25
|
[
"https://Stackoverflow.com/questions/43603199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1258509/"
] |
This violates the [Single Responsibility](https://en.wikipedia.org/wiki/Single_responsibility_principle) and the [Single Layer of Abstraction](http://principles-wiki.net/principles:single_level_of_abstraction) principles.
So although this code is technically valid, you should not focus only on its **correctness**, but also on its readability. And testability. And I think neither of those is "great" in the code you are showing.
Thus; coming from a clean code (quality) perspective; I would rather advise to go for something along the lines of:
```
outer method ...
try {
helperMethod1();
helperMethod2();
} catch( ...
```
with a small helper for each of the different cases you got there. And of course, you wouldn't stop there; but try to isolate *common* aspects of those helpers; and to maybe find ways to go with a single, more generic helper.
And of course: you try to avoid catching **Exception** if possible. Instead, you catch the **most specific** exception possible!
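A runnable sketch of that refactoring direction, under stated assumptions: `countRows` below is a hypothetical helper (a real one would run a `SELECT COUNT(*)` over JDBC; here it fakes a result so the sketch is self-contained), and `buildReport` is the single generic helper that replaces several near-identical blocks while catching the most specific exception:

```java
import java.sql.SQLException;
import java.util.LinkedHashMap;
import java.util.Map;

public class DashboardReport {
    // Hypothetical helper; a real version would run SELECT COUNT(*) via JDBC.
    static int countRows(String table) throws SQLException {
        if (table.startsWith("missing_")) {
            throw new SQLException("Table '" + table + "' doesn't exist");
        }
        return table.length(); // stand-in result, just for the sketch
    }

    // One generic helper instead of repeated copy-pasted blocks; it catches
    // SQLException (the most specific exception), never bare Exception.
    static Map<String, Integer> buildReport(String... tables) {
        Map<String, Integer> report = new LinkedHashMap<>();
        for (String t : tables) {
            try {
                report.put(t, countRows(t));
            } catch (SQLException e) {
                report.put(t, -1); // record the failure, keep going
            }
        }
        return report;
    }
}
```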
|
A better way is to create a method that does the common operations:
```
public boolean execute(String query) throws SQLException {
    Statement ps_toco = connection.createStatement();
    ResultSet set_toco = ps_toco.executeQuery(query);
    return set_toco.next();
}
```
When you call this method, surround the call with a try/catch block.
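For illustration, a runnable sketch of such a caller. The `execute` stub below stands in for the helper above (there is no real `Connection` here, so it fakes the `boolean` that `ResultSet.next()` would return); the part being demonstrated is wrapping each call in its own try/catch:

```java
import java.sql.SQLException;

public class QueryCaller {
    // Stub standing in for the JDBC helper; a real version would create a
    // Statement on an open Connection and run the query.
    static boolean execute(String query) throws SQLException {
        if (query.isEmpty()) {
            throw new SQLException("empty query");
        }
        return true; // pretend the result set had at least one row
    }

    // The caller surrounds each use of the helper with try/catch.
    static String safeExecute(String query) {
        try {
            return execute(query) ? "has rows" : "no rows";
        } catch (SQLException e) {
            return "error: " + e.getMessage();
        }
    }
}
```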
|
43,603,199
|
I am using Docker on a Python Flask webapp, but am getting an error when I try and run it.
```
$ sudo docker run -t imgcomparer6
unable to load configuration from app.py
```
**Python**
In my app.py file, the only call to `app.run()` in the webapp is inside the `if __name__ == '__main__':` block (seen [here](https://stackoverflow.com/questions/34615743/unable-to-load-configuration-from-uwsgi))
```
if __name__ == '__main__':
app.run(host="127.0.0.1", port=int("8000"), debug=True)
```
**Dockerfile**
```
FROM ubuntu:latest
#Update OS
RUN sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list
RUN apt-get update
RUN apt-get -y upgrade
# Install Python
RUN apt-get install -y python-dev python-pip
RUN mkdir /webapp/
# Add requirements.txt
ADD requirements.txt /webapp/
ADD requirements.txt .
# Install uwsgi Python web server
RUN pip install uwsgi
# Install app requirements
RUN pip install -r requirements.txt
# Create app directory
ADD . /webapp/
# Set the default directory for our environment
ENV HOME /webapp/
WORKDIR /webapp/
# Expose port 8000 for uwsgi
EXPOSE 8000
ENTRYPOINT ["uwsgi", "--http", "127.0.0.1:8000", "--module", "app:app", "--processes", "1", "--threads", "8"]
#ENTRYPOINT ["python"]
CMD ["app.py"]
```
**Directory Structure**
```
app.py
image_data.db
README.txt
requirements.txt
Dockerfile
templates
- index.html
static/
- image.js
- main.css
img/
- camera.png
images/
- empty
```
EDIT:
Docker images
```
castro@Ezri:~/Desktop/brian_castro_programming_test$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
imgcomparer6 latest d2af1b18ec87 59 minutes ago 430 MB
imgcomparer5 latest 305fa5062b41 About an hour ago 430 MB
<none> <none> e982e54b011a About an hour ago 430 MB
imgcomparer2 latest c7e3ad57be55 About an hour ago 430 MB
imgcomparer latest a1402ec1efb1 About an hour ago 430 MB
<none> <none> 8f5126108354 14 hours ago 425 MB
flask-sample-one latest 9bdc51fa4d7c 23 hours ago 453 MB
ubuntu latest 6a2f32de169d 12 days ago 117 MB
```
Image Log (gives error)
```
sudo docker logs imgcomparer6
Error: No such container: imgcomparer6
```
Tried running this, as suggested below:
`$ sudo docker run -t imgcomparer6 ; sudo docker logs $(docker ps -lq)`
```
unable to load configuration from app.py
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.27/containers/json?limit=1: dial unix /var/run/docker.sock: connect: permission denied
"docker logs" requires exactly 1 argument(s).
See 'docker logs --help'.
Usage: docker logs [OPTIONS] CONTAINER
Fetch the logs of a container
```
`docker ps -a`
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e80bfd0a3a11 imgcomparer6 "uwsgi --http 127...." 54 minutes ago Exited (1) 54 minutes ago musing_fermat
29c188ede9ba imgcomparer6 "uwsgi --http 127...." 54 minutes ago Exited (1) 54 minutes ago kind_jepsen
a58945d9cd86 imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago musing_wright
ca70b624df5e imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago brave_hugle
964a1366b105 imgcomparer6 "uwsgi --http 127...." 55 minutes ago Exited (1) 55 minutes ago clever_almeida
155c296a3dce imgcomparer6 "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago jovial_heisenberg
0a6a3bb55b55 imgcomparer5 "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago sharp_mclean
76d4f40c4b82 e982e54b011a "uwsgi --http 127...." 2 hours ago Exited (1) 2 hours ago kind_hodgkin
918954bf416a d73c44a6c215 "/bin/sh -c 'mkdir..." 2 hours ago Exited (1) 2 hours ago amazing_bassi
205276ba1ab2 d73c44a6c215 "/bin/sh -c 'mkdir..." 2 hours ago Exited (1) 2 hours ago distracted_joliot
86a180f071c6 d73c44a6c215 "/bin/sh -c '#(nop..." 2 hours ago Created goofy_torvalds
fc1ec345c236 imgcomparer2 "uwsgi --http 127...." 2 hours ago Created wizardly_boyd
b051d4cdf0c6 imgcomparer "uwsgi --http 127...." 2 hours ago Created jovial_mclean
ed78e965755c d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created elated_shirley
a65978d30c8f d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created vigilant_wright
760ac5a0281b d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created xenodochial_heyrovsky
9d7d8bcb2226 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created sleepy_noyce
36012d4c6115 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created adoring_hypatia
deacab89f416 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created objective_franklin
43e894f8fb9c d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created sleepy_hodgkin
2d190d0fc6e5 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created modest_hoover
b1640a039c31 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created affectionate_galileo
baf94cf2dc6e d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created affectionate_elion
2b54996907b6 d73c44a6c215 "/bin/sh -c '#(nop..." 3 hours ago Created elastic_wiles
663d4e096938 8f5126108354 "/bin/sh -c 'pip i..." 15 hours ago Exited (1) 15 hours ago admiring_agnesi
```
|
2017/04/25
|
[
"https://Stackoverflow.com/questions/43603199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1258509/"
] |
I think your code is OK.
You need to close the `ResultSet`, `Statement`, and `Connection` in a `finally` block.
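A minimal runnable sketch of that shape. The `close` calls below are illustrative stand-ins for `resultSet.close()`, `statement.close()`, and `connection.close()` (there is no real database here); the point is that the `finally` block runs on both the success and the failure path:

```java
import java.util.ArrayList;
import java.util.List;

public class FinallyClose {
    static final List<String> log = new ArrayList<>();

    // Stand-in for closing a JDBC resource.
    static void close(String resource) {
        log.add("closed " + resource);
    }

    static void runReport(boolean fail) {
        try {
            log.add("query ran");
            if (fail) {
                throw new RuntimeException("query blew up");
            }
        } catch (RuntimeException e) {
            log.add("caught: " + e.getMessage());
        } finally {
            // runs whether or not the query failed
            close("resultset");
            close("statement");
            close("connection");
        }
    }
}
```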
|
A better way is to create a method that does the common operations:
```
public boolean execute(String query) throws SQLException {
    Statement ps_toco = connection.createStatement();
    ResultSet set_toco = ps_toco.executeQuery(query);
    return set_toco.next();
}
```
When you call this method, surround the call with a try/catch block.
|
47,961,437
|
I'm using Jupyter Notebook to develop some Python. This is my first stab at logging errors and I'm having an issue where no errors are logged to my error file.
I'm using:
```
import logging
logger = logging.getLogger('error')
logger.propagate = False
hdlr = logging.FileHandler("error.log")
formatter = logging.Formatter('%(asctime)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)
```
to create the logger.
I'm then using a try/Except block which is being called on purpose (for testing) by using a column that doesn't exist in my database:
```
try:
some_bad_call_to_the_database('start',group_id)
except Exception as e:
logger.exception("an error I would like to log")
print traceback.format_exc(limit=1)
```
and the exception is called as can be seen from my output in my notebook:
```
Traceback (most recent call last):
File "<ipython-input-10-ef8532b8e6e0>", line 19, in <module>
some_bad_call_to_the_database('start',group_id)
InternalError: (1054, u"Unknown column 'start_times' in 'field list'")
```
However, error.log is not being written to. Any thoughts would be appreciated.
|
2017/12/24
|
[
"https://Stackoverflow.com/questions/47961437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1694657/"
] |
If **spark-shell** doesn't show this line on start:
>
> Spark context available as 'sc' (master = local[\*], app id = local-XXX).
>
>
>
Run
```
val sc = SparkContext.getOrCreate()
```
|
The issue is that you created `sc` as a `SparkConf`, not a `SparkContext` (the two names are easy to confuse).
---
To use the `parallelize` method in Spark 2.0 (or any other version), `sc` must be a `SparkContext` and not a `SparkConf`. The correct code should look like this:
```
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
val sparkConf = new SparkConf().setAppName("myname").setMaster("mast")
val sc = new SparkContext(sparkConf)
val data = Array(1, 2, 3, 4, 5)
val distData = sc.parallelize(data)
```
This will give you the desired result.
|
47,961,437
|
I'm using Jupyter Notebook to develop some Python. This is my first stab at logging errors and I'm having an issue where no errors are logged to my error file.
I'm using:
```
import logging
logger = logging.getLogger('error')
logger.propagate = False
hdlr = logging.FileHandler("error.log")
formatter = logging.Formatter('%(asctime)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)
```
to create the logger.
I'm then using a try/Except block which is being called on purpose (for testing) by using a column that doesn't exist in my database:
```
try:
some_bad_call_to_the_database('start',group_id)
except Exception as e:
logger.exception("an error I would like to log")
print traceback.format_exc(limit=1)
```
and the exception is called as can be seen from my output in my notebook:
```
Traceback (most recent call last):
File "<ipython-input-10-ef8532b8e6e0>", line 19, in <module>
some_bad_call_to_the_database('start',group_id)
InternalError: (1054, u"Unknown column 'start_times' in 'field list'")
```
However, error.log is not being written to. Any thoughts would be appreciated.
|
2017/12/24
|
[
"https://Stackoverflow.com/questions/47961437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1694657/"
] |
The issue is that you created `sc` as a `SparkConf`, not a `SparkContext` (the two names are easy to confuse).
---
To use the `parallelize` method in Spark 2.0 (or any other version), `sc` must be a `SparkContext` and not a `SparkConf`. The correct code should look like this:
```
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
val sparkConf = new SparkConf().setAppName("myname").setMaster("mast")
val sc = new SparkContext(sparkConf)
val data = Array(1, 2, 3, 4, 5)
val distData = sc.parallelize(data)
```
This will give you the desired result.
|
You should prefer to use `SparkSession`, as it is the entry point for Spark from version 2 onwards. You could try something like:
```
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder.
master("local")
.appName("spark session example")
.getOrCreate()
val sc = spark.sparkContext
val data = Array(1, 2, 3, 4, 5)
val distData = sc.parallelize(data)
```
[](https://i.stack.imgur.com/JXca7.png)
|
47,961,437
|
I'm using Jupyter Notebook to develop some Python. This is my first stab at logging errors and I'm having an issue where no errors are logged to my error file.
I'm using:
```
import logging
logger = logging.getLogger('error')
logger.propagate = False
hdlr = logging.FileHandler("error.log")
formatter = logging.Formatter('%(asctime)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)
```
to create the logger.
I'm then using a try/Except block which is being called on purpose (for testing) by using a column that doesn't exist in my database:
```
try:
some_bad_call_to_the_database('start',group_id)
except Exception as e:
logger.exception("an error I would like to log")
print traceback.format_exc(limit=1)
```
and the exception is called as can be seen from my output in my notebook:
```
Traceback (most recent call last):
File "<ipython-input-10-ef8532b8e6e0>", line 19, in <module>
some_bad_call_to_the_database('start',group_id)
InternalError: (1054, u"Unknown column 'start_times' in 'field list'")
```
However, error.log is not being written to. Any thoughts would be appreciated.
|
2017/12/24
|
[
"https://Stackoverflow.com/questions/47961437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1694657/"
] |
The issue is that you created `sc` as a `SparkConf`, not a `SparkContext` (the two names are easy to confuse).
---
To use the `parallelize` method in Spark 2.0 (or any other version), `sc` must be a `SparkContext` and not a `SparkConf`. The correct code should look like this:
```
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
val sparkConf = new SparkConf().setAppName("myname").setMaster("mast")
val sc = new SparkContext(sparkConf)
val data = Array(1, 2, 3, 4, 5)
val distData = sc.parallelize(data)
```
This will give you the desired result.
|
There was a problem with version `2.2.0` of Apache Spark. I replaced it with version `2.2.1`, which is the latest one, and I am able to get the `sc` and `spark` variables automatically when I start `spark-shell` via `cmd` on Windows 7. I hope it will help someone.
I executed the code below, which creates an RDD, and it works perfectly. No need to import any packages.
```
val dataOne=sc.parallelize(1 to 10)
dataOne.collect(); //Will print 1 to 10 numbers in array
```
|
47,961,437
|
I'm using Jupyter Notebook to develop some Python. This is my first stab at logging errors and I'm having an issue where no errors are logged to my error file.
I'm using:
```
import logging
logger = logging.getLogger('error')
logger.propagate = False
hdlr = logging.FileHandler("error.log")
formatter = logging.Formatter('%(asctime)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)
```
to create the logger.
I'm then using a try/Except block which is being called on purpose (for testing) by using a column that doesn't exist in my database:
```
try:
some_bad_call_to_the_database('start',group_id)
except Exception as e:
logger.exception("an error I would like to log")
print traceback.format_exc(limit=1)
```
and the exception is called as can be seen from my output in my notebook:
```
Traceback (most recent call last):
File "<ipython-input-10-ef8532b8e6e0>", line 19, in <module>
some_bad_call_to_the_database('start',group_id)
InternalError: (1054, u"Unknown column 'start_times' in 'field list'")
```
However, error.log is not being written to. Any thoughts would be appreciated.
|
2017/12/24
|
[
"https://Stackoverflow.com/questions/47961437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1694657/"
] |
The issue is that you created `sc` as a `SparkConf`, not a `SparkContext` (the two names are easy to confuse).
---
To use the `parallelize` method in Spark 2.0 (or any other version), `sc` must be a `SparkContext` and not a `SparkConf`. The correct code should look like this:
```
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
val sparkConf = new SparkConf().setAppName("myname").setMaster("mast")
val sc = new SparkContext(sparkConf)
val data = Array(1, 2, 3, 4, 5)
val distData = sc.parallelize(data)
```
This will give you the desired result.
|
Your code should look like this:
```
val conf = new SparkConf()
conf.setMaster("local[*]")
conf.setAppName("myname")
val sc = new SparkContext(conf)
```
NOTE: the master URL should be `local[*]`
|
47,961,437
|
I'm using Jupyter Notebook to develop some Python. This is my first stab at logging errors and I'm having an issue where no errors are logged to my error file.
I'm using:
```
import logging
logger = logging.getLogger('error')
logger.propagate = False
hdlr = logging.FileHandler("error.log")
formatter = logging.Formatter('%(asctime)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)
```
to create the logger.
I'm then using a try/Except block which is being called on purpose (for testing) by using a column that doesn't exist in my database:
```
try:
some_bad_call_to_the_database('start',group_id)
except Exception as e:
logger.exception("an error I would like to log")
print traceback.format_exc(limit=1)
```
and the exception is raised, as can be seen in my notebook output:
```
Traceback (most recent call last):
File "<ipython-input-10-ef8532b8e6e0>", line 19, in <module>
some_bad_call_to_the_database('start',group_id)
InternalError: (1054, u"Unknown column 'start_times' in 'field list'")
```
However, error.log is not being written to. Any thoughts would be appreciated.
|
2017/12/24
|
[
"https://Stackoverflow.com/questions/47961437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1694657/"
] |
If **spark-shell** doesn't show this line on start:
>
> Spark context available as 'sc' (master = local[\*], app id = local-XXX).
>
>
>
Run
```
val sc = SparkContext.getOrCreate()
```
|
You should prefer `SparkSession`, as it is the entry point for Spark from version 2 onward. You could try something like:
```
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder.
master("local")
.appName("spark session example")
.getOrCreate()
val sc = spark.sparkContext
val data = Array(1, 2, 3, 4, 5)
val distData = sc.parallelize(data)
```
[](https://i.stack.imgur.com/JXca7.png)
|
47,961,437
|
I'm using Jupyter Notebook to develop some Python. This is my first stab at logging errors and I'm having an issue where no errors are logged to my error file.
I'm using:
```
import logging
logger = logging.getLogger('error')
logger.propagate = False
hdlr = logging.FileHandler("error.log")
formatter = logging.Formatter('%(asctime)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)
```
to create the logger.
I'm then using a try/except block which is triggered on purpose (for testing) by using a column that doesn't exist in my database:
```
try:
some_bad_call_to_the_database('start',group_id)
except Exception as e:
logger.exception("an error I would like to log")
print traceback.format_exc(limit=1)
```
and the exception is raised, as can be seen in my notebook output:
```
Traceback (most recent call last):
File "<ipython-input-10-ef8532b8e6e0>", line 19, in <module>
some_bad_call_to_the_database('start',group_id)
InternalError: (1054, u"Unknown column 'start_times' in 'field list'")
```
However, error.log is not being written to. Any thoughts would be appreciated.
|
2017/12/24
|
[
"https://Stackoverflow.com/questions/47961437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1694657/"
] |
If **spark-shell** doesn't show this line on start:
>
> Spark context available as 'sc' (master = local[\*], app id = local-XXX).
>
>
>
Run
```
val sc = SparkContext.getOrCreate()
```
|
There is some problem with version `2.2.0` of Apache Spark. I replaced it with version `2.2.1`, which is the latest one, and now I get the `sc` and `spark` variables automatically when I start `spark-shell` via `cmd` on Windows 7. I hope it will help someone.
I executed the code below, which creates an RDD, and it works perfectly. No need to import any packages.
```
val dataOne=sc.parallelize(1 to 10)
dataOne.collect(); //Will print 1 to 10 numbers in array
```
|
47,961,437
|
I'm using Jupyter Notebook to develop some Python. This is my first stab at logging errors and I'm having an issue where no errors are logged to my error file.
I'm using:
```
import logging
logger = logging.getLogger('error')
logger.propagate = False
hdlr = logging.FileHandler("error.log")
formatter = logging.Formatter('%(asctime)s %(message)s')
hdlr.setFormatter(formatter)
logger.addHandler(hdlr)
logger.setLevel(logging.DEBUG)
```
to create the logger.
I'm then using a try/except block which is triggered on purpose (for testing) by using a column that doesn't exist in my database:
```
try:
some_bad_call_to_the_database('start',group_id)
except Exception as e:
logger.exception("an error I would like to log")
print traceback.format_exc(limit=1)
```
and the exception is raised, as can be seen in my notebook output:
```
Traceback (most recent call last):
File "<ipython-input-10-ef8532b8e6e0>", line 19, in <module>
some_bad_call_to_the_database('start',group_id)
InternalError: (1054, u"Unknown column 'start_times' in 'field list'")
```
However, error.log is not being written to. Any thoughts would be appreciated.
|
2017/12/24
|
[
"https://Stackoverflow.com/questions/47961437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1694657/"
] |
If **spark-shell** doesn't show this line on start:
>
> Spark context available as 'sc' (master = local[\*], app id = local-XXX).
>
>
>
Run
```
val sc = SparkContext.getOrCreate()
```
|
Your code should look like this:
```
val conf = new SparkConf()
conf.setMaster("local[*]")
conf.setAppName("myname")
val sc = new SparkContext(conf)
```
NOTE: the master URL should be `local[*]`
|
16,913,086
|
I want to run third part tool written in python on my ubuntu machine ([corgy tool](https://github.com/pkerpedjiev/corgy)).
However I don't know how to add additional modules to Python path.
```
cat doc/download.rst
There is currently no setup.py, so you need to manually add
the download directory to your PYTHON_PATH environment variable.
```
How can I add directory to PYTHON\_PATH?
**I have tried:**
`export PYTHON_PATH=/home/user/directory:$PYTHON_PATH && source .bashrc`
`export PATH=/home/user/directory:$PATH && source .bashrc`
`python
import sys
sys.path.append("/home/user/directory/")`
But when I try to run this tool I get:
```
Traceback (most recent call last):
File "examples/dotbracket_to_bulge_graph.py", line 4, in <module>
import corgy.graph.bulge_graph as cgb
ImportError: No module named corgy.graph.bulge_graph
```
|
2013/06/04
|
[
"https://Stackoverflow.com/questions/16913086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1286528/"
] |
Create a `.bash_profile` in your home directory. Then, add the line
```
PYTHONPATH=$PYTHONPATH:new_dir
EXPORT $PYTHONPATH
```
Or even better:
```
if [ -d "new_dir" ] ; then
PYTHONPATH="$PYTHONPATH:new_dir"
fi
EXPORT $PYTHONPATH
```
The `.bash_profile` properties are loaded every time you log in.
The `source` command is useful if you don't want to log in again.
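If you only need the extra directory for the current interpreter session (rather than permanently via `.bash_profile`), a minimal sketch of the runtime alternative is appending to `sys.path` before importing; the directory name here is purely illustrative:

```python
import sys

# Hypothetical directory containing the third-party package; adjust to your path.
extra_dir = "/home/user/directory"

# Appending to sys.path at runtime has the same effect as PYTHONPATH,
# but only for the current interpreter process.
if extra_dir not in sys.path:
    sys.path.append(extra_dir)

print(extra_dir in sys.path)  # → True
```

Note this must run before the `import` statement that needs the directory; it does not persist across interpreter restarts.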
|
[@fedorqui](https://stackoverflow.com/users/1983854/fedorqui-so-stop-harming)'s answer above was almost right for me, but there is at least one mistake (I am also unsure about the `export` statement being in all caps; I am a complete newbie).
There should not be a `$` sign preceding PYTHONPATH in the export statement, so the corrected options would be:
>
> Create a .bash\_profile in your home directory. Then, add the line
>
>
>
> ```
> PYTHONPATH=$PYTHONPATH:new_dir
> export PYTHONPATH
>
> ```
>
> Or even better:
>
>
>
> ```
> if [ -d "new_dir" ] ; then
> PYTHONPATH="$PYTHONPATH:new_dir"
> fi
> export PYTHONPATH
>
> ```
>
>
|
54,503,298
|
I have a list of list of lists (all of lists have same size) in python like this:
```
A = [[1,2,3,4],['a','b','c','d'] , [12,13,14,15]]
```
I want to remove some columns (i-th elements of all lists).
Is there any way that does this without `for` statements?
|
2019/02/03
|
[
"https://Stackoverflow.com/questions/54503298",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7789910/"
] |
As mentioned, you can't do this without a loop. However, using built-in functions, here's a functional approach that doesn't explicitly use any loop:
```
In [24]: from operator import itemgetter
In [25]: def remove_col(arr, ith):
...: itg = itemgetter(*filter((ith).__ne__, range(len(arr[0]))))
...: return list(map(list, map(itg, arr)))
...:
```
Demo:
```
In [26]: remove_col(A, 1)
Out[26]: [[1, 3, 4], ['a', 'c', 'd'], [12, 14, 15]]
In [27]: remove_col(A, 3)
Out[27]: [[1, 2, 3], ['a', 'b', 'c'], [12, 13, 14]]
```
Note that instead of `list(map(list, map(itg, arr)))` if you only return `map(itg, arr)` it will give you the expected result but as an iterator of iterators instead of list of lists. This will be a more optimized approach in terms of both memory and run-time in this case.
Also, using loops here's the way I'd do this:
```
In [31]: def remove_col(arr, ith):
...: return [[j for i,j in enumerate(sub) if i != ith] for sub in arr]
```
Surprisingly (not if you believe in the power of C :)) the functional approach is even faster for large arrays.
```
In [41]: arr = A * 10000
In [42]: %timeit remove_col_functional(arr, 2)
8.42 ms ± 37.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [43]: %timeit remove_col_list_com(arr, 2)
23.7 ms ± 165 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# And if in functional approach you just return map(itg, arr)
In [47]: %timeit remove_col_functional_iterator(arr, 2)
1.48 µs ± 4.71 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
|
You could easily use a [list comprehension](https://www.pythonforbeginners.com/basics/list-comprehensions-in-python) and [slices](https://www.pythoncentral.io/how-to-slice-listsarrays-and-tuples-in-python/):
```
A = [[1,2,3,4],['a','b','c','d'] , [12,13,14,15]]
k = 1
B = [l[:k]+l[k+1:] for l in A]
print(B) # >> returns [[1, 3, 4], ['a', 'c', 'd'], [12, 14, 15]]
```
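The same slicing idea extends to removing several columns at once; a small sketch using a set of column indices to drop (the `drop` variable is illustrative, not from the question):

```python
A = [[1, 2, 3, 4], ['a', 'b', 'c', 'd'], [12, 13, 14, 15]]
drop = {0, 2}  # column indices to remove

# Keep only the elements whose column index is not in `drop`.
B = [[x for i, x in enumerate(row) if i not in drop] for row in A]
print(B)  # → [[2, 4], ['b', 'd'], [13, 15]]
```

For a single column the slice form `l[:k] + l[k+1:]` above is simpler; the set-based filter only pays off when dropping multiple columns.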
|
54,503,298
|
I have a list of list of lists (all of lists have same size) in python like this:
```
A = [[1,2,3,4],['a','b','c','d'] , [12,13,14,15]]
```
I want to remove some columns (i-th elements of all lists).
Is there any way that does this without `for` statements?
|
2019/02/03
|
[
"https://Stackoverflow.com/questions/54503298",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7789910/"
] |
As mentioned, you can't do this without a loop. However, using built-in functions, here's a functional approach that doesn't explicitly use any loop:
```
In [24]: from operator import itemgetter
In [25]: def remove_col(arr, ith):
...: itg = itemgetter(*filter((ith).__ne__, range(len(arr[0]))))
...: return list(map(list, map(itg, arr)))
...:
```
Demo:
```
In [26]: remove_col(A, 1)
Out[26]: [[1, 3, 4], ['a', 'c', 'd'], [12, 14, 15]]
In [27]: remove_col(A, 3)
Out[27]: [[1, 2, 3], ['a', 'b', 'c'], [12, 13, 14]]
```
Note that instead of `list(map(list, map(itg, arr)))` if you only return `map(itg, arr)` it will give you the expected result but as an iterator of iterators instead of list of lists. This will be a more optimized approach in terms of both memory and run-time in this case.
Also, using loops here's the way I'd do this:
```
In [31]: def remove_col(arr, ith):
...: return [[j for i,j in enumerate(sub) if i != ith] for sub in arr]
```
Surprisingly (not if you believe in the power of C :)) the functional approach is even faster for large arrays.
```
In [41]: arr = A * 10000
In [42]: %timeit remove_col_functional(arr, 2)
8.42 ms ± 37.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [43]: %timeit remove_col_list_com(arr, 2)
23.7 ms ± 165 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# And if in functional approach you just return map(itg, arr)
In [47]: %timeit remove_col_functional_iterator(arr, 2)
1.48 µs ± 4.71 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
|
I think you can do this without `for` if you are proficient with `zip` (it's my favorite "hack"):
```
A = [[1, 2, 3, 4], ['a', 'b', 'c', 'd'], [12, 13, 14, 15]]
B = list(zip(*A))
B.pop(i)
C = list(map(list, zip(*B)))
```
Result (i = 2):
```
[[1, 2, 4], ['a', 'b', 'd'], [12, 13, 15]]
```
---
Of course, `map` is an alternative to list comprehension:
```
B = list(map(lambda l: l[:i] + l[i + 1:], A))
```
|
54,503,298
|
I have a list of list of lists (all of lists have same size) in python like this:
```
A = [[1,2,3,4],['a','b','c','d'] , [12,13,14,15]]
```
I want to remove some columns (i-th elements of all lists).
Is there any way that does this without `for` statements?
|
2019/02/03
|
[
"https://Stackoverflow.com/questions/54503298",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7789910/"
] |
As mentioned, you can't do this without a loop. However, using built-in functions, here's a functional approach that doesn't explicitly use any loop:
```
In [24]: from operator import itemgetter
In [25]: def remove_col(arr, ith):
...: itg = itemgetter(*filter((ith).__ne__, range(len(arr[0]))))
...: return list(map(list, map(itg, arr)))
...:
```
Demo:
```
In [26]: remove_col(A, 1)
Out[26]: [[1, 3, 4], ['a', 'c', 'd'], [12, 14, 15]]
In [27]: remove_col(A, 3)
Out[27]: [[1, 2, 3], ['a', 'b', 'c'], [12, 13, 14]]
```
Note that instead of `list(map(list, map(itg, arr)))` if you only return `map(itg, arr)` it will give you the expected result but as an iterator of iterators instead of list of lists. This will be a more optimized approach in terms of both memory and run-time in this case.
Also, using loops here's the way I'd do this:
```
In [31]: def remove_col(arr, ith):
...: return [[j for i,j in enumerate(sub) if i != ith] for sub in arr]
```
Surprisingly (not if you believe in the power of C :)) the functional approach is even faster for large arrays.
```
In [41]: arr = A * 10000
In [42]: %timeit remove_col_functional(arr, 2)
8.42 ms ± 37.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [43]: %timeit remove_col_list_com(arr, 2)
23.7 ms ± 165 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# And if in functional approach you just return map(itg, arr)
In [47]: %timeit remove_col_functional_iterator(arr, 2)
1.48 µs ± 4.71 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
|
`numpy` is able to remove entire columns:
```
import numpy
A = [[1,2,3,4],['a','b','c','d'] , [12,13,14,15]]
na = numpy.array(A)
print(na[:,:-1]) # remove last column
print(na[:,1:]) # remove first column
print(numpy.concatenate((na[:,:2],na[:,3:]),axis=1)) # build from 2 slices: remove third column
```
result (for simplicity's sake all data has been converted to strings; no `dtype` involved):
```
[['1' '2' '3']
['a' 'b' 'c']
['12' '13' '14']]
[['2' '3' '4']
['b' 'c' 'd']
['13' '14' '15']]
[['1' '2' '4']
['a' 'b' 'd']
['12' '13' '15']]
```
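As a side note, `numpy.delete` performs the "remove the i-th column" step in a single call, which may be simpler than concatenating two slices; a sketch on the same string-coerced array:

```python
import numpy as np

A = [[1, 2, 3, 4], ['a', 'b', 'c', 'd'], [12, 13, 14, 15]]
na = np.array(A)  # mixed types are coerced to strings, as above

# Remove the third column (index 2) along axis=1 (columns).
nb = np.delete(na, 2, axis=1)
print(nb.tolist())  # → [['1', '2', '4'], ['a', 'b', 'd'], ['12', '13', '15']]
```

Like the slice-concatenation version, this returns a new array and leaves `na` unchanged.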
|
54,503,298
|
I have a list of list of lists (all of lists have same size) in python like this:
```
A = [[1,2,3,4],['a','b','c','d'] , [12,13,14,15]]
```
I want to remove some columns (i-th elements of all lists).
Is there any way that does this without `for` statements?
|
2019/02/03
|
[
"https://Stackoverflow.com/questions/54503298",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7789910/"
] |
As mentioned, you can't do this without a loop. However, using built-in functions, here's a functional approach that doesn't explicitly use any loop:
```
In [24]: from operator import itemgetter
In [25]: def remove_col(arr, ith):
...: itg = itemgetter(*filter((ith).__ne__, range(len(arr[0]))))
...: return list(map(list, map(itg, arr)))
...:
```
Demo:
```
In [26]: remove_col(A, 1)
Out[26]: [[1, 3, 4], ['a', 'c', 'd'], [12, 14, 15]]
In [27]: remove_col(A, 3)
Out[27]: [[1, 2, 3], ['a', 'b', 'c'], [12, 13, 14]]
```
Note that instead of `list(map(list, map(itg, arr)))` if you only return `map(itg, arr)` it will give you the expected result but as an iterator of iterators instead of list of lists. This will be a more optimized approach in terms of both memory and run-time in this case.
Also, using loops here's the way I'd do this:
```
In [31]: def remove_col(arr, ith):
...: return [[j for i,j in enumerate(sub) if i != ith] for sub in arr]
```
Surprisingly (not if you believe in the power of C :)) the functional approach is even faster for large arrays.
```
In [41]: arr = A * 10000
In [42]: %timeit remove_col_functional(arr, 2)
8.42 ms ± 37.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [43]: %timeit remove_col_list_com(arr, 2)
23.7 ms ± 165 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# And if in functional approach you just return map(itg, arr)
In [47]: %timeit remove_col_functional_iterator(arr, 2)
1.48 µs ± 4.71 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
|
Another variant using a list-comprehension, with `enumerate`:
```
>>> A = [[1,2,3,4],['a','b','c','d'] , [12,13,14,15]]
>>> k = 2
>>> [[x for i, x in enumerate(a) if i != k] for a in A]
[[1, 2, 4], ['a', 'b', 'd'], [12, 13, 15]]
```
And, yes, this has the word `for` in it (twice, even!), but performance should not differ from any of the other approaches (`numpy` might be faster, though).
|
60,171,622
|
I'm working with large data sets. I'm trying to use the NumPy library where I can, or Python features (e.g. list comprehensions), to process the data sets efficiently.
First I find the relevant indexes:
```
dt_temp_idx = np.where(dt_diff > dt_temp_th)
```
Then I want to create a mask containing for each index a sequence starting from the index to a stop value, I tried:
```
mask_dt_temp = [np.arange(idx, idx+dt_temp_step) for idx in dt_temp_idx]
```
and:
```
mask_dt_temp = [idxs for idx in dt_temp_idx for idxs in np.arange(idx, idx+dt_temp_step)]
```
but it gives me the exception:
```
The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
Example input:
```
indexes = [0, 100, 1000]
```
Example output with stop values after 10 integers from each indexes:
```
list = [0, 1, ..., 10, 100, 101, ..., 110, 1000, 1001, ..., 1010]
```
1) How can I solve it?
2) Is this the best practice for it?
|
2020/02/11
|
[
"https://Stackoverflow.com/questions/60171622",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5080562/"
] |
Masks (boolean arrays) are efficient, both in memory and in run-time. We will make use of [`SciPy's binary-dilation`](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.morphology.binary_dilation.html) to extend the thresholded mask.
Here's a step-by-step setup and solution run:
```
In [42]: # Random data setup
...: np.random.seed(0)
...: dt_diff = np.random.rand(20)
...: dt_temp_th = 0.9
In [43]: # Get mask of threshold crossings
...: mask = dt_diff > dt_temp_th
In [44]: mask
Out[44]:
array([False, False, False, False, False, False, False, False, True,
False, False, False, False, True, False, False, False, False,
False, False])
In [45]: W = 3 # window size for extension (edit it according to your use-case)
In [46]: from scipy.ndimage.morphology import binary_dilation
In [47]: extm = binary_dilation(mask, np.ones(W, dtype=bool), origin=-(W//2))
In [48]: mask
Out[48]:
array([False, False, False, False, False, False, False, False, True,
False, False, False, False, True, False, False, False, False,
False, False])
In [49]: extm
Out[49]:
array([False, False, False, False, False, False, False, False, True,
True, True, False, False, True, True, True, False, False,
False, False])
```
Compare `mask` against `extm` to see how the extension takes place.
As we can see, the thresholded `mask` is extended by the window size `W` on the right side, giving the expected output mask `extm`. This can be used to mask out those elements of the input array: `dt_diff[~extm]` simulates deleting/dropping them via [`boolean-indexing`](https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#boolean-array-indexing), or inversely `dt_diff[extm]` selects them.
### Alternatives with NumPy based functions
**Alternative #1**
```
extm = np.convolve(mask, np.ones(W, dtype=int))[:len(dt_diff)]>0
```
**Alternative #2**
```
idx = np.flatnonzero(mask)
ext_idx = (idx[:,None]+ np.arange(W)).ravel()
ext_mask = np.ones(len(dt_diff), dtype=bool)
ext_mask[ext_idx[ext_idx<len(dt_diff)]] = False
# Get filtered o/p
out = dt_diff[ext_mask]
```
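For the concrete example in the question (indexes `[0, 100, 1000]`, each expanded to a run of 11 integers), the broadcasting idea from Alternative #2 can be sketched on its own:

```python
import numpy as np

indexes = np.array([0, 100, 1000])
step = 11  # each index expands to index .. index+10

# Broadcasting: a column vector of starts plus a row vector of offsets
# gives a (3, 11) matrix of indices; ravel() flattens it to one array.
out = (indexes[:, None] + np.arange(step)).ravel()
print(out[:3], out[-3:])  # first and last few expanded indices
```

This assumes the runs do not overlap; with overlapping windows you would deduplicate afterwards (e.g. with `np.unique`).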
|
`dt_temp_idx` is a NumPy array, but it is still a Python iterable, so you can use a good old list comprehension:
```
lst = [i for j in dt_temp_idx for i in range(j, j+11)]
```
If you want to cope with sequence overlaps and convert the result back to an `np.array`, deduplicate with a set first (sorting restores order, since sets are unordered):
```
result = np.array(sorted({i for j in dt_temp_idx for i in range(j, j+11)}))
```
But beware: a set guarantees no repetitions, but it can be more expensive than a simple list.
|
5,077,625
|
I'm trying to write a short program that will read in the contents of e-mails within a folder on my exchange/Outlook profile so I can manipulate the data. However I'm having a problem finding much information about python and exchange/Outlook integration. A lot of stuff is either very old/has no docs/not explained. I've tried several snippets but seem to be getting the same errors. I've tried Tim Golden's code:
```
import win32com.client
session = win32com.client.gencache.EnsureDispatch ("MAPI.Session")
#
# Leave blank to be prompted for a session, or use
# your own profile name if not "Outlook". It is also
# possible to pull the default profile from the registry.
#
session.Logon ("Outlook")
messages = session.Inbox.Messages
#
# Although the inbox_messages collection can be accessed
# via getitem-style calls (inbox_messages[1] etc.) this
# is the recommended approach from Microsoft since the
# Inbox can mutate while you're iterating.
#
message = messages.GetFirst ()
while message:
print message.Subject
message = messages.GetNext ()
```
However I get an error:
```
pywintypes.com_error: (-2147221005, 'Invalid class string', None, None)
```
Not sure what my profile name is so I tried with:
```
session.Logon()
```
to be prompted but that didn't work either (same error). Also tried both with Outlook open and closed and neither changed anything.
|
2011/02/22
|
[
"https://Stackoverflow.com/questions/5077625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/559504/"
] |
I had the same problem you did - didn't find much that worked. The following code, however, works like a charm.
```
import win32com.client
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.GetDefaultFolder(6) # "6" refers to the index of a folder - in this case,
# the inbox. You can change that number to reference
# any other folder
messages = inbox.Items
message = messages.GetLast()
body_content = message.body
print body_content
```
|
I have created my own iterator to iterate over Outlook objects from Python. The issue is that Python tries to iterate starting at index [0], but Outlook expects the first item at index [1]. To make it a bit more Ruby-simple, below is a helper class Oli with the following
methods:
.items() - yields a tuple (index, Item)
.prop() - helps introspect an Outlook object by exposing its available properties (methods and attributes)
```
from win32com.client import constants
from win32com.client.gencache import EnsureDispatch as Dispatch
outlook = Dispatch("Outlook.Application")
mapi = outlook.GetNamespace("MAPI")
class Oli():
def __init__(self, outlook_object):
self._obj = outlook_object
def items(self):
array_size = self._obj.Count
for item_index in xrange(1,array_size+1):
yield (item_index, self._obj[item_index])
def prop(self):
return sorted( self._obj._prop_map_get_.keys() )
for inx, folder in Oli(mapi.Folders).items():
# iterate all Outlook folders (top level)
print "-"*70
print folder.Name
for inx,subfolder in Oli(folder.Folders).items():
print "(%i)" % inx, subfolder.Name,"=> ", subfolder
```
|
5,077,625
|
I'm trying to write a short program that will read in the contents of e-mails within a folder on my exchange/Outlook profile so I can manipulate the data. However I'm having a problem finding much information about python and exchange/Outlook integration. A lot of stuff is either very old/has no docs/not explained. I've tried several snippets but seem to be getting the same errors. I've tried Tim Golden's code:
```
import win32com.client
session = win32com.client.gencache.EnsureDispatch ("MAPI.Session")
#
# Leave blank to be prompted for a session, or use
# your own profile name if not "Outlook". It is also
# possible to pull the default profile from the registry.
#
session.Logon ("Outlook")
messages = session.Inbox.Messages
#
# Although the inbox_messages collection can be accessed
# via getitem-style calls (inbox_messages[1] etc.) this
# is the recommended approach from Microsoft since the
# Inbox can mutate while you're iterating.
#
message = messages.GetFirst ()
while message:
print message.Subject
message = messages.GetNext ()
```
However I get an error:
```
pywintypes.com_error: (-2147221005, 'Invalid class string', None, None)
```
Not sure what my profile name is so I tried with:
```
session.Logon()
```
to be prompted but that didn't work either (same error). Also tried both with Outlook open and closed and neither changed anything.
|
2011/02/22
|
[
"https://Stackoverflow.com/questions/5077625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/559504/"
] |
I had the same problem you did - didn't find much that worked. The following code, however, works like a charm.
```
import win32com.client
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.GetDefaultFolder(6) # "6" refers to the index of a folder - in this case,
# the inbox. You can change that number to reference
# any other folder
messages = inbox.Items
message = messages.GetLast()
body_content = message.body
print body_content
```
|
I had the same issue. Combining various approaches from the internet (and from above), I came up with the following (checkEmails.py):
```
class CheckMailer:
def __init__(self, filename="LOG1.txt", mailbox="Mailbox - Another User Mailbox", folderindex=3):
self.f = FileWriter(filename)
self.outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI").Folders(mailbox)
self.inbox = self.outlook.Folders(folderindex)
def check(self):
#===============================================================================
# for i in xrange(1,100): #Uncomment this section if index 3 does not work for you
# try:
# self.inbox = self.outlook.Folders(i) # "6" refers to the index of inbox for Default User Mailbox
# print "%i %s" % (i,self.inbox) # "3" refers to the index of inbox for Another user's mailbox
# except:
# print "%i does not work"%i
#===============================================================================
self.f.pl(time.strftime("%H:%M:%S"))
tot = 0
messages = self.inbox.Items
message = messages.GetFirst()
while message:
self.f.pl (message.Subject)
message = messages.GetNext()
tot += 1
self.f.pl("Total Messages found: %i" % tot)
self.f.pl("-" * 80)
self.f.flush()
if __name__ == "__main__":
mail = CheckMailer()
for i in xrange(320): # this is 10.6 hours approximately
mail.check()
time.sleep(120.00)
```
For consistency, I also include the code for the FileWriter class (found in FileWrapper.py). I needed it because piping UTF-8 to a file on Windows did not work.
```
class FileWriter(object):
'''
convenient file wrapper for writing to files
'''
def __init__(self, filename):
'''
Constructor
'''
self.file = open(filename, "w")
def pl(self, a_string):
str_uni = a_string.encode('utf-8')
self.file.write(str_uni)
self.file.write("\n")
def flush(self):
self.file.flush()
```
|
5,077,625
|
I'm trying to write a short program that will read in the contents of e-mails within a folder on my exchange/Outlook profile so I can manipulate the data. However I'm having a problem finding much information about python and exchange/Outlook integration. A lot of stuff is either very old/has no docs/not explained. I've tried several snippets but seem to be getting the same errors. I've tried Tim Golden's code:
```
import win32com.client
session = win32com.client.gencache.EnsureDispatch ("MAPI.Session")
#
# Leave blank to be prompted for a session, or use
# your own profile name if not "Outlook". It is also
# possible to pull the default profile from the registry.
#
session.Logon ("Outlook")
messages = session.Inbox.Messages
#
# Although the inbox_messages collection can be accessed
# via getitem-style calls (inbox_messages[1] etc.) this
# is the recommended approach from Microsoft since the
# Inbox can mutate while you're iterating.
#
message = messages.GetFirst ()
while message:
print message.Subject
message = messages.GetNext ()
```
However I get an error:
```
pywintypes.com_error: (-2147221005, 'Invalid class string', None, None)
```
Not sure what my profile name is so I tried with:
```
session.Logon()
```
to be prompted but that didn't work either (same error). Also tried both with Outlook open and closed and neither changed anything.
|
2011/02/22
|
[
"https://Stackoverflow.com/questions/5077625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/559504/"
] |
I had the same problem you did - didn't find much that worked. The following code, however, works like a charm.
```
import win32com.client
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.GetDefaultFolder(6) # "6" refers to the index of a folder - in this case,
# the inbox. You can change that number to reference
# any other folder
messages = inbox.Items
message = messages.GetLast()
body_content = message.body
print body_content
```
|
Checking mail using Python with **MAPI** is easier:
```
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
folder = outlook.Folders[5]
Subfldr = folder.Folders[5]
messages_REACH = Subfldr.Items
message = messages_REACH.GetFirst()
```
This fetches the first (oldest) mail in the mailbox, or in any subfolder. Note that you need to check the mailbox number and position; with that analysis you can inspect each mailbox and its subfolders.
Similarly, the code below shows how to get the last (most recent) mail instead:
```
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
folder = outlook.Folders[5]
Subfldr = folder.Folders[5]
messages_REACH = Subfldr.Items
message = messages_REACH.GetLast()
```
With this you can check all your mailboxes and their subfolders.
|
5,077,625
|
I'm trying to write a short program that will read in the contents of e-mails within a folder on my exchange/Outlook profile so I can manipulate the data. However I'm having a problem finding much information about python and exchange/Outlook integration. A lot of stuff is either very old/has no docs/not explained. I've tried several snippets but seem to be getting the same errors. I've tried Tim Golden's code:
```
import win32com.client
session = win32com.client.gencache.EnsureDispatch ("MAPI.Session")
#
# Leave blank to be prompted for a session, or use
# your own profile name if not "Outlook". It is also
# possible to pull the default profile from the registry.
#
session.Logon ("Outlook")
messages = session.Inbox.Messages
#
# Although the inbox_messages collection can be accessed
# via getitem-style calls (inbox_messages[1] etc.) this
# is the recommended approach from Microsoft since the
# Inbox can mutate while you're iterating.
#
message = messages.GetFirst ()
while message:
print message.Subject
message = messages.GetNext ()
```
However I get an error:
```
pywintypes.com_error: (-2147221005, 'Invalid class string', None, None)
```
Not sure what my profile name is so I tried with:
```
session.Logon()
```
to be prompted but that didn't work either (same error). Also tried both with Outlook open and closed and neither changed anything.
|
2011/02/22
|
[
"https://Stackoverflow.com/questions/5077625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/559504/"
] |
I have created my own iterator to iterate over Outlook objects from Python. The issue is that Python tries to iterate starting at index [0], but Outlook expects the first item at index [1]. To make it a bit more Ruby-simple, below is a helper class Oli with the following
methods:
.items() - yields a tuple (index, Item)
.prop() - helps introspect an Outlook object by exposing its available properties (methods and attributes)
```
from win32com.client import constants
from win32com.client.gencache import EnsureDispatch as Dispatch
outlook = Dispatch("Outlook.Application")
mapi = outlook.GetNamespace("MAPI")
class Oli():
def __init__(self, outlook_object):
self._obj = outlook_object
def items(self):
array_size = self._obj.Count
for item_index in xrange(1,array_size+1):
yield (item_index, self._obj[item_index])
def prop(self):
return sorted( self._obj._prop_map_get_.keys() )
for inx, folder in Oli(mapi.Folders).items():
# iterate all Outlook folders (top level)
print "-"*70
print folder.Name
for inx,subfolder in Oli(folder.Folders).items():
print "(%i)" % inx, subfolder.Name,"=> ", subfolder
```
|
I had the same issue. Combining various approaches from the internet (and above), I came up with the following (checkEmails.py):
```
class CheckMailer:
def __init__(self, filename="LOG1.txt", mailbox="Mailbox - Another User Mailbox", folderindex=3):
self.f = FileWriter(filename)
self.outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI").Folders(mailbox)
self.inbox = self.outlook.Folders(folderindex)
def check(self):
#===============================================================================
# for i in xrange(1,100): #Uncomment this section if index 3 does not work for you
# try:
# self.inbox = self.outlook.Folders(i) # "6" refers to the index of inbox for Default User Mailbox
# print "%i %s" % (i,self.inbox) # "3" refers to the index of inbox for Another user's mailbox
# except:
# print "%i does not work"%i
#===============================================================================
self.f.pl(time.strftime("%H:%M:%S"))
tot = 0
messages = self.inbox.Items
message = messages.GetFirst()
while message:
self.f.pl (message.Subject)
message = messages.GetNext()
tot += 1
self.f.pl("Total Messages found: %i" % tot)
self.f.pl("-" * 80)
self.f.flush()
if __name__ == "__main__":
mail = CheckMailer()
for i in xrange(320): # this is 10.6 hours approximately
mail.check()
time.sleep(120.00)
```
For consistency I also include the code for the FileWriter class (found in FileWrapper.py). I needed it because piping UTF-8 to a file on Windows did not work.
```
class FileWriter(object):
'''
convenient file wrapper for writing to files
'''
def __init__(self, filename):
'''
Constructor
'''
self.file = open(filename, "w")
def pl(self, a_string):
str_uni = a_string.encode('utf-8')
self.file.write(str_uni)
self.file.write("\n")
def flush(self):
self.file.flush()
```
|
5,077,625
|
I'm trying to write a short program that will read in the contents of e-mails within a folder of my Exchange/Outlook profile so I can manipulate the data. However, I'm having trouble finding much information about Python and Exchange/Outlook integration; a lot of the material out there is either very old, undocumented, or poorly explained. I've tried several snippets but seem to get the same errors. I've tried Tim Golden's code:
```
import win32com.client
session = win32com.client.gencache.EnsureDispatch ("MAPI.Session")
#
# Leave blank to be prompted for a session, or use
# your own profile name if not "Outlook". It is also
# possible to pull the default profile from the registry.
#
session.Logon ("Outlook")
messages = session.Inbox.Messages
#
# Although the inbox_messages collection can be accessed
# via getitem-style calls (inbox_messages[1] etc.) this
# is the recommended approach from Microsoft since the
# Inbox can mutate while you're iterating.
#
message = messages.GetFirst ()
while message:
print message.Subject
message = messages.GetNext ()
```
However I get an error:
```
pywintypes.com_error: (-2147221005, 'Invalid class string', None, None)
```
Not sure what my profile name is so I tried with:
```
session.Logon()
```
to be prompted, but that didn't work either (same error). I also tried with Outlook open and closed, and neither changed anything.
|
2011/02/22
|
[
"https://Stackoverflow.com/questions/5077625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/559504/"
] |
I have created my own iterator to iterate over Outlook objects via Python. The issue is that Python iterates starting at index [0], but Outlook expects the first item at index [1]. To make it Ruby-simple, below is a helper class Oli with the following
methods:
.items() - yields a tuple (index, item)
.prop() - helps introspect an Outlook object by exposing its available properties (methods and attributes)
```
from win32com.client import constants
from win32com.client.gencache import EnsureDispatch as Dispatch
outlook = Dispatch("Outlook.Application")
mapi = outlook.GetNamespace("MAPI")
class Oli():
def __init__(self, outlook_object):
self._obj = outlook_object
def items(self):
array_size = self._obj.Count
for item_index in xrange(1,array_size+1):
yield (item_index, self._obj[item_index])
def prop(self):
return sorted( self._obj._prop_map_get_.keys() )
for inx, folder in Oli(mapi.Folders).items():
# iterate all Outlook folders (top level)
print "-"*70
print folder.Name
for inx,subfolder in Oli(folder.Folders).items():
print "(%i)" % inx, subfolder.Name,"=> ", subfolder
```
|
Sorry for my bad English.
Checking mail using Python with **MAPI** is easier:
```
outlook =win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
folder = outlook.Folders[5]
Subfldr = folder.Folders[5]
messages_REACH = Subfldr.Items
message = messages_REACH.GetFirst()
```
This retrieves the very first mail in the mailbox, or in any subfolder. Note that we need to check the mailbox number and position; with that analysis we can inspect each mailbox and its subfolders.
Similarly, the code below shows how to see the last (most recent) mails:
```
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
folder = outlook.Folders[5]
Subfldr = folder.Folders[5]
messages_REACH = Subfldr.Items
message = messages_REACH.GetLast()
```
With this we get the most recent email in the mailbox. Following the code above, we can check all our mailboxes and their subfolders.
|
32,150,849
|
I'm writing a simple Flask app, with the sole purpose of learning Python and MongoDB.
I've managed to reach the point where all the collections are defined, and CRUD operations work in general. Now, one thing that I really want to understand is how to refresh the collection after updating its structure. For example, say that I have the following `model`:
user.py
=======
```
class User(db.Document, UserMixin):
email = db.StringField(required=True, unique=True)
password = db.StringField(required=True)
active = db.BooleanField()
first_name = db.StringField(max_length=64, required=True)
last_name = db.StringField(max_length=64, required=True)
registered_at = db.DateTimeField(default=datetime.datetime.utcnow())
confirmed = db.BooleanField()
confirmed_at = db.DateTimeField()
last_login_at = db.DateTimeField()
current_login_at = db.DateTimeField()
last_login_ip = db.StringField(max_length=45)
current_login_ip = db.StringField(max_length=45)
login_count = db.IntField()
companies = db.ListField(db.ReferenceField('Company'), default=[])
roles = db.ListField(db.ReferenceField(Role), default=[])
meta = {
'indexes': [
{'fields': ['email'], 'unique': True}
]
}
```
Now, I already have entries in my `user` collection, but I want to change `companies` to:
```
company = db.ReferenceField('Company')
```
How can I refresh the collection's structure, without having to bring the whole database down?
----------------------------------------------------------------------------------------------
I do have a `manage.py` script that helps me and also provides a shell:
```
#!/usr/bin/python
from flask.ext.script import Manager
from flask.ext.script.commands import Shell
from app import factory
app = factory.create_app()
manager = Manager(app)
manager.add_command("shell", Shell(use_ipython=True))
# manager.add_command('run_tests', RunTests())
if __name__ == "__main__":
manager.run()
```
and I have tried a couple of commands, based on information I could find and my basic knowledge:
```
>>> from app.models import db, User
>>> import mongoengine
>>> mongoengine.Document(User)
field = iter(self._fields_ordered)
AttributeError: 'Document' object has no attribute '_fields_ordered'
>>> mongoengine.Document(User).modify() # well, same result as above
```
Any pointers on how to achieve this?
Update
------
I am asking all of this because I have updated my `user.py` to match my new requirements, but any time I interact with the db itself, since the collection's structure was not refreshed, I get the following error:
>
> FieldDoesNotExist: The field 'companies' does not exist on the
> document 'User', referer: <http://local.faqcolab.com/company>
>
>
>
|
2015/08/21
|
[
"https://Stackoverflow.com/questions/32150849",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971392/"
] |
The solution is easier than I expected:
```
db.getCollection('user').update(
// query
{},
// update
{
$rename: {
'companies': 'company'
}
},
// options
{
"multi" : true, // update all documents
"upsert" : false // insert a new document, if no existing document match the query
}
);
```
Explanation for each of the `{}`:
* First is empty because I want to update all documents in `user` collection.
* Second contains [`$rename`](http://docs.mongodb.org/manual/reference/operator/update/rename/) which is the invoking action to rename the fields I want.
* Last contains additional settings for the query to be executed.
|
>
> I have updated my `user.py` to match my new requirements, but any time I interact with the db itself, since the collection's structure was not refreshed, I get the following error
>
>
>
MongoDB does not have a "table structure" like relational databases do. After a document has been inserted, you can't change its schema by changing the document model.
I don't want to sound like I'm telling you that the answer is to use different tools, but seeing things like `db.ListField(db.ReferenceField('Company'))` makes me think you'd be much better off with a relational database (Postgres is well supported in the Flask ecosystem).
Mongo works best for storing schema-less documents (you don't know beforehand how your data is structured, or it varies significantly between documents). Unless you have data like that, it's worth looking at other options. Especially since you're just getting started with Python and Flask, there's no point in making things harder than they are.
|
66,684,265
|
The case is if I want to reverse select a python list to `n` like:
```
n = 3
l = [1,2,3,4,5,6]
s = l[5:n:-1] # s is [6, 5]
```
OK, it works, but what value of `n` selects the whole list?
Let's look at this example; what I expect for the first line is `[5, 4, 3, 2, 1]`:
```
[40]: for i in range(-1, 5):
...: print(l[4:i:-1])
...:
[]
[5, 4, 3, 2]
[5, 4, 3]
[5, 4]
[5]
[]
```
If the stop bound n is set to 0, the result misses the element at index 0; but if n is `-1`, the result is empty because `-1` means "the last one".
The only way I can do it is:
```
if n < 0:
s = l[5::-1]
else:
s = l[5:n:-1]
```
a bit confusing.
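For what it's worth, one way to drop the branch: a stop of `None` tells a reversed slice to run all the way to the first element, so mapping a negative `n` to `None` covers both cases (a sketch; `rev_slice` is a made-up helper name):

```python
l = [1, 2, 3, 4, 5, 6]

def rev_slice(lst, start, n):
    # A stop of None makes the reversed slice run to the beginning,
    # so a negative n no longer means "count from the end".
    return lst[start:(n if n >= 0 else None):-1]

print(rev_slice(l, 5, 3))   # [6, 5]
print(rev_slice(l, 4, 1))   # [5, 4, 3]
print(rev_slice(l, 4, -1))  # [5, 4, 3, 2, 1]
```

`None` is exactly what the bare `l[5::-1]` form passes for the stop bound under the hood, so the two spellings behave identically.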
|
2021/03/18
|
[
"https://Stackoverflow.com/questions/66684265",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2955827/"
] |
I was led to the answer by <https://github.com/JeffreyWay/laravel-mix/issues/2896>;
they seem to have upgraded the syntax. The documentation can be found here:
<https://github.com/JeffreyWay/laravel-mix/blob/467f0c9b01b7da71c519619ba8b310422321e0d6/UPGRADE.md#vue-configuration>
|
When I tried your solution it did not work.
Here is how I managed to fix the issue, using the new/renamed property `additionalData` (in previous versions this property was `data` or `prependData`).
```
mix.webpackConfig({
module: {
rules: [
{
test: /\.scss$/,
use: [
{
loader: "sass-loader",
options: {
additionalData: `@import "@/_variables.scss";
@import "@/_mixins.scss";
@import "@/_extends.scss";
@import "@/_general.scss";`
},
},
],
}
]
},
resolve: {
alias: {
'@': path.resolve('resources/sass'),
'~': path.resolve('/resources/js'),
'Vue': 'vue/dist/vue.esm-bundler.js',
}
},
output: {
chunkFilename: '[name].js?id=[chunkhash]',
},
});
mix.js('resources/js/app.js', 'public/js').vue()
.sass('resources/sass/app.scss', 'public/css')
.copyDirectory('resources/images', 'public/images');
if (mix.inProduction()) {
mix.version();
}
```
Note that the alias for the path 'resources/sass' is "@".
I hope this saved someone some time :)
|
67,792,538
|
I'm writing a Python script which reads emails from Outlook and then extracts the body.
The problem is that when it reads a reply, the body contains the previous emails in the thread.
Is there a way to avoid that and extract only the body of the current email?
This is part of my code:
```
import requests
import json
import base64
import win32com.client

outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
folder = outlook.Folders.Item("UGAP-AMS-L2")
inbox = folder.Folders.Item("Inbox")
mails = inbox.Items
mails.Sort("[ReceivedTime]", False)
for mail in mails:
    if mail.UnRead == True:
        print(mail.Body)
```
This is what I get:
*-Email body of the current email-*
*De : "tracker@gmail.fr" tracker@gmail.fr*
*Date : vendredi 21 mai 2021 à 08:44*
*À : Me Me@outlook.com*
*Objet : object*
*-body of previous email-*
|
2021/06/01
|
[
"https://Stackoverflow.com/questions/67792538",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15268168/"
] |
Try this solution that uses `zoo` (and `dplyr`, which I'm inferring you're already using):
```r
library(dplyr)
eg <- expand.grid(Sample.Type = unique(dat$Sample.Type),
date = seq(min(dat$date), max(dat$date), by = "day"),
stringsAsFactors = FALSE)
dat %>%
mutate(a=TRUE) %>%
full_join(eg, by = c("Sample.Type", "date")) %>%
mutate(a=!is.na(a)) %>%
arrange(date) %>%
group_by(Sample.Type) %>%
mutate(last7 = zoo::rollapplyr(a, 7, any, partial = TRUE)) %>%
select(-a) %>%
ungroup() %>%
print(n=99)
# # A tibble: 29 x 3
# Sample.Type date last7
# <chr> <date> <lgl>
# 1 A 2020-10-05 TRUE
# 2 B 2020-10-05 TRUE
# 3 A 2020-10-06 TRUE
# 4 B 2020-10-06 TRUE
# 5 B 2020-10-06 TRUE
# 6 B 2020-10-06 TRUE
# 7 A 2020-10-07 TRUE
# 8 B 2020-10-07 TRUE
# 9 A 2020-10-08 TRUE
# 10 B 2020-10-08 TRUE
# 11 A 2020-10-09 TRUE
# 12 B 2020-10-09 TRUE
# 13 A 2020-10-10 TRUE
# 14 B 2020-10-10 TRUE
# 15 A 2020-10-11 TRUE
# 16 A 2020-10-11 TRUE
# 17 B 2020-10-11 TRUE
# 18 A 2020-10-12 TRUE
# 19 B 2020-10-12 TRUE
# 20 A 2020-10-13 TRUE
# 21 B 2020-10-13 FALSE
# 22 A 2020-10-14 TRUE
# 23 B 2020-10-14 FALSE
# 24 A 2020-10-15 TRUE
# 25 B 2020-10-15 FALSE
# 26 A 2020-10-16 TRUE
# 27 B 2020-10-16 FALSE
# 28 A 2020-10-17 TRUE
# 29 B 2020-10-17 FALSE
```
Data
```r
dat <- structure(list(Sample.Type = c("A", "B", "A", "B", "B", "B", "A", "A", "A", "A", "A", "A"), date = structure(c(18540, 18540, 18541, 18541, 18541, 18541, 18545, 18546, 18546, 18550, 18551, 18552), class = "Date")), row.names = c(NA, -12L), class = "data.frame")
```
|
You just need to `lag()` while grouping by `Sample.Type`.
1. Toy dataset. I just added a third Sample.Type
```r
library(dplyr)
library(lubridate)
typeday <- tibble(
Sample.Type = c("A", "B", "A", "B", "A", "A","B", "C", "C"),
date = as.Date(c("2020-10-05", "2020-10-05", "2020-10-06",
"2020-10-06", "2020-10-11", "2020-10-17",
"2020-10-17", "2020-10-17", "2020-10-18"))
)
typeday
#> # A tibble: 9 x 2
#> Sample.Type date
#> <chr> <date>
#> 1 A 2020-10-05
#> 2 B 2020-10-05
#> 3 A 2020-10-06
#> 4 B 2020-10-06
#> 5 A 2020-10-11
#> 6 A 2020-10-17
#> 7 B 2020-10-17
#> 8 C 2020-10-17
#> 9 C 2020-10-18
```
2. Then, make sure you have the right order for the types and dates. Once you group by Sample.Type, check whether the previous date (`lag(date)`) falls within seven days of the current date. From there it is just cleaning up the `sampled` column. You can also arrange by date only after ungrouping.
```r
typeday %>%
arrange(Sample.Type, date) %>%
group_by(Sample.Type) %>%
mutate(
sampled = lag(date) >= date - days(7),
sampled = case_when(
sampled ~ "yes",
!sampled | is.na(sampled) ~ "no"
)
) %>%
ungroup() %>%
arrange(date)
#> # A tibble: 9 x 3
#> Sample.Type date sampled
#> <chr> <date> <chr>
#> 1 A 2020-10-05 no
#> 2 B 2020-10-05 no
#> 3 A 2020-10-06 yes
#> 4 B 2020-10-06 yes
#> 5 A 2020-10-11 yes
#> 6 A 2020-10-17 yes
#> 7 B 2020-10-17 no
#> 8 C 2020-10-17 no
#> 9 C 2020-10-18 yes
```
Created on 2021-06-01 by the [reprex package](https://reprex.tidyverse.org) (v2.0.0)
|
65,888,118
|
I deployed a flask app to IIS using FastCGI and WSGI Handler. The steps that I have followed are
1. Created a virtual environment for Python and installed all packages including wfastCGI.
2. Set the Handler mappings and included the FastCGI settings.
3. Assigned the necessary permissions for the folders by adding IIS\_IUSRS and IUSR.
Below is the medium link that I followed in terms of the steps.
<https://medium.com/@dpralay07/deploy-a-python-flask-application-in-iis-server-and-run-on-machine-ip-address-ddb81df8edf3>
The folder structure for the code is as shown below with (checkin\_env) being the virtual environment.
[](https://i.stack.imgur.com/PZUH0.png)
[](https://i.stack.imgur.com/Mv3iH.png)
Fast CGI settings are shown as below with WSGI Handler being `checkFlask.app`
[](https://i.stack.imgur.com/Qqxir.png)
The web.config file which was generated is here.
[](https://i.stack.imgur.com/j7kVM.png)
When I tried running on ports 80 and 5000, I received a permission error related to System32, which I am totally confused and unsure about. Any thoughts or inputs are highly appreciated. Thank you.
[](https://i.stack.imgur.com/T0G7e.png)
|
2021/01/25
|
[
"https://Stackoverflow.com/questions/65888118",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6831630/"
] |
You are using the wrong CSS.
`translateX` describes a state, not an animation. Use `animation` instead.
```css
@keyframes menu-open {
from {width: 0px;}
to {width: 220px;}
}
.open {
animation-name: menu-open;
animation-duration: 1s;
animation-fill-mode: forwards;
}
@keyframes menu-close {
from {width: 220px;}
to {width: 0px;}
}
.close {
animation-name: menu-close;
animation-duration: 1s;
animation-fill-mode: forwards;
}
.header__menu {
position: absolute;
top: 0;
right: 0;
background-color: black;
outline: 1px solid var(--border-accent);
padding: 64px 12px 0 12px;
height: 100vh;
z-index: 9;
}
```
```
import cs from 'classnames';
import React from "react";
import "./styles.css";
export default function App() {
const [showMenu, setShowMenu] = React.useState(false);
const onClick = React.useCallback(() => {
setShowMenu(!showMenu);
}, [showMenu]);
const classnames = cs('header__menu', {
open: showMenu,
close: !showMenu,
})
return (
<div className="App">
<button onClick={onClick}>{showMenu ? "close" : "open"}</button>
{<div className={classnames} />}
</div>
);
}
```
Here is the working [codesandbox](https://codesandbox.io/s/magical-pasteur-y1223?file=/src/App.js:51-414)
|
You should specify both transform properties in the style attribute, like so:
```
style={{transform: this.showMenu ? "translateX(0%)" : "translateX(100%)"}}
```
|
63,899,935
|
I'm writing a code to take a rain time-series and save hourly files for each day in order to feed a hydrological model, so, basically, I need to save each file with the hour of the day with tho digits, like this:
```
rain_20200101_0000.txt
rain_20200101_0100.txt
...
rain_20200101_0900.txt
rain_20200101_1000.txt
..
rain_20200101_2300.txt
```
But Python doesn't put a zero before numbers between 0 and 9, so if I use a `range(24)` it will save the first 10 hours like "rain\_20200101\_100.txt".
The solution I found was to put an `if` for x<10 and x>=10 inside the loop and insert a "0" before the hour in the first case, but that feels too crude and there should be a more efficient way. Could you help me with a simpler solution for this code?
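For reference, a minimal sketch of the usual fix: a `02d` format specifier pads the hour to two digits, so no `if` branch is needed (filenames follow the pattern from the question):

```python
# Zero-pad the hour with a format specifier: 0-9 come out as "00".."09".
for hour in range(24):
    filename = f"rain_20200101_{hour:02d}00.txt"
    print(filename)

# "%02d" % hour and str(hour).zfill(2) are older equivalents.
```

This prints `rain_20200101_0000.txt` through `rain_20200101_2300.txt` in order.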
|
2020/09/15
|
[
"https://Stackoverflow.com/questions/63899935",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7335001/"
] |
I experienced the same case, for me, what worked was what has been commented before:
Change the value of 'cwd' property in launch.json to the value of the project directory.
```
{
    ...
    "cwd": "${workspaceFolder}"
}
```
to
```
{
    ...
    "cwd": "${workspaceFolder}/SATestUtils.API"
}
```
All the credits to Bemm...
[](https://i.stack.imgur.com/HYorb.png)
|
Did you try to specify the asp net environment at the launch.json?
Something like this:
[](https://i.stack.imgur.com/gwMzt.png)
|
60,836,709
|
So after doing some web scraping and turning data frames into lists, I want to compare one list to another that I have created myself. But, if one list doesn’t have a value from another, I want it added in the exact order of the list I’m comparing it to.
For example, if I’m comparing one list of snacks with another
I want to make sure List2 ends up looking exactly like List1. Another thing I want to do is add the missing price (for bananas) to ListPrices as "Nan", since I don't have an existing value, and I want it done in one for/if statement.
```
List1 = ['apples', 'bananas', 'cookies', 'soda']
List2 = ['apples', 'cookies', 'soda']
ListPrices = [1, 3, 1]
for i in range(List1):
if List1[i] != List2[i]:
List2.insert([i-1], List1[i])
ListPrices.insert([i-1], Nan)
print(List2)
```
Above is the python script, but I seem to be stuck at a dead end as to how to do this without generating errors. I'm hoping to get the output looking exactly like this.
```
List2 = ['apples', 'bananas', 'cookies', 'soda']
ListPrices = [1, Nan, 3, 1]
```
Any suggestion is truly appreciated.
|
2020/03/24
|
[
"https://Stackoverflow.com/questions/60836709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11583556/"
] |
Assuming that prices in `ListPrices` are consistent, i.e. if `List2` has two bananas, both have the same price, you can create a `dict` by `zip`ing `List2` and `ListPrices` and then look up the price for the items in `List1` in that dict, or use `nan` as a default.
```
prices = dict(zip(List2, ListPrices))
# {'apples': 1, 'cookies': 3, 'soda': 1}
List2 = list(List1) # make same as List1 (a copy, not an alias)
ListPrices = [prices.get(x, float("nan")) for x in List1]
# [1, nan, 3, 1]
```
|
You can convert it to a dictionary for a simple look up
```
d = dict(zip(List2,ListPrices))
[d.get(i,None) for i in List1]
```
|
60,836,709
|
So after doing some web scraping and turning data frames into lists, I want to compare one list to another that I have created myself. But, if one list doesn’t have a value from another, I want it added in the exact order of the list I’m comparing it to.
For example, if I’m comparing one list of snacks with another
I want to make sure List2 ends up looking exactly like List1. Another thing I want to do is add the missing price (for bananas) to ListPrices as "Nan", since I don't have an existing value, and I want it done in one for/if statement.
```
List1 = ['apples', 'bananas', 'cookies', 'soda']
List2 = ['apples', 'cookies', 'soda']
ListPrices = [1, 3, 1]
for i in range(List1):
if List1[i] != List2[i]:
List2.insert([i-1], List1[i])
ListPrices.insert([i-1], Nan)
print(List2)
```
Above is the python script, but I seem to be stuck at a dead end as to how to do this without generating errors. I'm hoping to get the output looking exactly like this.
```
List2 = ['apples', 'bananas', 'cookies', 'soda']
ListPrices = [1, Nan, 3, 1]
```
Any suggestion is truly appreciated.
|
2020/03/24
|
[
"https://Stackoverflow.com/questions/60836709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11583556/"
] |
You can convert it to a dictionary for a simple look up
```
d = dict(zip(List2,ListPrices))
[d.get(i,None) for i in List1]
```
|
So, if I understand correctly, `ListPrices` is the list of prices for `List2`.
```
List1 = ['apples', 'bananas', 'cookies', 'soda']
List2 = ['apples', 'cookies', 'soda']
ListPrices = [1, 3, 1]
```
If so, you can make sure to bind prices to each item of `List2` using a dict:
```
MapPrices = dict(zip(List2, ListPrices))
```
and then use that map to get the prices for `List1`:
```
>>> [MapPrices.get(item, float('NaN')) for item in List1]
[1, nan, 3, 1]
```
|
60,836,709
|
So after doing some web scraping and turning data frames into lists, I want to compare one list to another that I have created myself. But, if one list doesn’t have a value from another, I want it added in the exact order of the list I’m comparing it to.
For example, if I’m comparing one list of snacks with another
I want to make sure List2 ends up looking exactly like List1. Another thing I want to do is add the missing price (for bananas) to ListPrices as "Nan", since I don't have an existing value, and I want it done in one for/if statement.
```
List1 = ['apples', 'bananas', 'cookies', 'soda']
List2 = ['apples', 'cookies', 'soda']
ListPrices = [1, 3, 1]
for i in range(List1):
if List1[i] != List2[i]:
List2.insert([i-1], List1[i])
ListPrices.insert([i-1], Nan)
print(List2)
```
Above is the the python script, but I seem to be stuck at a dead end as to how to do it without generating errors. I'm hoping to get the output looking exactly like this.
```
List2 = ['apples', 'bananas', 'cookies', 'soda']
ListPrices = [1, Nan, 3, 1]
```
Any suggestion is truly appreciated.
|
2020/03/24
|
[
"https://Stackoverflow.com/questions/60836709",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11583556/"
] |
Assuming that prices in `ListPrices` are consistent, i.e. if `List2` has two bananas, both have the same price, you can create a `dict` by `zip`ing `List2` and `ListPrices` and then look up the price for the items in `List1` in that dict, or use `nan` as a default.
```
prices = dict(zip(List2, ListPrices))
# {'apples': 1, 'cookies': 3, 'soda': 1}
List2 = list(List1) # make same as List1 (a copy, not an alias)
ListPrices = [prices.get(x, float("nan")) for x in List1]
# [1, nan, 3, 1]
```
|
So, if I understand correctly, `ListPrices` is the list of prices for `List2`.
```
List1 = ['apples', 'bananas', 'cookies', 'soda']
List2 = ['apples', 'cookies', 'soda']
ListPrices = [1, 3, 1]
```
If so, you can make sure to bind prices to each item of `List2` using a dict:
```
MapPrices = dict(zip(List2, ListPrices))
```
and then use that map to get the prices for `List1`:
```
>>> [MapPrices.get(item, float('NaN')) for item in List1]
[1, nan, 3, 1]
```
|
17,980,691
|
I'm having difficulty getting my sizers to work properly in wxPython. I am trying to build a simple layout: one horizontal bar at the top (with text in it) and two vertical boxes below (with GridSizers \* the left one should only be 2 columns!! \* inside each). I want everything in the image to stretch and fit my panel as well (with the ability to add padding to the sides and top/bottom). 
I have two main issues:
1. I can't get the text in the horizontal bar to be in the middle (it goes to the left).
2. I would like the two vertical boxes to span AND fit the page appropriately (and would like the grids to span better too).
Here is my code (with some parts omitted):
```
self.LeagueInfoU = wx.Panel(self.LeagueInfo,-1, style=wx.BORDER_NONE)
self.LeagueInfoL = wx.Panel(self.LeagueInfo,-1, style=wx.BORDER_NONE)
self.LeagueInfoR = wx.Panel(self.LeagueInfo,-1, style=wx.BORDER_NONE)
vbox = wx.BoxSizer(wx.VERTICAL)
hbox1 = wx.BoxSizer(wx.HORIZONTAL)
hbox2 = wx.BoxSizer(wx.HORIZONTAL)
vbox2a = wx.GridSizer(12,2,0,0)
vbox3a = wx.GridSizer(10,3,0,0)
hbox1a = wx.BoxSizer(wx.VERTICAL)
vbox2 = wx.BoxSizer(wx.VERTICAL)
vbox3 = wx.BoxSizer(wx.VERTICAL)
hbox1.Add(self.LeagueInfoU, 1, wx.EXPAND | wx.ALL, 3)
vbox2.Add(self.LeagueInfoL, 1, wx.EXPAND | wx.ALL, 3)
vbox3.Add(self.LeagueInfoR, 1, wx.EXPAND | wx.ALL, 3)
vbox2a.AddMany([this is all correct])
self.LeagueInfoL.SetSizer(vbox2a)
vbox3a.AddMany([this is all correct])
self.LeagueInfoR.SetSizer(vbox3a)
font = wx.Font(20, wx.DEFAULT, wx.NORMAL, wx.BOLD)
self.Big_Header = wx.StaticText(self.LeagueInfoU, -1, 'Testing This')
self.Big_Header.SetFont(font)
hbox1a.Add(self.Big_Header, 0, wx.ALIGN_CENTER|wx.ALIGN_CENTER_VERTICAL)
self.LeagueInfoU.SetSizer(hbox1a)
hbox2.Add(vbox2, 0, wx.EXPAND)
hbox2.Add(vbox3, 0, wx.EXPAND)
vbox.Add(hbox1, 0, wx.EXPAND)
vbox.Add(hbox2, 1, wx.EXPAND)
self.LeagueInfo.SetSizer(vbox)
```
|
2013/07/31
|
[
"https://Stackoverflow.com/questions/17980691",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/936911/"
] |
Is this what you're after?

```
import wx
class Frame(wx.Frame):
def __init__(self, parent):
wx.Frame.__init__(self, parent)
self.panel = wx.Panel(self)
main_sizer = wx.BoxSizer(wx.VERTICAL)
# Title
self.centred_text = wx.StaticText(self.panel, label="Title")
main_sizer.Add(self.centred_text, 0, wx.ALIGN_CENTRE | wx.ALL, 3)
# Grids
content_sizer = wx.BoxSizer(wx.HORIZONTAL)
grid_1 = wx.GridSizer(12, 2, 0, 0)
grid_1.AddMany(wx.StaticText(self.panel, label=str(i)) for i in xrange(24))
content_sizer.Add(grid_1, 1, wx.EXPAND | wx.ALL, 3)
grid_2 = wx.GridSizer(10, 3, 0, 0)
grid_2.AddMany(wx.StaticText(self.panel, label=str(i)) for i in xrange(30))
content_sizer.Add(grid_2, 1, wx.EXPAND | wx.ALL, 3)
main_sizer.Add(content_sizer, 1, wx.EXPAND)
self.panel.SetSizer(main_sizer)
self.Show()
if __name__ == "__main__":
app = wx.App(False)
Frame(None)
app.MainLoop()
```
|
Something like this?
```
import wx
class MyFrame(wx.Frame):
def __init__(self):
wx.Frame.__init__(self,None,-1,"Test Stretching!!")
p1 = wx.Panel(self,-1,size=(500,100))
p1.SetMinSize((500,100))
p1.SetBackgroundColour(wx.GREEN)
hsz = wx.BoxSizer(wx.HORIZONTAL)
p2 = wx.Panel(self,-1,size=(200,400))
p2.SetMinSize((200,400))
p2.SetBackgroundColour(wx.RED)
p3 = wx.Panel(self,-1,size=(300,400))
p3.SetMinSize((300,400))
p3.SetBackgroundColour(wx.BLUE)
hsz.Add(p2,1,wx.EXPAND)
hsz.Add(p3,1,wx.EXPAND)
sz = wx.BoxSizer(wx.VERTICAL)
sz.Add(p1,0,wx.EXPAND)
sz.Add(hsz,1,wx.EXPAND)
self.SetSizer(sz)
self.Layout()
self.Fit()
a = wx.App(redirect=False)
f = MyFrame()
f.Show()
a.MainLoop()
```
|
17,980,691
|
I'm having difficulty getting my sizers to work properly in wxPython. I am trying to build a simple layout: one horizontal bar at the top (with text in it) and two vertical boxes below (with GridSizers \* the left one should only be 2 columns!! \* inside each). I want everything in the image to stretch and fit my panel as well (with the ability to add padding to the sides and top/bottom). 
I have two main issues:
1. I can't get the text in the horizontal bar to be in the middle (it goes to the left).
2. I would like the two vertical boxes to span AND fit the page appropriately (and would like the grids to span better too).
Here is my code (with some parts omitted):
```
self.LeagueInfoU = wx.Panel(self.LeagueInfo,-1, style=wx.BORDER_NONE)
self.LeagueInfoL = wx.Panel(self.LeagueInfo,-1, style=wx.BORDER_NONE)
self.LeagueInfoR = wx.Panel(self.LeagueInfo,-1, style=wx.BORDER_NONE)
vbox = wx.BoxSizer(wx.VERTICAL)
hbox1 = wx.BoxSizer(wx.HORIZONTAL)
hbox2 = wx.BoxSizer(wx.HORIZONTAL)
vbox2a = wx.GridSizer(12,2,0,0)
vbox3a = wx.GridSizer(10,3,0,0)
hbox1a = wx.BoxSizer(wx.VERTICAL)
vbox2 = wx.BoxSizer(wx.VERTICAL)
vbox3 = wx.BoxSizer(wx.VERTICAL)
hbox1.Add(self.LeagueInfoU, 1, wx.EXPAND | wx.ALL, 3)
vbox2.Add(self.LeagueInfoL, 1, wx.EXPAND | wx.ALL, 3)
vbox3.Add(self.LeagueInfoR, 1, wx.EXPAND | wx.ALL, 3)
vbox2a.AddMany([this is all correct])
self.LeagueInfoL.SetSizer(vbox2a)
vbox3a.AddMany([this is all correct])
self.LeagueInfoR.SetSizer(vbox3a)
font = wx.Font(20, wx.DEFAULT, wx.NORMAL, wx.BOLD)
self.Big_Header = wx.StaticText(self.LeagueInfoU, -1, 'Testing This')
self.Big_Header.SetFont(font)
hbox1a.Add(self.Big_Header, 0, wx.ALIGN_CENTER|wx.ALIGN_CENTER_VERTICAL)
self.LeagueInfoU.SetSizer(hbox1a)
hbox2.Add(vbox2, 0, wx.EXPAND)
hbox2.Add(vbox3, 0, wx.EXPAND)
vbox.Add(hbox1, 0, wx.EXPAND)
vbox.Add(hbox2, 1, wx.EXPAND)
self.LeagueInfo.SetSizer(vbox)
```
|
2013/07/31
|
[
"https://Stackoverflow.com/questions/17980691",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/936911/"
] |
Is this what you're after?

```
import wx
class Frame(wx.Frame):
def __init__(self, parent):
wx.Frame.__init__(self, parent)
self.panel = wx.Panel(self)
main_sizer = wx.BoxSizer(wx.VERTICAL)
# Title
self.centred_text = wx.StaticText(self.panel, label="Title")
main_sizer.Add(self.centred_text, 0, wx.ALIGN_CENTRE | wx.ALL, 3)
# Grids
content_sizer = wx.BoxSizer(wx.HORIZONTAL)
grid_1 = wx.GridSizer(12, 2, 0, 0)
grid_1.AddMany(wx.StaticText(self.panel, label=str(i)) for i in xrange(24))
content_sizer.Add(grid_1, 1, wx.EXPAND | wx.ALL, 3)
grid_2 = wx.GridSizer(10, 3, 0, 0)
grid_2.AddMany(wx.StaticText(self.panel, label=str(i)) for i in xrange(30))
content_sizer.Add(grid_2, 1, wx.EXPAND | wx.ALL, 3)
main_sizer.Add(content_sizer, 1, wx.EXPAND)
self.panel.SetSizer(main_sizer)
self.Show()
if __name__ == "__main__":
app = wx.App(False)
Frame(None)
app.MainLoop()
```
|
Here's one way to do it:
```
import wx
########################################################################
class MyPanel(wx.Panel):
""""""
#----------------------------------------------------------------------
def __init__(self, parent):
"""Constructor"""
wx.Panel.__init__(self, parent)
mainSizer = wx.BoxSizer(wx.VERTICAL)
hSizer = wx.BoxSizer(wx.HORIZONTAL)
leftGridSizer = wx.GridSizer(rows=10, cols=12, vgap=5, hgap=5)
rightGridSizer = wx.GridSizer(rows=10, cols=3, vgap=5, hgap=5)
title = wx.StaticText(self, label="Main title")
mainSizer.Add(wx.StaticText(self), 0, wx.EXPAND) # add a "spacer"
mainSizer.Add(title, 0, wx.CENTER, wx.ALL, 10)
for row in range(1, 11):
for col in range(1, 13):
lbl = "Row%s Col%s" % (row, col)
leftGridSizer.Add(wx.StaticText(self, label=lbl))
hSizer.Add(leftGridSizer, 0, wx.ALL, 20)
for row in range(1, 11):
for col in range(1, 4):
lbl = "Row%s Col%s" % (row, col)
rightGridSizer.Add(wx.StaticText(self, label=lbl))
hSizer.Add(rightGridSizer, 0, wx.ALL, 20)
mainSizer.Add(hSizer)
self.SetSizer(mainSizer)
########################################################################
class MyFrame(wx.Frame):
""""""
#----------------------------------------------------------------------
def __init__(self):
"""Constructor"""
wx.Frame.__init__(self, None, title="Sizers", size=(1600,600))
panel = MyPanel(self)
self.Show()
if __name__ == "__main__":
app = wx.App(False)
frame = MyFrame()
app.MainLoop()
```
To learn about spanning rows, I recommend looking at the wxPython demo. I believe spanning is only supported by wx.GridBagSizer and wx.FlexGridSizer, where you can try the span parameter. Also, it should be noted that wx.GROW and wx.EXPAND are one and the same. You might also want to check out the wiki for more information: <http://wiki.wxpython.org/GridBagSizerTutorial>
|
45,876,059
|
I have this server
<https://github.com/crossbario/autobahn-python/blob/master/examples/twisted/websocket/echo_tls/server.py>
And I want to connect to the server with this code:
```
ws = create_connection("wss://127.0.0.1:9000")
```
What options do I need to add to `create_connection`? Adding `sslopt={"cert_reqs": ssl.CERT_NONE}` does not work:
```
websocket._exceptions.WebSocketBadStatusException: Handshake status 400
```
|
2017/08/25
|
[
"https://Stackoverflow.com/questions/45876059",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1238675/"
] |
This works
```
import asyncio
import websockets
import ssl
async def hello():
async with websockets.connect('wss://127.0.0.1:9000',ssl=ssl.SSLContext(protocol=ssl.PROTOCOL_TLS)) as websocket:
data = 'hi'
await websocket.send(data)
print("> {}".format(data))
response = await websocket.recv()
print("< {}".format(response))
asyncio.get_event_loop().run_until_complete(hello())
```
|
For me the option from the question seems to work:
```
from websocket import create_connection
import ssl
ws = create_connection("wss://echo.websocket.org", sslopt={"cert_reqs": ssl.CERT_NONE})
ws.send("python hello!")
print (ws.recv())
ws.close()
```
See also here:
<https://github.com/websocket-client/websocket-client#faq>
Note: I'm using Win7 with Python 3.6.5 with the following packages installed (pip):
* simple-websocket-server==0.4.0
* websocket-client==0.53.0
* websockets==6.0
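For local servers with self-signed certificates, it can help to see what that `sslopt` disables. Below is a stdlib-only sketch of roughly the `ssl.SSLContext` that option amounts to (an assumption for illustration; suitable for local testing only, never production):

```python
import ssl

# Roughly equivalent to sslopt={"cert_reqs": ssl.CERT_NONE,
# "check_hostname": False}: accept self-signed certs, e.g. on 127.0.0.1.
ctx = ssl.create_default_context()
ctx.check_hostname = False        # must be disabled before CERT_NONE
ctx.verify_mode = ssl.CERT_NONE   # skip certificate chain verification

print(ctx.verify_mode == ssl.CERT_NONE)  # True
```

A context like this can also be passed to `websockets.connect(..., ssl=ctx)` as in the other answer.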
|
17,986,923
|
I have a Python dictionary of the form:
```
a1 = {
'SFP_1': ['cat', '3'],
'SFP_0': ['cat', '5', 'bat', '1']
}
```
The end result I need is a dictionary of the form:
```
{'bat': '1', 'cat': '8'}
```
I am currently doing this:
```
b1 = list(itertools.chain(*a1.values()))
c1 = dict(itertools.izip_longest(*[iter(b1)] * 2, fillvalue=""))
```
which gives me the output:
```
>>> c1
{'bat': '1', 'cat': '5'}
```
I can iterate over the dictionary and get this but can anybody give me a more pythonic way of doing the same?
|
2013/08/01
|
[
"https://Stackoverflow.com/questions/17986923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2054020/"
] |
Using [defaultdict](http://docs.python.org/2/library/collections.html#collections.defaultdict):
```
import itertools
from collections import defaultdict
a1 = {u'SFP_1': [u'cat', u'3'], u'SFP_0': [u'cat', u'5', u'bat', u'1']}
b1 = itertools.chain.from_iterable(a1.itervalues())
c1 = defaultdict(int)
for animal, count in itertools.izip(*[iter(b1)] * 2):
c1[animal] += int(count)
# c1 => defaultdict(<type 'int'>, {u'bat': 1, u'cat': 8})
c1 = {animal: str(count) for animal, count in c1.iteritems()}
# c1 => {u'bat': '1', u'cat': '8'}
```
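The `izip` and `itervalues` calls above are Python 2 only. A sketch of the same approach in Python 3, using the identical `zip(*[iter(x)] * 2)` pairing trick:

```python
import itertools
from collections import defaultdict

a1 = {'SFP_1': ['cat', '3'], 'SFP_0': ['cat', '5', 'bat', '1']}

# Flatten all value lists into one stream: cat, 3, cat, 5, bat, 1
flat = itertools.chain.from_iterable(a1.values())

# zip(*[iter(flat)] * 2) draws consecutive items from one shared
# iterator, yielding ('cat', '3'), ('cat', '5'), ('bat', '1')
totals = defaultdict(int)
for animal, count in zip(*[iter(flat)] * 2):
    totals[animal] += int(count)

result = {animal: str(count) for animal, count in totals.items()}
print(result == {'cat': '8', 'bat': '1'})  # True
```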
|
```
In [8]: a1 = {
'SFP_1': ['cat', '3'],
'SFP_0': ['cat', '5', 'bat', '1']
}
In [9]: answer = collections.defaultdict(int)
In [10]: for L in a1.values():
for k,v in itertools.izip(itertools.islice(L, 0, len(L), 2),
itertools.islice(L, 1, len(L), 2)):
answer[k] += int(v)
In [11]: answer
Out[11]: defaultdict(<type 'int'>, {'bat': 1, 'cat': 8})
In [12]: dict(answer)
Out[12]: {'bat': 1, 'cat': 8}
```
|
17,986,923
|
I have a Python dictionary of the form:
```
a1 = {
'SFP_1': ['cat', '3'],
'SFP_0': ['cat', '5', 'bat', '1']
}
```
The end result I need is a dictionary of the form:
```
{'bat': '1', 'cat': '8'}
```
I am currently doing this:
```
b1 = list(itertools.chain(*a1.values()))
c1 = dict(itertools.izip_longest(*[iter(b1)] * 2, fillvalue=""))
```
which gives me the output:
```
>>> c1
{'bat': '1', 'cat': '5'}
```
I can iterate over the dictionary and get this but can anybody give me a more pythonic way of doing the same?
|
2013/08/01
|
[
"https://Stackoverflow.com/questions/17986923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2054020/"
] |
```
In [8]: a1 = {
'SFP_1': ['cat', '3'],
'SFP_0': ['cat', '5', 'bat', '1']
}
In [9]: answer = collections.defaultdict(int)
In [10]: for L in a1.values():
for k,v in itertools.izip(itertools.islice(L, 0, len(L), 2),
itertools.islice(L, 1, len(L), 2)):
answer[k] += int(v)
In [11]: answer
Out[11]: defaultdict(<type 'int'>, {'bat': 1, 'cat': 8})
In [12]: dict(answer)
Out[12]: {'bat': 1, 'cat': 8}
```
|
Try `collections.defaultdict(int)`:
From the manual:
```
>>> s = 'mississippi'
>>> d = defaultdict(int)
>>> for k in s:
... d[k] += 1
...
>>> d.items()
[('i', 4), ('p', 2), ('s', 4), ('m', 1)]
```
This should let you get to where you need to be.
|
17,986,923
|
I have a Python dictionary of the form:
```
a1 = {
'SFP_1': ['cat', '3'],
'SFP_0': ['cat', '5', 'bat', '1']
}
```
The end result I need is a dictionary of the form:
```
{'bat': '1', 'cat': '8'}
```
I am currently doing this:
```
b1 = list(itertools.chain(*a1.values()))
c1 = dict(itertools.izip_longest(*[iter(b1)] * 2, fillvalue=""))
```
which gives me the output:
```
>>> c1
{'bat': '1', 'cat': '5'}
```
I can iterate over the dictionary and get this but can anybody give me a more pythonic way of doing the same?
|
2013/08/01
|
[
"https://Stackoverflow.com/questions/17986923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2054020/"
] |
```
In [8]: a1 = {
'SFP_1': ['cat', '3'],
'SFP_0': ['cat', '5', 'bat', '1']
}
In [9]: answer = collections.defaultdict(int)
In [10]: for L in a1.values():
for k,v in itertools.izip(itertools.islice(L, 0, len(L), 2),
itertools.islice(L, 1, len(L), 2)):
answer[k] += int(v)
In [11]: answer
Out[11]: defaultdict(<type 'int'>, {'bat': 1, 'cat': 8})
In [12]: dict(answer)
Out[12]: {'bat': 1, 'cat': 8}
```
|
For what it's worth, here's a pure Python solution.
```
a1 = {'SFP_1': ['cat', '3'], 'SFP_0': ['cat', '5', 'bat', '1']}
def count(what):
sums = {}
for items in what.itervalues():
for k, v in zip(items[::2], items[1::2]):
if k in sums:
sums[k] = str(int(sums[k]) + int(v))
else:
sums[k] = v
return sums
```
`count(a1)` gives `{'bat': '1', 'cat': '8'}`.
|
17,986,923
|
I have a Python dictionary of the form:
```
a1 = {
'SFP_1': ['cat', '3'],
'SFP_0': ['cat', '5', 'bat', '1']
}
```
The end result I need is a dictionary of the form:
```
{'bat': '1', 'cat': '8'}
```
I am currently doing this:
```
b1 = list(itertools.chain(*a1.values()))
c1 = dict(itertools.izip_longest(*[iter(b1)] * 2, fillvalue=""))
```
which gives me the output:
```
>>> c1
{'bat': '1', 'cat': '5'}
```
I can iterate over the dictionary and get this but can anybody give me a more pythonic way of doing the same?
|
2013/08/01
|
[
"https://Stackoverflow.com/questions/17986923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2054020/"
] |
```
In [8]: a1 = {
'SFP_1': ['cat', '3'],
'SFP_0': ['cat', '5', 'bat', '1']
}
In [9]: answer = collections.defaultdict(int)
In [10]: for L in a1.values():
for k,v in itertools.izip(itertools.islice(L, 0, len(L), 2),
itertools.islice(L, 1, len(L), 2)):
answer[k] += int(v)
In [11]: answer
Out[11]: defaultdict(<type 'int'>, {'bat': 1, 'cat': 8})
In [12]: dict(answer)
Out[12]: {'bat': 1, 'cat': 8}
```
|
You can use the [collections.Counter](http://docs.python.org/2/library/collections.html#collections.Counter) class, which is essentially a specialized version of `collections.defaultdict(int)` with convenient extra methods and a clearer name:
```
from collections import Counter
def count(dct):
# Counter is specialized version of defaultdict(int)
counter = Counter()
for values in dct.viewvalues():
assert len(values) % 2 == 0, "{!r} must have even length".format(values)
# iterate by pairs
for i in xrange(0, len(values) - 1, 2):
counter[values[i]] += int(values[i + 1])
# convert frequencies to strings
return {word: str(freq) for word, freq in counter.viewitems()}
if __name__ == "__main__":
a1 = {"SFP_1": ["cat", "3"],
"SFP_0": ["cat", "5", "bat", "1"]
}
print count(a1)
```
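In Python 3, `viewvalues`, `viewitems`, and `xrange` no longer exist; a sketch of the same `Counter` approach using slices to form the pairs:

```python
from collections import Counter

def count(dct):
    counter = Counter()
    for values in dct.values():
        # pair each word (even positions) with its count (odd positions)
        for word, n in zip(values[::2], values[1::2]):
            counter[word] += int(n)
    # convert frequencies back to strings, as in the question
    return {word: str(freq) for word, freq in counter.items()}

a1 = {"SFP_1": ["cat", "3"], "SFP_0": ["cat", "5", "bat", "1"]}
print(count(a1) == {"cat": "8", "bat": "1"})  # True
```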
|
17,986,923
|
I have a Python dictionary of the form:
```
a1 = {
'SFP_1': ['cat', '3'],
'SFP_0': ['cat', '5', 'bat', '1']
}
```
The end result I need is a dictionary of the form:
```
{'bat': '1', 'cat': '8'}
```
I am currently doing this:
```
b1 = list(itertools.chain(*a1.values()))
c1 = dict(itertools.izip_longest(*[iter(b1)] * 2, fillvalue=""))
```
which gives me the output:
```
>>> c1
{'bat': '1', 'cat': '5'}
```
I can iterate over the dictionary and get this but can anybody give me a more pythonic way of doing the same?
|
2013/08/01
|
[
"https://Stackoverflow.com/questions/17986923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2054020/"
] |
Using [defaultdict](http://docs.python.org/2/library/collections.html#collections.defaultdict):
```
import itertools
from collections import defaultdict
a1 = {u'SFP_1': [u'cat', u'3'], u'SFP_0': [u'cat', u'5', u'bat', u'1']}
b1 = itertools.chain.from_iterable(a1.itervalues())
c1 = defaultdict(int)
for animal, count in itertools.izip(*[iter(b1)] * 2):
c1[animal] += int(count)
# c1 => defaultdict(<type 'int'>, {u'bat': 1, u'cat': 8})
c1 = {animal: str(count) for animal, count in c1.iteritems()}
# c1 => {u'bat': '1', u'cat': '8'}
```
|
Try `collections.defaultdict(int)`:
From the manual:
```
>>> s = 'mississippi'
>>> d = defaultdict(int)
>>> for k in s:
... d[k] += 1
...
>>> d.items()
[('i', 4), ('p', 2), ('s', 4), ('m', 1)]
```
This should let you get to where you need to be.
|
17,986,923
|
I have a Python dictionary of the form:
```
a1 = {
'SFP_1': ['cat', '3'],
'SFP_0': ['cat', '5', 'bat', '1']
}
```
The end result I need is a dictionary of the form:
```
{'bat': '1', 'cat': '8'}
```
I am currently doing this:
```
b1 = list(itertools.chain(*a1.values()))
c1 = dict(itertools.izip_longest(*[iter(b1)] * 2, fillvalue=""))
```
which gives me the output:
```
>>> c1
{'bat': '1', 'cat': '5'}
```
I can iterate over the dictionary and get this but can anybody give me a more pythonic way of doing the same?
|
2013/08/01
|
[
"https://Stackoverflow.com/questions/17986923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2054020/"
] |
Using [defaultdict](http://docs.python.org/2/library/collections.html#collections.defaultdict):
```
import itertools
from collections import defaultdict
a1 = {u'SFP_1': [u'cat', u'3'], u'SFP_0': [u'cat', u'5', u'bat', u'1']}
b1 = itertools.chain.from_iterable(a1.itervalues())
c1 = defaultdict(int)
for animal, count in itertools.izip(*[iter(b1)] * 2):
c1[animal] += int(count)
# c1 => defaultdict(<type 'int'>, {u'bat': 1, u'cat': 8})
c1 = {animal: str(count) for animal, count in c1.iteritems()}
# c1 => {u'bat': '1', u'cat': '8'}
```
|
For what it's worth, here's a pure Python solution.
```
a1 = {'SFP_1': ['cat', '3'], 'SFP_0': ['cat', '5', 'bat', '1']}
def count(what):
sums = {}
for items in what.itervalues():
for k, v in zip(items[::2], items[1::2]):
if k in sums:
sums[k] = str(int(sums[k]) + int(v))
else:
sums[k] = v
return sums
```
`count(a1)` gives `{'bat': '1', 'cat': '8'}`.
|
17,986,923
|
I have a Python dictionary of the form:
```
a1 = {
'SFP_1': ['cat', '3'],
'SFP_0': ['cat', '5', 'bat', '1']
}
```
The end result I need is a dictionary of the form:
```
{'bat': '1', 'cat': '8'}
```
I am currently doing this:
```
b1 = list(itertools.chain(*a1.values()))
c1 = dict(itertools.izip_longest(*[iter(b1)] * 2, fillvalue=""))
```
which gives me the output:
```
>>> c1
{'bat': '1', 'cat': '5'}
```
I can iterate over the dictionary and get this but can anybody give me a more pythonic way of doing the same?
|
2013/08/01
|
[
"https://Stackoverflow.com/questions/17986923",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2054020/"
] |
Using [defaultdict](http://docs.python.org/2/library/collections.html#collections.defaultdict):
```
import itertools
from collections import defaultdict
a1 = {u'SFP_1': [u'cat', u'3'], u'SFP_0': [u'cat', u'5', u'bat', u'1']}
b1 = itertools.chain.from_iterable(a1.itervalues())
c1 = defaultdict(int)
for animal, count in itertools.izip(*[iter(b1)] * 2):
c1[animal] += int(count)
# c1 => defaultdict(<type 'int'>, {u'bat': 1, u'cat': 8})
c1 = {animal: str(count) for animal, count in c1.iteritems()}
# c1 => {u'bat': '1', u'cat': '8'}
```
|
You can use the [collections.Counter](http://docs.python.org/2/library/collections.html#collections.Counter) class, which is essentially a specialized version of `collections.defaultdict(int)` with convenient extra methods and a clearer name:
```
from collections import Counter
def count(dct):
# Counter is specialized version of defaultdict(int)
counter = Counter()
for values in dct.viewvalues():
assert len(values) % 2 == 0, "{!r} must have even length".format(values)
# iterate by pairs
for i in xrange(0, len(values) - 1, 2):
counter[values[i]] += int(values[i + 1])
# convert frequencies to strings
return {word: str(freq) for word, freq in counter.viewitems()}
if __name__ == "__main__":
a1 = {"SFP_1": ["cat", "3"],
"SFP_0": ["cat", "5", "bat", "1"]
}
print count(a1)
```
|
45,045,147
|
I'm trying to migrate a table with SQLAlchemy Migrate, but I'm getting this error:
```
sqlalchemy.exc.UnboundExecutionError: Table object 'responsibles' is not bound to an Engine or Connection. Execution can not proceed without a database to execute against.
```
When I run:
```
python manage.py test
```
This is my migration file:
```
from sqlalchemy import *
from migrate import *
meta = MetaData()
responsibles = Table(
'responsibles', meta,
Column('id', Integer, primary_key=True),
Column('breakdown_type', String(255)),
Column('breakdown_name', String(500)),
Column('email', String(255)),
Column('name', String(255)),
)
def upgrade(migrate_engine):
# Upgrade operations go here. Don't create your own engine; bind
# migrate_engine to your metadata
responsibles.create()
def downgrade(migrate_engine):
# Operations to reverse the above upgrade go here.
responsibles.drop()
```
|
2017/07/11
|
[
"https://Stackoverflow.com/questions/45045147",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1934510/"
] |
Did you create your engine? Like this:
`engine = create_engine('sqlite:///:memory:')`
and then:
```
meta.bind = engine
meta.create_all(engine)
```
|
You need to supply an `engine` or a `connection`; see
[`sqlalchemy.schema.MetaData.bind`](http://docs.sqlalchemy.org/en/latest/core/metadata.html#sqlalchemy.schema.MetaData.bind).
For example:
```
engine = create_engine("someurl://")
metadata.bind = engine
```
|
70,596,608
|
In the office we have a fileWatcher that converts point clouds to .laz files.
We just started working with Revit but came to the conclusion that it is not possible to import .laz into Revit.
So I googled and found a solution, except it is written in Python and our watcher is in C#.
Below is the Python script:
`<location>/decap.exe –importWithLicence E:\decap text\Building 1\ Building 1`
Is there a way to convert this Python script to C#, or is there maybe another way?
Please let me know.
|
2022/01/05
|
[
"https://Stackoverflow.com/questions/70596608",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10411687/"
] |
Convert the columns to the `Date` class and use `difftime`:
```
df1$Difference <- with(df1, as.numeric(difftime(as.Date(DeliveryDate),
as.Date(ExpectedDate), units = "days")))
```
---
Or using `tidyverse`
```
library(dplyr)
library(lubridate)
df1 %>%
mutate(Difference = as.numeric(difftime(ymd(DeliveryDate),
ymd(ExpectedDate), units = "days")))
DeliveryDate ExpectedDate Difference
1 2022-01-05 2022-01-07 -2
```
|
First convert your dates to lubridate dates:
2022-01-05 would be
```
date1 <- ymd("2022-01-05")
date2 <- ymd("2022-01-07")
diff_days <- difftime(date2, date1, units="days")
```
|
49,154,899
|
I want to create a virtual environment using conda and a yml file.
Command:
```
conda env create -n ex3 -f env.yml
```
Pressing ENTER gives the following message:
```
ResolvePackageNotFound:
- gst-plugins-base==1.8.0=0
- dbus==1.10.20=0
- opencv3==3.2.0=np111py35_0
- qt==5.6.2=5
- libxcb==1.12=1
- libgcc==5.2.0=0
- gstreamer==1.8.0=0
```
However, I do have those on my Mac. My MacOS: High Sierra 10.13.3
My env.yml file looks like this:
```
name: ex3
channels:
- menpo
- defaults
dependencies:
- cairo=1.14.8=0
- certifi=2016.2.28=py35_0
- cycler=0.10.0=py35_0
- dbus=1.10.20=0
- expat=2.1.0=0
- fontconfig=2.12.1=3
- freetype=2.5.5=2
- glib=2.50.2=1
- gst-plugins-base=1.8.0=0
- gstreamer=1.8.0=0
- harfbuzz=0.9.39=2
- hdf5=1.8.17=2
- icu=54.1=0
- jbig=2.1=0
- jpeg=9b=0
- libffi=3.2.1=1
- libgcc=5.2.0=0
- libgfortran=3.0.0=1
- libiconv=1.14=0
- libpng=1.6.30=1
- libtiff=4.0.6=3
- libxcb=1.12=1
- libxml2=2.9.4=0
- matplotlib=2.0.2=np111py35_0
- mkl=2017.0.3=0
- numpy=1.11.3=py35_0
- openssl=1.0.2l=0
- pandas=0.20.1=np111py35_0
- patsy=0.4.1=py35_0
- pcre=8.39=1
- pip=9.0.1=py35_1
- pixman=0.34.0=0
- pyparsing=2.2.0=py35_0
- pyqt=5.6.0=py35_2
- python=3.5.4=0
- python-dateutil=2.6.1=py35_0
- pytz=2017.2=py35_0
- qt=5.6.2=5
- readline=6.2=2
- scipy=0.19.0=np111py35_0
- seaborn=0.8=py35_0
- setuptools=36.4.0=py35_1
- sip=4.18=py35_0
- six=1.10.0=py35_0
- sqlite=3.13.0=0
- statsmodels=0.8.0=np111py35_0
- tk=8.5.18=0
- wheel=0.29.0=py35_0
- xz=5.2.3=0
- zlib=1.2.11=0
- opencv3=3.2.0=np111py35_0
- pip:
- bleach==1.5.0
- enum34==1.1.6
- html5lib==0.9999999
- markdown==2.6.11
- protobuf==3.5.1
- tensorflow==1.4.1
- tensorflow-tensorboard==0.4.0
- werkzeug==0.14.1
```
How to solve this problem?
Well... Stack Overflow prompted me to add more details, but I think I have described things clearly. It is a pity Stack Overflow does not support uploading attachments.
|
2018/03/07
|
[
"https://Stackoverflow.com/questions/49154899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7722305/"
] |
I got the same issue and found a [GitHub issue](https://github.com/conda/conda/issues/6073#issuecomment-356981567) related to this. In the comments, @kalefranz posted an ideal solution by using the `--no-builds` flag with conda env export.
```
conda env export --no-builds > environment.yml
```
However, even after removing build numbers, some packages may still have different version numbers on different operating systems. I think the best approach is to create a separate env yml file for each OS.
Hope this helps.
|
I had a similar issue and was able to work around it. My issue wasn't related to pip but rather to the export platform not being the same as the import platform (Ref: nehaljwani's November 2018 answer on <https://github.com/conda/conda/issues/7311>).
@Shixiang Wang's answer points toward part of the solution. The no-build argument allows for more flexibility, but some components are specific to the platform or OS.
Using the no-build export, I was able to identify (from the error message at import time) which libraries were problematic and simply removed them from the YML file. This may not be flawless, but it saves a lot of time compared to starting from scratch.
NOTE: I got a `Pip subprocess error` that interrupted the installation at a given library; this could be overcome with a simple `conda install <library>`. From there I could relaunch the import from the YML file.
|
49,154,899
|
I want to create a virtual environment using conda and a yml file.
Command:
```
conda env create -n ex3 -f env.yml
```
Pressing ENTER gives the following message:
```
ResolvePackageNotFound:
- gst-plugins-base==1.8.0=0
- dbus==1.10.20=0
- opencv3==3.2.0=np111py35_0
- qt==5.6.2=5
- libxcb==1.12=1
- libgcc==5.2.0=0
- gstreamer==1.8.0=0
```
However, I do have those on my Mac. My MacOS: High Sierra 10.13.3
My env.yml file looks like this:
```
name: ex3
channels:
- menpo
- defaults
dependencies:
- cairo=1.14.8=0
- certifi=2016.2.28=py35_0
- cycler=0.10.0=py35_0
- dbus=1.10.20=0
- expat=2.1.0=0
- fontconfig=2.12.1=3
- freetype=2.5.5=2
- glib=2.50.2=1
- gst-plugins-base=1.8.0=0
- gstreamer=1.8.0=0
- harfbuzz=0.9.39=2
- hdf5=1.8.17=2
- icu=54.1=0
- jbig=2.1=0
- jpeg=9b=0
- libffi=3.2.1=1
- libgcc=5.2.0=0
- libgfortran=3.0.0=1
- libiconv=1.14=0
- libpng=1.6.30=1
- libtiff=4.0.6=3
- libxcb=1.12=1
- libxml2=2.9.4=0
- matplotlib=2.0.2=np111py35_0
- mkl=2017.0.3=0
- numpy=1.11.3=py35_0
- openssl=1.0.2l=0
- pandas=0.20.1=np111py35_0
- patsy=0.4.1=py35_0
- pcre=8.39=1
- pip=9.0.1=py35_1
- pixman=0.34.0=0
- pyparsing=2.2.0=py35_0
- pyqt=5.6.0=py35_2
- python=3.5.4=0
- python-dateutil=2.6.1=py35_0
- pytz=2017.2=py35_0
- qt=5.6.2=5
- readline=6.2=2
- scipy=0.19.0=np111py35_0
- seaborn=0.8=py35_0
- setuptools=36.4.0=py35_1
- sip=4.18=py35_0
- six=1.10.0=py35_0
- sqlite=3.13.0=0
- statsmodels=0.8.0=np111py35_0
- tk=8.5.18=0
- wheel=0.29.0=py35_0
- xz=5.2.3=0
- zlib=1.2.11=0
- opencv3=3.2.0=np111py35_0
- pip:
- bleach==1.5.0
- enum34==1.1.6
- html5lib==0.9999999
- markdown==2.6.11
- protobuf==3.5.1
- tensorflow==1.4.1
- tensorflow-tensorboard==0.4.0
- werkzeug==0.14.1
```
How to solve this problem?
Well... Stack Overflow prompted me to add more details, but I think I have described things clearly. It is a pity Stack Overflow does not support uploading attachments.
|
2018/03/07
|
[
"https://Stackoverflow.com/questions/49154899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7722305/"
] |
I got the same issue and found a [GitHub issue](https://github.com/conda/conda/issues/6073#issuecomment-356981567) related to this. In the comments, @kalefranz posted an ideal solution by using the `--no-builds` flag with conda env export.
```
conda env export --no-builds > environment.yml
```
However, even after removing build numbers, some packages may still have different version numbers on different operating systems. I think the best approach is to create a separate env yml file for each OS.
Hope this helps.
|
tl;dr `conda env export --from-history -n name_of_your_env -f environment.yml`
---
The `conda env export` command pins your dependencies to exact versions along with OS-specific build details.
For Pandas on macOS, for example, that looks like `- pandas=1.0.5=py38h959d312_0`. `conda env create` cannot use this to create the same environment on another OS, like Linux inside Docker for instance.
So export the packages without pinning, restricted to the ones you specifically installed after creating the conda environment, using `conda env export --from-history`.
<https://repo2docker.readthedocs.io/en/latest/howto/export_environment.html>
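For illustration, a `--from-history` export keeps only the packages you explicitly requested, without build strings or transitive dependencies. A hypothetical result (names and versions are illustrative, and note that pip-installed packages may need to be re-added by hand):

```yaml
name: ex3
channels:
  - menpo
  - defaults
dependencies:
  # only the explicit `conda install` requests survive --from-history
  - python=3.5
  - matplotlib
  - pandas
  - seaborn
```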
|
49,154,899
|
I want to create a virtual environment using conda and a yml file.
Command:
```
conda env create -n ex3 -f env.yml
```
Pressing ENTER gives the following message:
```
ResolvePackageNotFound:
- gst-plugins-base==1.8.0=0
- dbus==1.10.20=0
- opencv3==3.2.0=np111py35_0
- qt==5.6.2=5
- libxcb==1.12=1
- libgcc==5.2.0=0
- gstreamer==1.8.0=0
```
However, I do have those on my Mac. My MacOS: High Sierra 10.13.3
My env.yml file looks like this:
```
name: ex3
channels:
- menpo
- defaults
dependencies:
- cairo=1.14.8=0
- certifi=2016.2.28=py35_0
- cycler=0.10.0=py35_0
- dbus=1.10.20=0
- expat=2.1.0=0
- fontconfig=2.12.1=3
- freetype=2.5.5=2
- glib=2.50.2=1
- gst-plugins-base=1.8.0=0
- gstreamer=1.8.0=0
- harfbuzz=0.9.39=2
- hdf5=1.8.17=2
- icu=54.1=0
- jbig=2.1=0
- jpeg=9b=0
- libffi=3.2.1=1
- libgcc=5.2.0=0
- libgfortran=3.0.0=1
- libiconv=1.14=0
- libpng=1.6.30=1
- libtiff=4.0.6=3
- libxcb=1.12=1
- libxml2=2.9.4=0
- matplotlib=2.0.2=np111py35_0
- mkl=2017.0.3=0
- numpy=1.11.3=py35_0
- openssl=1.0.2l=0
- pandas=0.20.1=np111py35_0
- patsy=0.4.1=py35_0
- pcre=8.39=1
- pip=9.0.1=py35_1
- pixman=0.34.0=0
- pyparsing=2.2.0=py35_0
- pyqt=5.6.0=py35_2
- python=3.5.4=0
- python-dateutil=2.6.1=py35_0
- pytz=2017.2=py35_0
- qt=5.6.2=5
- readline=6.2=2
- scipy=0.19.0=np111py35_0
- seaborn=0.8=py35_0
- setuptools=36.4.0=py35_1
- sip=4.18=py35_0
- six=1.10.0=py35_0
- sqlite=3.13.0=0
- statsmodels=0.8.0=np111py35_0
- tk=8.5.18=0
- wheel=0.29.0=py35_0
- xz=5.2.3=0
- zlib=1.2.11=0
- opencv3=3.2.0=np111py35_0
- pip:
- bleach==1.5.0
- enum34==1.1.6
- html5lib==0.9999999
- markdown==2.6.11
- protobuf==3.5.1
- tensorflow==1.4.1
- tensorflow-tensorboard==0.4.0
- werkzeug==0.14.1
```
How to solve this problem?
Well... Stack Overflow prompted me to add more details, but I think I have described things clearly. It is a pity Stack Overflow does not support uploading attachments.
|
2018/03/07
|
[
"https://Stackoverflow.com/questions/49154899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7722305/"
] |
I had the same problem and found your question while googling for it.
The `ResolvePackageNotFound` error lists all the packages that are required but not yet installed.
To solve the problem, move them under the `pip` section:
```
name: ex3
channels:
- menpo
- defaults
dependencies:
- cairo=1.14.8=0
- ***
- another dependencies, except not found ones
- pip:
- gst-plugins-base==1.8.0
- bleach==1.5.0
- enum34==1.1.6
- html5lib==0.9999999
- markdown==2.6.11
- protobuf==3.5.1
- tensorflow==1.4.1
- tensorflow-tensorboard==0.4.0
- werkzeug==0.14.1
*** added ***
- gst-plugins-base==1.8.0
- dbus==1.10.20
- opencv3==3.2.0
- qt==5.6.2
- libxcb==1.12
- libgcc==5.2.0
- gstreamer==1.8.0
```
|
tl;dr `conda env export --from-history -n name_of_your_env -f environment.yml`
---
The `conda env export` command pins your dependencies to exact versions along with OS-specific build details.
For Pandas on macOS, for example, that looks like `- pandas=1.0.5=py38h959d312_0`. `conda env create` cannot use this to create the same environment on another OS, like Linux inside Docker for instance.
So export the packages without pinning, restricted to the ones you specifically installed after creating the conda environment, using `conda env export --from-history`.
<https://repo2docker.readthedocs.io/en/latest/howto/export_environment.html>
|
49,154,899
|
I want to create a virtual environment using conda and a yml file.
Command:
```
conda env create -n ex3 -f env.yml
```
Pressing ENTER gives the following message:
```
ResolvePackageNotFound:
- gst-plugins-base==1.8.0=0
- dbus==1.10.20=0
- opencv3==3.2.0=np111py35_0
- qt==5.6.2=5
- libxcb==1.12=1
- libgcc==5.2.0=0
- gstreamer==1.8.0=0
```
However, I do have those on my Mac. My MacOS: High Sierra 10.13.3
My env.yml file looks like this:
```
name: ex3
channels:
- menpo
- defaults
dependencies:
- cairo=1.14.8=0
- certifi=2016.2.28=py35_0
- cycler=0.10.0=py35_0
- dbus=1.10.20=0
- expat=2.1.0=0
- fontconfig=2.12.1=3
- freetype=2.5.5=2
- glib=2.50.2=1
- gst-plugins-base=1.8.0=0
- gstreamer=1.8.0=0
- harfbuzz=0.9.39=2
- hdf5=1.8.17=2
- icu=54.1=0
- jbig=2.1=0
- jpeg=9b=0
- libffi=3.2.1=1
- libgcc=5.2.0=0
- libgfortran=3.0.0=1
- libiconv=1.14=0
- libpng=1.6.30=1
- libtiff=4.0.6=3
- libxcb=1.12=1
- libxml2=2.9.4=0
- matplotlib=2.0.2=np111py35_0
- mkl=2017.0.3=0
- numpy=1.11.3=py35_0
- openssl=1.0.2l=0
- pandas=0.20.1=np111py35_0
- patsy=0.4.1=py35_0
- pcre=8.39=1
- pip=9.0.1=py35_1
- pixman=0.34.0=0
- pyparsing=2.2.0=py35_0
- pyqt=5.6.0=py35_2
- python=3.5.4=0
- python-dateutil=2.6.1=py35_0
- pytz=2017.2=py35_0
- qt=5.6.2=5
- readline=6.2=2
- scipy=0.19.0=np111py35_0
- seaborn=0.8=py35_0
- setuptools=36.4.0=py35_1
- sip=4.18=py35_0
- six=1.10.0=py35_0
- sqlite=3.13.0=0
- statsmodels=0.8.0=np111py35_0
- tk=8.5.18=0
- wheel=0.29.0=py35_0
- xz=5.2.3=0
- zlib=1.2.11=0
- opencv3=3.2.0=np111py35_0
- pip:
- bleach==1.5.0
- enum34==1.1.6
- html5lib==0.9999999
- markdown==2.6.11
- protobuf==3.5.1
- tensorflow==1.4.1
- tensorflow-tensorboard==0.4.0
- werkzeug==0.14.1
```
How to solve this problem?
Well... Stack Overflow prompted me to add more details, but I think I have described things clearly. It is a pity Stack Overflow does not support uploading attachments.
|
2018/03/07
|
[
"https://Stackoverflow.com/questions/49154899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7722305/"
] |
I had the same problem and found your question while googling for it.
The `ResolvePackageNotFound` error lists all the packages that are required but not yet installed.
To solve the problem, move them under the `pip` section:
```
name: ex3
channels:
- menpo
- defaults
dependencies:
- cairo=1.14.8=0
- ***
- another dependencies, except not found ones
- pip:
- gst-plugins-base==1.8.0
- bleach==1.5.0
- enum34==1.1.6
- html5lib==0.9999999
- markdown==2.6.11
- protobuf==3.5.1
- tensorflow==1.4.1
- tensorflow-tensorboard==0.4.0
- werkzeug==0.14.1
*** added ***
- gst-plugins-base==1.8.0
- dbus==1.10.20
- opencv3==3.2.0
- qt==5.6.2
- libxcb==1.12
- libgcc==5.2.0
- gstreamer==1.8.0
```
|
I had a similar issue and was able to work around it. My issue wasn't related to pip but rather to the export platform not being the same as the import platform (Ref: nehaljwani's November 2018 answer on <https://github.com/conda/conda/issues/7311>).
@Shixiang Wang's answer points toward part of the solution. The no-build argument allows for more flexibility, but some components are specific to the platform or OS.
Using the no-build export, I was able to identify (from the error message at import time) which libraries were problematic and simply removed them from the YML file. This may not be flawless, but it saves a lot of time compared to starting from scratch.
NOTE: I got a `Pip subprocess error` that interrupted the installation at a given library; this could be overcome with a simple `conda install <library>`. From there I could relaunch the import from the YML file.
|
49,154,899
|
I want to create a virtual environment using conda and a yml file.
Command:
```
conda env create -n ex3 -f env.yml
```
Pressing ENTER gives the following message:
```
ResolvePackageNotFound:
- gst-plugins-base==1.8.0=0
- dbus==1.10.20=0
- opencv3==3.2.0=np111py35_0
- qt==5.6.2=5
- libxcb==1.12=1
- libgcc==5.2.0=0
- gstreamer==1.8.0=0
```
However, I do have those on my Mac. My MacOS: High Sierra 10.13.3
My env.yml file looks like this:
```
name: ex3
channels:
- menpo
- defaults
dependencies:
- cairo=1.14.8=0
- certifi=2016.2.28=py35_0
- cycler=0.10.0=py35_0
- dbus=1.10.20=0
- expat=2.1.0=0
- fontconfig=2.12.1=3
- freetype=2.5.5=2
- glib=2.50.2=1
- gst-plugins-base=1.8.0=0
- gstreamer=1.8.0=0
- harfbuzz=0.9.39=2
- hdf5=1.8.17=2
- icu=54.1=0
- jbig=2.1=0
- jpeg=9b=0
- libffi=3.2.1=1
- libgcc=5.2.0=0
- libgfortran=3.0.0=1
- libiconv=1.14=0
- libpng=1.6.30=1
- libtiff=4.0.6=3
- libxcb=1.12=1
- libxml2=2.9.4=0
- matplotlib=2.0.2=np111py35_0
- mkl=2017.0.3=0
- numpy=1.11.3=py35_0
- openssl=1.0.2l=0
- pandas=0.20.1=np111py35_0
- patsy=0.4.1=py35_0
- pcre=8.39=1
- pip=9.0.1=py35_1
- pixman=0.34.0=0
- pyparsing=2.2.0=py35_0
- pyqt=5.6.0=py35_2
- python=3.5.4=0
- python-dateutil=2.6.1=py35_0
- pytz=2017.2=py35_0
- qt=5.6.2=5
- readline=6.2=2
- scipy=0.19.0=np111py35_0
- seaborn=0.8=py35_0
- setuptools=36.4.0=py35_1
- sip=4.18=py35_0
- six=1.10.0=py35_0
- sqlite=3.13.0=0
- statsmodels=0.8.0=np111py35_0
- tk=8.5.18=0
- wheel=0.29.0=py35_0
- xz=5.2.3=0
- zlib=1.2.11=0
- opencv3=3.2.0=np111py35_0
- pip:
- bleach==1.5.0
- enum34==1.1.6
- html5lib==0.9999999
- markdown==2.6.11
- protobuf==3.5.1
- tensorflow==1.4.1
- tensorflow-tensorboard==0.4.0
- werkzeug==0.14.1
```
How can I solve this problem?
(Stack Overflow prompted me to add more details, but I think I have described things clearly; it is a pity that attachments cannot be uploaded.)
|
2018/03/07
|
[
"https://Stackoverflow.com/questions/49154899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7722305/"
] |
I got the same issue and found a [GitHub issue](https://github.com/conda/conda/issues/6073#issuecomment-356981567) related to this. In the comments, @kalefranz suggested a clean solution: use the `--no-builds` flag with `conda env export`.
```
conda env export --no-builds > environment.yml
```
However, even with build numbers removed, some packages may still have different version numbers on different operating systems. The best approach, I think, is to create a separate environment YML file for each OS.
Hope this helps.
|
There can be another reason for the '**ResolvePackageNotFound**' error -- the versions of the packages you require might live in an old part of the repository that is not searched by default.
The different paths to locations in the Anaconda repositories can be found at:
<https://repo.continuum.io/pkgs/>
My yml file [NW\_BI.yml] is as follows:
```
name: NW_BI
channels:
- 'https://repo.continuum.io/pkgs/free' # Remove this line and it fails!!!
- conda-forge
- defaults
dependencies:
- python=2.7.10
- pandas=0.16.2
- pyodbc=3.0.10
```
**Create using:**
```
conda env create -f 'path to file'\NW_BI.yml
```
I wanted to recreate an old environment!
**Tested with:**
Anaconda3 2019.10
Windows 10
|
49,154,899
|
I want to create a virtual environment using conda and a YML file.
Command:
```
conda env create -n ex3 -f env.yml
```
Pressing ENTER gives the following message:
```
ResolvePackageNotFound:
- gst-plugins-base==1.8.0=0
- dbus==1.10.20=0
- opencv3==3.2.0=np111py35_0
- qt==5.6.2=5
- libxcb==1.12=1
- libgcc==5.2.0=0
- gstreamer==1.8.0=0
```
However, I do have those packages on my Mac (macOS High Sierra 10.13.3).
My env.yml file looks like this:
```
name: ex3
channels:
- menpo
- defaults
dependencies:
- cairo=1.14.8=0
- certifi=2016.2.28=py35_0
- cycler=0.10.0=py35_0
- dbus=1.10.20=0
- expat=2.1.0=0
- fontconfig=2.12.1=3
- freetype=2.5.5=2
- glib=2.50.2=1
- gst-plugins-base=1.8.0=0
- gstreamer=1.8.0=0
- harfbuzz=0.9.39=2
- hdf5=1.8.17=2
- icu=54.1=0
- jbig=2.1=0
- jpeg=9b=0
- libffi=3.2.1=1
- libgcc=5.2.0=0
- libgfortran=3.0.0=1
- libiconv=1.14=0
- libpng=1.6.30=1
- libtiff=4.0.6=3
- libxcb=1.12=1
- libxml2=2.9.4=0
- matplotlib=2.0.2=np111py35_0
- mkl=2017.0.3=0
- numpy=1.11.3=py35_0
- openssl=1.0.2l=0
- pandas=0.20.1=np111py35_0
- patsy=0.4.1=py35_0
- pcre=8.39=1
- pip=9.0.1=py35_1
- pixman=0.34.0=0
- pyparsing=2.2.0=py35_0
- pyqt=5.6.0=py35_2
- python=3.5.4=0
- python-dateutil=2.6.1=py35_0
- pytz=2017.2=py35_0
- qt=5.6.2=5
- readline=6.2=2
- scipy=0.19.0=np111py35_0
- seaborn=0.8=py35_0
- setuptools=36.4.0=py35_1
- sip=4.18=py35_0
- six=1.10.0=py35_0
- sqlite=3.13.0=0
- statsmodels=0.8.0=np111py35_0
- tk=8.5.18=0
- wheel=0.29.0=py35_0
- xz=5.2.3=0
- zlib=1.2.11=0
- opencv3=3.2.0=np111py35_0
- pip:
- bleach==1.5.0
- enum34==1.1.6
- html5lib==0.9999999
- markdown==2.6.11
- protobuf==3.5.1
- tensorflow==1.4.1
- tensorflow-tensorboard==0.4.0
- werkzeug==0.14.1
```
How can I solve this problem?
(Stack Overflow prompted me to add more details, but I think I have described things clearly; it is a pity that attachments cannot be uploaded.)
|
2018/03/07
|
[
"https://Stackoverflow.com/questions/49154899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7722305/"
] |
I got the same issue and found a [GitHub issue](https://github.com/conda/conda/issues/6073#issuecomment-356981567) related to this. In the comments, @kalefranz suggested a clean solution: use the `--no-builds` flag with `conda env export`.
```
conda env export --no-builds > environment.yml
```
However, even with build numbers removed, some packages may still have different version numbers on different operating systems. The best approach, I think, is to create a separate environment YML file for each OS.
Hope this helps.
|
If changing Conda-style version pins (`package=ver=py.*`) to pip style (`package==ver`) by hand feels like too much of a chore, I wrote this small script that deletes the `=py.*` build part from Conda-style pins.
Note that the code below assumes you have already changed `package=ver` to `package==ver`.
```
#!/bin/bash
# Rewrite the entries under "- pip:" in the YML file given as $1,
# trimming "package==ver=py35_0"-style pins down to "package==ver".
COUNT=0
find_pip=0
pip_indent=-1
while IFS= read -r line; do
    COUNT=$(( COUNT + 1 ))
    if echo "${line}" | grep -q -- "- pip:"; then
        find_pip=1
        # Indentation of "- pip:" = number of characters before its "-".
        indent=$(awk -F- '{print length($1)}' <<< "$line")
        pip_indent=$(( indent + 2 ))
    fi
    line_indent=$(awk -F- '{print length($1)}' <<< "$line")
    if [[ ${find_pip} -eq 1 ]] && [[ ${line_indent} -eq ${pip_indent} ]]; then
        # Keep only the first three "="-separated fields: "- package==ver".
        new_line=$(echo "${line}" | cut -d'=' -f-3)
        new_line="      ${new_line}"
        # BSD (macOS) sed syntax; with GNU sed, drop the '' after -i.
        sed -i '' -e "${COUNT}s/.*/${new_line}/" "$1"
    fi
done < "$1"
```
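For reference, the same idea can be sketched in Python, which avoids the BSD/GNU `sed` portability issue. This is an illustrative sketch (the function name `strip_build_suffix` is made up here) and it assumes, like the bash script, that pip entries are indented two spaces deeper than the `- pip:` line.

```python
def strip_build_suffix(text: str) -> str:
    """Trim conda-style pins under '- pip:' down to pip-style 'package==ver'."""
    out = []
    in_pip = False
    pip_indent = None
    for line in text.splitlines():
        indent = len(line) - len(line.lstrip())
        if line.strip() == "- pip:":
            in_pip, pip_indent = True, indent + 2
        elif in_pip and indent == pip_indent and line.lstrip().startswith("- "):
            # "    - pkg==1.0=py35_0" -> "    - pkg==1.0"
            line = "=".join(line.split("=")[:3])
        out.append(line)
    return "\n".join(out)

yml = """  - pip:
    - bleach==1.5.0=py35_0
    - enum34==1.1.6"""
print(strip_build_suffix(yml))
```

Read the YML file, pass its contents through this function, and write the result back out.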
|