qid | question | date | metadata | response_j | response_k | __index_level_0__
|---|---|---|---|---|---|---|
6,256,369
|
I want to create a "full file name" variable from several other variables, but the string concatenation and string format operations aren't behaving the way I expect.
My code is below:
```
file_date = str(input("Enter file date: "))
root_folder = "\\\\SERVER\\FOLDER\\"
file_prefix = "sample_file_"
file_extension = ".txt"
print("")
print("Full file name with concatenation: ")
print(root_folder + file_prefix + file_date + file_extension)
print("Full file name with concatenation, without file_extension: ")
print(root_folder + file_prefix + file_date)
print("")
print("")
print("Full file name with string formatting: ")
print("%s%s%s%s" % (root_folder, file_prefix, file_date, file_extension))
print("Full file name with string formatting, without file_extension: ")
print("%s%s%s" % (root_folder, file_prefix, file_date))
print("")
```
The output when I run the script is:
```
C:\Temp>python test.py
Enter file date: QT1
Full file name with concatenation:
.txtRVER\FOLDER\sample_file_QT1
Full file name with concatenation, without file_extension:
\\SERVER\FOLDER\sample_file_QT1
Full file name with string formatting:
.txtRVER\FOLDER\sample_file_QT1
Full file name with string formatting, without file_extension:
\\SERVER\FOLDER\sample_file_QT1
```
I was expecting it to concatenate the ".txt" at the very end, except it's replacing the first four characters of the string with it instead.
How do I concatenate the extension variable to the end of the string instead of having it replace the first n characters of the string?
In addition to how to solve this particular problem, I'd like to know why I ran into it in the first place. What did I do wrong/what Python 3.2 behavior am I not aware of?
|
2011/06/06
|
[
"https://Stackoverflow.com/questions/6256369",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/467055/"
] |
I think the `input()` call used in your example, like so:
```
file_date = str(input("Enter file date: "))
```
may be returning a carriage return character at the end.
This causes the cursor to go back to the start of the line when you try to print it out.
You may want to trim the return value of input().
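The overwrite effect can be reproduced without `input()` at all (a small demonstration of my own, not from the answer), which also shows why `repr()` is handy for spotting the hidden character:

```python
# A trailing carriage return ('\r') moves the terminal cursor back to the
# start of the line, so the next characters overwrite what was printed first.
file_date = "QT1\r"  # simulates input() returning a stray carriage return
full_name = "\\\\SERVER\\FOLDER\\sample_file_" + file_date + ".txt"

# repr() makes the hidden '\r' visible instead of letting it move the cursor
print(repr(full_name))

# rstrip() removes trailing whitespace, including '\r'
print(repr(file_date.rstrip()))
```

Printing `full_name` normally (without `repr`) reproduces the `.txtRVER\FOLDER\...` output from the question.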
|
Use this line instead to strip the trailing carriage return:
```
file_date = str(input("Enter file date: ")).rstrip()
```
| 11,522
|
27,666,846
|
I tried to run [this example](http://scikit-learn.org/stable/modules/tree.html) for decision tree learning, but got the following error message:
>
> File "coco.py", line 18, in
> graph.write_pdf("iris.pdf")
> File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", line 1602, in
> lambda path, f=frmt, prog=self.prog : self.write(path, format=f, prog=prog))
> File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", line 1696, in write
> dot_fd.write(self.create(prog, format))
> File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pydot.py", line 1727, in create
> 'GraphViz\'s executables not found' )
> pydot.InvocationException: GraphViz's executables not found
>
I saw [this post](https://stackoverflow.com/questions/18438997/why-is-pydot-unable-to-find-graphvizs-executables-in-windows-8) about a similar error, but even when I follow their solution (uninstall and then reinstall graphviz and pydot in the opposite order) the problem continues... I'm using MacOS (Yosemite).
Any ideas? Would appreciate the help.
|
2014/12/27
|
[
"https://Stackoverflow.com/questions/27666846",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4397694/"
] |
On Ubuntu 18.04, installing Graphviz via the system package manager works as well:
```
$ sudo apt-get install graphviz
```
|
On macOS, use Homebrew to install Graphviz rather than pip; see these links:
graphviz information: <http://www.graphviz.org/download/>
brew installation: <https://brew.sh/>
So typing the following in the terminal after you install brew should work:
```
brew install graphviz
```
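As a quick sanity check after installing (a sketch of mine, not from the answer; note `shutil.which` needs Python 3.3+), you can ask Python whether the `dot` executable that pydot shells out to is actually on the PATH:

```python
import shutil

# pydot invokes Graphviz's command-line tools (dot, neato, ...), so they must
# be discoverable on the PATH; shutil.which returns None when one is missing.
dot_path = shutil.which("dot")
if dot_path is None:
    print("Graphviz's 'dot' executable was not found on the PATH")
else:
    print("dot found at:", dot_path)
```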
| 11,523
|
3,076,928
|
Using sqlite3 and Django I want to change to PostgreSQL and keep all data intact. I used `./manage.py dumpdata > dump.json` to dump the data, and changed settings to use PostgreSQL.
With an empty database `./manage.py loaddata dump.json` resulted in errors about tables not existing, so I ran `./manage.py syncdb` and tried again. That results in this error:
```
Problem installing fixture 'dump.json': Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/django/core/management/commands/loaddata.py", line 163, in handle
obj.save()
File "/usr/lib/python2.6/site-packages/django/core/serializers/base.py", line 163, in save
models.Model.save_base(self.object, raw=True)
File "/usr/lib/python2.6/site-packages/django/db/models/base.py", line 495, in save_base
rows = manager.filter(pk=pk_val)._update(values)
File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 448, in _update
return query.execute_sql(None)
File "/usr/lib/python2.6/site-packages/django/db/models/sql/subqueries.py", line 124, in execute_sql
cursor = super(UpdateQuery, self).execute_sql(result_type)
File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 2347, in execute_sql
cursor.execute(sql, params)
File "/usr/lib/python2.6/site-packages/django/db/backends/util.py", line 19, in execute
return self.cursor.execute(sql, params)
IntegrityError: duplicate key value violates unique constraint "django_content_type_app_label_key"
```
Is this not the correct way to move data from one database to another? How should I switch database backend safely?
|
2010/06/19
|
[
"https://Stackoverflow.com/questions/3076928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/66107/"
] |
The problem is simply that you're getting the content types defined twice - once when you do `syncdb`, and once from the exported data you're trying to import. Since you may well have other items in your database that depend on the original content type definitions, I would recommend keeping those.
So, after running `syncdb`, do `manage.py dbshell` and in your database do `TRUNCATE django_content_type;` to remove all the newly-defined content types. Then you shouldn't get any conflicts - on that part of the process, in any case.
|
There is a big discussion about it on the [Django ticket 7052](http://code.djangoproject.com/ticket/7052). The right way now is to use the `--natural` parameter, example: `./manage.py dumpdata --natural --format=xml --indent=2 > fixture.xml`
In order for `--natural` to work with your models, they must implement `natural_key` and `get_by_natural_key`, as described on [the Django documentation regarding natural keys](http://docs.djangoproject.com/en/1.3/topics/serialization/).
Having said that, you might still need to edit the data before importing it with `./manage.py loaddata`. For instance, if your applications changed, `syncdb` will populate the table `django_content_type` and you might want to delete the respective entries from the xml-file before loading it.
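The `natural_key`/`get_by_natural_key` pair mentioned above can be sketched in plain Python (my illustration, no Django dependency; the names mirror Django's convention):

```python
# Plain-Python sketch of Django's natural-key idea: during `dumpdata --natural`,
# objects are serialized by a tuple of meaningful fields instead of their
# database primary key, so pk collisions across databases stop mattering.
class ContentType:
    def __init__(self, app_label, model):
        self.app_label = app_label
        self.model = model

    def natural_key(self):
        # the serializer emits this tuple instead of the pk
        return (self.app_label, self.model)

# stand-in for the model manager's lookup
registry = {("auth", "user"): ContentType("auth", "user")}

def get_by_natural_key(app_label, model):
    # on loaddata, the manager resolves the same tuple back to an object
    return registry[(app_label, model)]

ct = get_by_natural_key("auth", "user")
print(ct.natural_key())  # ('auth', 'user')
```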
| 11,533
|
74,451,481
|
I have built a Python script that uses Python's socket module to create a connection between my Python application and my Python server. I have encrypted the data sent between the two systems. I was wondering whether I should consider anything else related to security against hackers. Could they do something that might steal data from my computer?
Thanks in advance for the effort.
|
2022/11/15
|
[
"https://Stackoverflow.com/questions/74451481",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/20513644/"
] |
To make this slightly more extensible, you can convert it to an object:
```js
function process(input) {
let data = input.split("\n\n"); // split by double new line
data = data.map(i => i.split("\n")); // split each pair
data = data.map(i => i.reduce((obj, cur) => {
const [key, val] = cur.split(": "); // get the key and value
obj[key.toLowerCase()] = val; // lowercase the key so the object has tidy property names
return obj;
}, {}));
return data;
}
const input = `Package: apple
Settings: scim
Architecture: amd32
Size: 2312312312
Package: banana
Architecture: xsl64
Version: 94.3223.2
Size: 23232
Package: orange
Architecture: bbl64
Version: 14.3223.2
Description: Something descrip
more description to orange
Package: friday
SHA215: d3d223d3f2ddf2323d3
Person: XCXCS
Size: 2312312312`;
const data = process(input);
const { version } = data.find(({ package }) => package === "banana"); // query data
console.log("Banana version:", version);
```
|
This kind of text extraction is always pretty fragile, so let me know whether this works for your real inputs... Anyway, if we split by empty lines (which are really just double line breaks, `\n\n`), and then split each "paragraph" by `\n`, we get chunks of lines we can work with.
Then we can just find the chunk that has the banana package, and then inside that chunk, we find the line that contains the version.
Finally, we slice off `Version:` to get the version text.
```js
const text = `\
Package: apple
Settings: scim
Architecture: amd32
Size: 2312312312
Package: banana
Architecture: xsl64
Version: 94.3223.2
Size: 23232
Package: orange
Architecture: bbl64
Version: 14.3223.2
Description: Something descrip
more description to orange
SHA215: d3d223d3f2ddf2323d3
Person: XCXCS
Size: 2312312312
`;
const chunks = text.split("\n\n").map((p) => p.split("\n"));
const version = chunks
.find((info) =>
info.some((line) => line === "Package: banana")
)
.find((line) =>
line.startsWith("Version: ")
)
.slice("Version: ".length);
console.log(version);
```
| 11,538
|
73,430,007
|
I'm building a website using Python and Django, but when I look in the admin, the names of the model items aren't showing up.
[](https://i.stack.imgur.com/bPvtX.png)
[](https://i.stack.imgur.com/Hfm0y.png)
So, the objects that I am building in the admin aren't showing their names.
admin.py:
```
from django.contrib import admin
from .models import Article, Author
# Register your models here.
@admin.register(Article)
class ArticleAdmin(admin.ModelAdmin):
list_display = ['title', 'main_txt', 'date_of_publication']
list_display_links = None
list_editable = ['title', 'main_txt']
def __str__(self):
return self.title
@admin.register(Author)
class AuthorAdmin(admin.ModelAdmin):
list_display = ['first_name', 'last_name', 'join_date', 'email', 'phone_num']
list_display_links = ['join_date']
list_editable = ['email', 'phone_num', ]
def __str__(self):
return f"{self.first_name} {self.last_name[0]}"
```
models.py:
```
from django.db import models
# Create your models here.
class Author(models.Model):
first_name = models.CharField(max_length=100)
last_name = models.CharField(max_length=100)
date_of_birth = models.DateField()
email = models.CharField(max_length=300)
phone_num = models.CharField(max_length=15)
join_date = models.DateField()
participated_art = models.ManyToManyField('Article', blank=True)
class Article(models.Model):
title = models.CharField(max_length=500)
date_of_publication = models.DateField()
creaters = models.ManyToManyField('Author', blank=False)
main_txt = models.TextField()
notes = models.TextField()
```
|
2022/08/20
|
[
"https://Stackoverflow.com/questions/73430007",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19771403/"
] |
Add the [`__str__()`](https://docs.python.org/3/reference/datamodel.html#object.__str__) method to the model itself rather than to admin.py, like so:
```py
class Author(models.Model):
first_name = models.CharField(max_length=100)
last_name = models.CharField(max_length=100)
date_of_birth = models.DateField()
email = models.CharField(max_length=300)
phone_num = models.CharField(max_length=15)
join_date = models.DateField()
participated_art = models.ManyToManyField('Article', blank=True)
def __str__(self):
return f"{self.first_name} {self.last_name}"
class Article(models.Model):
title = models.CharField(max_length=500)
date_of_publication = models.DateField()
creaters = models.ManyToManyField('Author', blank=False)
main_txt = models.TextField()
notes = models.TextField()
def __str__(self):
return self.title
```
|
You need to specify a `def __str__(self)` method.
Example:
```
class Author(models.Model):
....
def __str__(self):
return self.first_name + ' ' + self.last_name
```
| 11,539
|
57,124,038
|
I am a beginner with Python and I am learning how to work with images.
Given a square image (NxN), I would like to turn it into an (N+2)x(N+2) image with a new layer of zeros around it. I would prefer not to use numpy and to stick with basic Python. Any idea how to do so?
Right now, I use .extend to add zeros on the right and bottom, but I can't do the same at the top and left.
Thank you for your help!
|
2019/07/20
|
[
"https://Stackoverflow.com/questions/57124038",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10154400/"
] |
We can create a padding function that adds layers of zeros around an image (padding it).
```
def pad(img, layers):
    # img should be rectangular; build each zero row with a comprehension so
    # the rows are independent lists, not aliases of a single list
    zero_rows = lambda: [[0] * (len(img[0]) + 2 * layers) for _ in range(layers)]
    return zero_rows() + \
           [[0] * layers + r + [0] * layers for r in img] + \
           zero_rows()
```
We can test with a sample image, such as:
```
i = [[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]
```
So,
```
pad(i,2)
```
gives:
```
[[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 2, 3, 0, 0],
[0, 0, 4, 5, 6, 0, 0],
[0, 0, 7, 8, 9, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]]
```
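A general pitfall when building zero matrices in plain Python (my aside, not from the answer): `[[0]*w]*h` repeats the *same* row object, so mutating one cell silently changes every row. Independent rows need a comprehension:

```python
# [[0]*3]*2 creates two references to one list; mutating aliased[0][0]
# also changes aliased[1][0].
aliased = [[0] * 3] * 2
aliased[0][0] = 9
print(aliased)  # [[9, 0, 0], [9, 0, 0]]

# A comprehension builds a fresh list per row, so only one cell changes.
independent = [[0] * 3 for _ in range(2)]
independent[0][0] = 9
print(independent)  # [[9, 0, 0], [0, 0, 0]]
```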
|
I'm assuming that by image we're talking about a matrix; in that case you could do this:
```
img = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
row_len = len(img)
col_len = len(img[0])
new_image = list()
for n in range(row_len + 2): # adding two extra rows
    if n == 0 or n == row_len + 1:
        new_image.append([0] * (col_len + 2)) # first and last rows are all zeros
    else:
        new_image.append([0] + img[n - 1] + [0]) # a zero at the front and back of each original row
print(new_image) # [[0, 0, 0, 0, 0], [0, 5, 5, 5, 0], [0, 5, 5, 5, 0], [0, 5, 5, 5, 0], [0, 0, 0, 0, 0]]
```
| 11,540
|
53,440,086
|
I am trying to right-click with the mouse and choose "Save image as..." in Selenium with Python.
I was able to perform the right click with the following method; however, the actions after the right click no longer work. How can I solve this problem?
```
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys
from selenium import webdriver
driver.get(url)
# get the image source
img = driver.find_element_by_xpath('//img')
actionChains = ActionChains(driver)
actionChains.context_click(img).send_keys(Keys.ARROW_DOWN).send_keys(Keys.ARROW_DOWN).send_keys(Keys.RETURN).perform()
```
|
2018/11/23
|
[
"https://Stackoverflow.com/questions/53440086",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8263870/"
] |
You can get the same functionality using pyautogui (assuming you are on Windows):
pyautogui.position() # prints the current cursor position, e.g. (187, 567)
pyautogui.moveTo(100, 200) # move to the location where the right click is required
pyautogui.click(button='right')
pyautogui.hotkey('ctrl', 'c') # Ctrl+C on the keyboard (copy shortcut)
Refer to the link below for further details:
<https://pyautogui.readthedocs.io/en/latest/keyboard.html>
|
You first have to move to the element on which you want to perform the context click:
```py
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys
from selenium import webdriver
driver.get(url)
# get the image source
img = driver.find_element_by_xpath('//img')
actionChains = ActionChains(driver)
actionChains.move_to_element(img).context_click().send_keys(Keys.ARROW_DOWN).send_keys(Keys.ARROW_DOWN).send_keys(Keys.RETURN).perform()
```
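If the goal is simply to save the image file, an alternative (my sketch, not from either answer) avoids the native context menu entirely: read the element's `src` attribute and download it directly. `get_attribute("src")` is a real WebElement method; the URL below is a placeholder:

```python
import urllib.request

# In the real script this would come from: img.get_attribute("src")
img_url = "https://example.com/some/image.png"  # placeholder URL

# Derive a local file name from the URL and download the file directly,
# bypassing the browser's "Save image as..." menu entirely.
file_name = img_url.rsplit("/", 1)[-1]
# urllib.request.urlretrieve(img_url, file_name)  # uncomment to actually download
print("would save as:", file_name)
```

This sidesteps the keyboard-driven context menu, which Selenium cannot reliably control because it is a native OS menu, not part of the page.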
| 11,541
|
32,406,711
|
I'm trying to write a Python script using BeautifulSoup that crawls through the webpage <http://tbc-python.fossee.in/completed-books/> and collects the necessary data from it. Basically it has to fetch all the `page loading errors, SyntaxErrors, NameErrors, AttributeErrors, etc.` present in the chapters of all the books into a text file `errors.txt`. There are around 273 books. The script does the task well, and my connection speed is good, but the code takes a long time to scrape through all the books. Please help me optimize the script with the necessary tweaks, maybe the use of functions, etc. Thanks.
```
import urllib2, urllib
from bs4 import BeautifulSoup
website = "http://tbc-python.fossee.in/completed-books/"
soup = BeautifulSoup(urllib2.urlopen(website))
errors = open('errors.txt','w')
# Completed books webpage has data stored in table format
BookTable = soup.find('table', {'class': 'table table-bordered table-hover'})
for BookCount, BookRow in enumerate(BookTable.find_all('tr'), start = 1):
# Grab book names
BookCol = BookRow.find_all('td')
BookName = BookCol[1].a.string.strip()
print "%d: %s" % (BookCount, BookName)
# Open each book
BookSrc = BeautifulSoup(urllib2.urlopen('http://tbc-python.fossee.in%s' %(BookCol[1].a.get("href"))))
ChapTable = BookSrc.find('table', {'class': 'table table-bordered table-hover'})
# Check if each chapter page opens, if not store book & chapter name in error.txt
for ChapRow in ChapTable.find_all('tr'):
ChapCol = ChapRow.find_all('td')
ChapName = (ChapCol[0].a.string.strip()).encode('ascii', 'ignore') # ignores error : 'ascii' codec can't encode character u'\xef'
ChapLink = 'http://tbc-python.fossee.in%s' %(ChapCol[0].a.get("href"))
try:
ChapSrc = BeautifulSoup(urllib2.urlopen(ChapLink))
except:
print '\t%s\n\tPage error' %(ChapName)
errors.write("Page; %s;%s;%s;%s" %(BookCount, BookName, ChapName, ChapLink))
continue
# Check for errors in chapters and store the errors in error.txt
EgError = ChapSrc.find_all('div', {'class': 'output_subarea output_text output_error'})
if EgError:
for e, i in enumerate(EgError, start=1):
errors.write("Example;%s;%s;%s;%s\n" %(BookCount,BookName,ChapName,ChapLink)) if 'ipython-input' or 'Error' in i.pre.get_text() else None
print '\t%s\n\tExample errors: %d' %(ChapName, e)
errors.close()
```
|
2015/09/04
|
[
"https://Stackoverflow.com/questions/32406711",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5283513/"
] |
Try to change your buttons to:
```
{
display: "Hello There",
action: functionA
}
```
And to invoke:
```
btn[i].action();
```
I changed the name `function` to `action` because `function` is a reserved word and cannot be used as an object property name.
|
You can store references to the functions in your array; just lose the `"` signs around their names *(which currently make them strings instead of function references)*, creating the array like this:
```
var btn = [{
x: 50,
y: 100,
width: 80,
height: 50,
display: "Hello There",
function: functionA
}, {
x: 150,
y: 100,
width: 80,
height: 50,
display: "Why Not?",
function: functionB
}]
```
Then you can call either by writing `btn[i].function()`.
| 11,544
|
44,412,844
|
I have been trying to use CloudFormation to deploy to API Gateway, however, I constantly run into the same issue with my method resources. The stack deployments keep failing with 'Invalid Resource identifier specified'.
Here is my method resource from my CloudFormation template:
```
"UsersPut": {
"Type": "AWS::ApiGateway::Method",
"Properties": {
"ResourceId": "UsersResource",
"RestApiId": "MyApi",
"ApiKeyRequired": true,
"AuthorizationType": "NONE",
"HttpMethod": "PUT",
"Integration": {
"Type": "AWS_PROXY",
"IntegrationHttpMethod": "POST",
"Uri": {
"Fn::Join": ["", ["arn:aws:apigateway:", {
"Ref": "AWS::Region"
}, ":lambda:path/2015-03-31/functions/", {
"Fn::GetAtt": ["MyLambdaFunc", "Arn"]
}, "/invocations"]]
}
},
"MethodResponses": [{
"StatusCode": 200
}]
}
}
```
Is anyone able to help me figure out why this keeps failing the stack deployment?
UPDATE: I forgot to mention that I had also tried using references to add the resource ID, that also gave me the same error:
```
"UsersPut": {
"Type": "AWS::ApiGateway::Method",
"Properties": {
"ResourceId": {
"Ref": "UsersResource"
},
"RestApiId": "MyApi",
"ApiKeyRequired": true,
"AuthorizationType": "NONE",
"HttpMethod": "PUT",
"Integration": {
"Type": "AWS_PROXY",
"IntegrationHttpMethod": "POST",
"Uri": {
"Fn::Join": ["", ["arn:aws:apigateway:", {
"Ref": "AWS::Region"
}, ":lambda:path/2015-03-31/functions/", {
"Fn::GetAtt": ["MyLambdaFunc", "Arn"]
}, "/invocations"]]
}
},
"MethodResponses": [{
"StatusCode": 200
}]
}
}
```
Here is the full CloudFormation template:
```
{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"LambdaDynamoDBRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}]
},
"Path": "/",
"Policies": [{
"PolicyName": "DynamoReadWritePolicy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [{
"Sid": "1",
"Action": [
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:Query",
"dynamodb:Scan",
"dynamodb:UpdateItem"
],
"Effect": "Allow",
"Resource": "*"
}, {
"Sid": "2",
"Resource": "*",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Effect": "Allow"
}]
}
}]
}
},
"MyFirstLambdaFn": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"S3Bucket": "myfirstlambdafn",
"S3Key": "lambda_handler.py.zip"
},
"Description": "",
"FunctionName": "MyFirstLambdaFn",
"Handler": "lambda_function.lambda_handler",
"MemorySize": 512,
"Role": {
"Fn::GetAtt": [
"LambdaDynamoDBRole",
"Arn"
]
},
"Runtime": "python2.7",
"Timeout": 3
},
"DependsOn": "LambdaDynamoDBRole"
},
"MySecondLambdaFn": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"S3Bucket": "mysecondlambdafn",
"S3Key": "lambda_handler.py.zip"
},
"Description": "",
"FunctionName": "MySecondLambdaFn",
"Handler": "lambda_function.lambda_handler",
"MemorySize": 512,
"Role": {
"Fn::GetAtt": [
"LambdaDynamoDBRole",
"Arn"
]
},
"Runtime": "python2.7",
"Timeout": 3
},
"DependsOn": "LambdaDynamoDBRole"
},
"MyApi": {
"Type": "AWS::ApiGateway::RestApi",
"Properties": {
"Name": "Project Test API",
"Description": "Project Test API",
"FailOnWarnings": true
}
},
"FirstUserPropertyModel": {
"Type": "AWS::ApiGateway::Model",
"Properties": {
"ContentType": "application/json",
"Name": "FirstUserPropertyModel",
"RestApiId": {
"Ref": "MyApi"
},
"Schema": {
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "FirstUserPropertyModel",
"type": "object",
"properties": {
"Email": {
"type": "string"
}
}
}
}
},
"SecondUserPropertyModel": {
"Type": "AWS::ApiGateway::Model",
"Properties": {
"ContentType": "application/json",
"Name": "SecondUserPropertyModel",
"RestApiId": {
"Ref": "MyApi"
},
"Schema": {
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "SecondUserPropertyModel",
"type": "object",
"properties": {
"Name": {
"type": "string"
}
}
}
}
},
"ErrorCfn": {
"Type": "AWS::ApiGateway::Model",
"Properties": {
"ContentType": "application/json",
"Name": "ErrorCfn",
"RestApiId": {
"Ref": "MyApi"
},
"Schema": {
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "Error Schema",
"type": "object",
"properties": {
"message": {
"type": "string"
}
}
}
}
},
"UsersResource": {
"Type": "AWS::ApiGateway::Resource",
"Properties": {
"RestApiId": {
"Ref": "MyApi"
},
"ParentId": {
"Fn::GetAtt": ["MyApi", "RootResourceId"]
},
"PathPart": "users"
}
},
"UsersPost": {
"Type": "AWS::ApiGateway::Method",
"Properties": {
"ResourceId": {
"Ref": "UsersResource"
},
"RestApiId": "MyApi",
"ApiKeyRequired": true,
"AuthorizationType": "NONE",
"HttpMethod": "POST",
"Integration": {
"Type": "AWS_PROXY",
"IntegrationHttpMethod": "POST",
"Uri": {
"Fn::Join": ["", ["arn:aws:apigateway:", {
"Ref": "AWS::Region"
}, ":lambda:path/2015-03-31/functions/", {
"Fn::GetAtt": ["MyFirstLambdaFn", "Arn"]
}, "/invocations"]]
}
},
"MethodResponses": [{
"ResponseModels": {
"application/json": {
"Ref": "FirstUserPropertyModel"
}
},
"StatusCode": 200
}, {
"ResponseModels": {
"application/json": {
"Ref": "ErrorCfn"
}
},
"StatusCode": 404
}, {
"ResponseModels": {
"application/json": {
"Ref": "ErrorCfn"
}
},
"StatusCode": 500
}]
}
},
"UsersPut": {
"Type": "AWS::ApiGateway::Method",
"Properties": {
"ResourceId": {
"Ref": "UsersResource"
},
"RestApiId": "MyApi",
"ApiKeyRequired": true,
"AuthorizationType": "NONE",
"HttpMethod": "PUT",
"Integration": {
"Type": "AWS_PROXY",
"IntegrationHttpMethod": "POST",
"Uri": {
"Fn::Join": ["", ["arn:aws:apigateway:", {
"Ref": "AWS::Region"
}, ":lambda:path/2015-03-31/functions/", {
"Fn::GetAtt": ["MySecondLambdaFn", "Arn"]
}, "/invocations"]]
}
},
"MethodResponses": [{
"ResponseModels": {
"application/json": {
"Ref": "SecondUserPropertyModel"
}
},
"StatusCode": 200
}, {
"ResponseModels": {
"application/json": {
"Ref": "ErrorCfn"
}
},
"StatusCode": 404
}, {
"ResponseModels": {
"application/json": {
"Ref": "ErrorCfn"
}
},
"StatusCode": 500
}]
}
},
"RestApiDeployment": {
"Type": "AWS::ApiGateway::Deployment",
"Properties": {
"RestApiId": {
"Ref": "MyApi"
},
"StageName": "Prod"
},
"DependsOn": ["UsersPost", "UsersPut"]
}
},
"Description": "Project description"
```
}
|
2017/06/07
|
[
"https://Stackoverflow.com/questions/44412844",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3067870/"
] |
`ResourceId` must be a reference to a CloudFormation resource, not a plain string, e.g.:
```
ResourceId:
Ref: UsersResource
```
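Note that in the template shown in the question, `RestApiId` is also given as the plain string `"MyApi"`; it needs a reference for the same reason:

```
"RestApiId": {
    "Ref": "MyApi"
}
```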
|
When you create an API (the `AWS::ApiGateway::RestApi` stack resource), API Gateway automatically creates a root resource for the path `/`. To get the id of `MyApi`'s root resource, use:
```
"ResourceId": {
    "Fn::GetAtt": [
        "MyApi",
        "RootResourceId"
    ]
}
```
| 11,546
|
18,150,518
|
So I have a Python script which generates an image and saves over the old image, which was the background image.
I tried to make it run using `crontab`, but couldn't get that to work, so now I just have a bash script which runs once in my `.bashrc` when I first log in (I have a `if [ firstRun ]` kind of thing in there).
The problem is that every now and then, when the background updates, it flashes black before it does, which is not very nice!
I currently have it running once a second, but I don't think it's the Python that is causing the black screens; it's more likely the way the image is changed over...
Is there a way I can prevent these ugly black screens between updates?
Here's all the code to run it, if you want to try it out...
```
from PIL import Image, ImageDraw, ImageFilter
import colorsys
from random import gauss
xSize, ySize = 1600,900
im = Image.new('RGBA', (xSize, ySize), (0, 0, 0, 0))
draw = ImageDraw.Draw(im)
class Cube(object):
def __init__(self):
self.tl = (0,0)
self.tm = (0,0)
self.tr = (0,0)
self.tb = (0,0)
self.bl = (0,0)
self.bm = (0,0)
self.br = (0,0)
def intify(self):
for prop in [self.tl, self.tm, self.tr, self.tb, self.bl, self.bm, self.br]:
prop = [int(i) for i in prop]
def drawCube((x,y), size, colour=(255,0,0)):
p = Cube()
colours = [list(colorsys.rgb_to_hls(*[c/255.0 for c in colour])) for _ in range(3)]
colours[0][1] -= 0
colours[1][1] -= 0.2
colours[2][1] -= 0.4
colours = [tuple([int(i*255) for i in colorsys.hls_to_rgb(*colour)]) for colour in colours]
p.tl = x,y #Top Left
p.tm = x+size/2, y-size/4 #Top Middle
p.tr = x+size, y #Top Right
p.tb = x+size/2, y+size/4 #Top Bottom
p.bl = x, y+size/2 #Bottom Left
p.bm = x+size/2, y+size*3/4 #Bottom Middle
p.br = x+size, y+size/2 #Bottom Right
p.intify()
draw.polygon((p.tl, p.tm, p.tr, p.tb), fill=colours[0])
draw.polygon((p.tl, p.bl, p.bm, p.tb), fill=colours[1])
draw.polygon((p.tb, p.tr, p.br, p.bm), fill=colours[2])
lineColour = (0,0,0)
lineThickness = 2
draw.line((p.tl, p.tm), fill=lineColour, width=lineThickness)
draw.line((p.tl, p.tb), fill=lineColour, width=lineThickness)
draw.line((p.tm, p.tr), fill=lineColour, width=lineThickness)
draw.line((p.tb, p.tr), fill=lineColour, width=lineThickness)
draw.line((p.tl, p.bl), fill=lineColour, width=lineThickness)
draw.line((p.tb, p.bm), fill=lineColour, width=lineThickness)
draw.line((p.tr, p.br), fill=lineColour, width=lineThickness)
draw.line((p.bl, p.bm), fill=lineColour, width=lineThickness)
draw.line((p.bm, p.br), fill=lineColour, width=lineThickness)
# -------- Actually do the drawing
size = 100
#Read in file of all colours, and random walk them
with open("/home/will/Documents/python/cubeWall/oldColours.dat") as coloursFile:
for line in coloursFile:
oldColours = [int(i) for i in line.split()]
oldColours = [int(round(c + gauss(0,1.5)))%255 for c in oldColours]
colours = [[ int(c*255) for c in colorsys.hsv_to_rgb(i/255.0, 1, 1)] for i in oldColours]
with open("/home/will/Documents/python/cubeWall/oldColours.dat", "w") as coloursFile:
coloursFile.write(" ".join([str(i) for i in oldColours]) + "\n")
for i in range(xSize/size+2):
for j in range(2*ySize/size+2):
if j%3 == 0:
drawCube((i*size,j*size/2), size, colour=colours[(i+j)%3])
elif j%3 == 1:
drawCube(((i-0.5)*size,(0.5*j+0.25)*size), size, colour=colours[(i+j)%3])
im2 = im.filter(ImageFilter.SMOOTH)
im2.save("cubes.png")
#im2.show()
```
and then just run this:
```
#!/bin/sh
while [ 1 ]
do
python drawCubes.py
sleep 1
done
```
And set the desktop image to be `cubes.png`
|
2013/08/09
|
[
"https://Stackoverflow.com/questions/18150518",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/432913/"
] |
Well, you can change the current wallpaper (on GNOME 3 compatible desktops) by running:
```
import os
os.system("gsettings set org.gnome.desktop.background picture-uri file://%(path)s" % {'path':absolute_path})
os.system("gsettings set org.gnome.desktop.background picture-options wallpaper")
```
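A slightly safer variant of the same calls (my sketch; same gsettings keys as above) uses `subprocess.run` with an argument list instead of `os.system` string formatting, which avoids shell-quoting problems with spaces in the path:

```python
import subprocess

def wallpaper_uri(absolute_path):
    # gsettings expects a file:// URI rather than a bare filesystem path
    return "file://" + absolute_path

def set_wallpaper(absolute_path):
    # passing an argument list means the path is never re-parsed by a shell
    subprocess.run(["gsettings", "set", "org.gnome.desktop.background",
                    "picture-uri", wallpaper_uri(absolute_path)])

# set_wallpaper("/home/will/Documents/python/cubeWall/cubes.png")
```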
|
If you're using MATE, you're using a fork of GNOME 2.x. The method I found for GNOME 2 is: `gconftool-2 --type string --set /desktop/gnome/background/picture_filename <absolute image path>`.
The method we tried before would have worked in Gnome Shell.
| 11,548
|
64,963,033
|
I am using the code below to execute a Python script every 5 minutes, but each run starts a bit later than the previous one.
For example, if I execute it at exactly 9:00:00 AM, the next run happens at 9:05:25 AM and the one after that at 9:10:45 AM. As the script keeps running every 5 minutes for a long time, it can no longer record at the exact time.
```
import schedule
import time
from datetime import datetime
# Functions setup
def geeks():
print("Shaurya says Geeksforgeeks")
now = datetime.now()
current_time = now.strftime("%H:%M:%S")
print("Current Time =", current_time)
# Task scheduling
# geeks() is called every 2 minutes.
schedule.every(2).minutes.do(geeks)
# Loop so that the scheduling task
# keeps on running all time.
while True:
# Checks whether a scheduled task
# is pending to run or not
schedule.run_pending()
time.sleep(1)
```
Is there an easy fix so that the script runs at exactly 5-minute intervals?
Please don't suggest crontab, as I have tried it and it doesn't work for me.
I am running the Python script on different operating systems.
|
2020/11/23
|
[
"https://Stackoverflow.com/questions/64963033",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14642703/"
] |
You can use the `ObservableList.contains` method to quickly check whether a similar item is already present:
```
public void diplaysubjects() {
String item = select_subject.getSelectionModel().getSelectedItem();
ObservableList<String> courses = course_list.getItems(); // Only here to clarify the code
if (!courses.contains(item))
courses.add(item);
}
```
I don't know which pattern you used for your code, but if it is MVC-like, I suggest working directly with the model's observable list instead of calling `course_list.getItems()`.
|
I suggest using an `ObservableList` with the `ListView`, like this:
```
public class Controller implements Initializable {
@FXML
private ComboBox<String> select_subject;
@FXML
private ListView<String> course_list;
private ObservableList<String> list = FXCollections.observableArrayList();
@Override
public void initialize(URL url, ResourceBundle resourceBundle) {
course_list.setItems(list);
}
public void diplaysubjects() {
String course = select_subject.getSelectionModel().getSelectedItem();
//Instead of adding directly to listview add to observablelist
//course_list.getItems().add(course.toString());
list.add(course);
}
//To check the list contains the item
private boolean doesExists(String element){
return list.contains(element);
}
}
```
| 11,549
|
58,040,556
|
I have a Pandas DataFrame with date columns. The data is imported from a csv file. When I try to fit the regression model, I get the error `ValueError: could not convert string to float: '2019-08-30 07:51:21'`.
How can I get rid of it?
Here is the dataframe.
**source.csv**
```
event_id tsm_id rssi_ts rssi batl batl_ts ts_diff
0 417736018 4317714 2019-09-05 20:00:07 140 100.0 2019-09-05 18:11:49 01:48:18
1 417735986 4317714 2019-09-05 20:00:07 132 100.0 2019-09-05 18:11:49 01:48:18
2 418039386 4317714 2019-09-06 01:00:08 142 100.0 2019-09-06 00:11:50 00:48:18
3 418039385 4317714 2019-09-06 01:00:08 122 100.0 2019-09-06 00:11:50 00:48:18
4 420388010 4317714 2019-09-07 15:31:07 143 100.0 2019-09-07 12:11:50 03:19:17
```
Here is my code:
```
model = pd.read_csv("source.csv")
model.describe()
event_id tsm_id. rssi batl
count 5.000000e+03 5.000000e+03 5000.000000 3784.000000
mean 3.982413e+08 4.313492e+06 168.417200 94.364429
std 2.200899e+07 2.143570e+03 35.319516 13.609917
min 3.443084e+08 4.310312e+06 0.000000 16.000000
25% 3.852882e+08 4.310315e+06 144.000000 97.000000
50% 4.007999e+08 4.314806e+06 170.000000 100.000000
75% 4.171803e+08 4.314815e+06 195.000000 100.000000
max 4.258451e+08 4.317714e+06 242.000000 100.000000
labels_b = np.array(model['batl'])
features_r= model.drop('batl', axis = 1)
features_r = np.array(features_r)
from sklearn.model_selection import train_test_split
train_features, test_features, train_labels, test_labels = train_test_split(features_r,
labels_b, test_size = 0.25, random_state = 42)
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators = 1000, random_state = 42)
rf.fit(train_features, train_labels);
```
**Here is error msg:**
```
ValueError Traceback (most recent call last)
<ipython-input-28-bc774a9d8239> in <module>
4 rf = RandomForestRegressor(n_estimators = 1000, random_state = 42)
5 # Train the model on training data
----> 6 rf.fit(train_features, train_labels);
~/ml/env/lib/python3.7/site-packages/sklearn/ensemble/forest.py in fit(self, X, y, sample_weight)
247
248 # Validate or convert input data
--> 249 X = check_array(X, accept_sparse="csc", dtype=DTYPE)
250 y = check_array(y, accept_sparse='csc', ensure_2d=False, dtype=None)
251 if sample_weight is not None:
~/ml/env/lib/python3.7/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator)
494 try:
495 warnings.simplefilter('error', ComplexWarning)
--> 496 array = np.asarray(array, dtype=dtype, order=order)
497 except ComplexWarning:
498 raise ValueError("Complex data not supported\n"
~/ml/env/lib/python3.7/site-packages/numpy/core/numeric.py in asarray(a, dtype, order)
536
537 """
--> 538 return array(a, dtype, copy=False, order=order)
539
540
ValueError: could not convert string to float: '2019-08-30 07:51:21'
```
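A common fix for this error (an assumption, not stated in the original post) is to restrict the feature matrix to numeric columns before fitting, since scikit-learn cannot convert timestamp strings to floats. Using a small hypothetical frame mirroring the columns above:

```python
import pandas as pd

# Hypothetical mini-frame with one numeric and one string timestamp column
df = pd.DataFrame({
    "rssi": [140, 132],
    "rssi_ts": ["2019-09-05 20:00:07", "2019-09-05 20:00:07"],
})

# Keep only numeric columns; the timestamp strings are dropped
numeric = df.select_dtypes(include="number")
print(list(numeric.columns))  # ['rssi']
```

Alternatively, the timestamps could be converted to numeric features (e.g. epoch seconds via `pd.to_datetime`) rather than dropped, if they carry signal.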
|
2019/09/21
|
[
"https://Stackoverflow.com/questions/58040556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/451435/"
] |
The last entry in your list is missing the item property which should be the URL for the page it references.
I suspect the last one is the page itself, which is not needed in the list anyhow.
|
Just for the record, we found the same error message, which seemed to be because the referred URL was not available to the tool (it was inside our company network and not publicly available).
Pointing to a public, valid URL fixed the error message on our side.
| 11,551
|
30,400,777
|
I'm trying to install Pylzma via pip on python 2.7.9 and I'm getting the following error:
```
C:\Python27\Scripts>pip.exe install pylzma
Downloading/unpacking pylzma
Running setup.py (path:c:\users\username\appdata\local\temp\pip_build_username\pylzma\setup.py) egg_info for package pylzma
warning: no files found matching '*.py' under directory 'test'
warning: no files found matching '*.7z' under directory 'test'
no previously-included directories found matching 'src\sdk.orig'
Installing collected packages: pylzma
Running setup.py install for pylzma
adding support for multithreaded compression
building 'pylzma' extension
C:\Users\username\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -DWITH_COMPAT=1 -DPYLZMA_VERSION="0.4.6" -DCOMPRESS_MF_MT=1 -Isrc/sdk -IC:\Python27\include -IC:\Python27\PC /Tcsrc/pylzma/pylzma.c /Fobuild\temp.win32-2.7\Release\src/pylzma/pylzma.obj /MT
cl : Command line warning D9025 : overriding '/MD' with '/MT'
pylzma.c
src/pylzma/pylzma.c(284) : error C2440: 'function' : cannot convert from 'double' to 'const char *'
src/pylzma/pylzma.c(284) : warning C4024: 'PyModule_AddStringConstant' : different types for formal and actual parameter 3
src/pylzma/pylzma.c(284) : error C2143: syntax error : missing ')' before 'constant'
src/pylzma/pylzma.c(284) : error C2059: syntax error : ')'
error: command 'C:\\Users\\username\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2
Complete output from command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\username\\appdata\\local\\temp\\pip_build_username\\pylzma\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\username\appdata\local\temp\pip-4onyx_-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win32-2.7
copying py7zlib.py -> build\lib.win32-2.7
running build_ext
adding support for multithreaded compression
building 'pylzma' extension
creating build\temp.win32-2.7
creating build\temp.win32-2.7\Release
creating build\temp.win32-2.7\Release\src
creating build\temp.win32-2.7\Release\src\pylzma
creating build\temp.win32-2.7\Release\src\sdk
creating build\temp.win32-2.7\Release\src\7zip
creating build\temp.win32-2.7\Release\src\7zip\C
creating build\temp.win32-2.7\Release\src\compat
C:\Users\username\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -DWITH_COMPAT=1 -DPYLZMA_VERSION="0.4.6" -DCOMPRESS_MF_MT=1 -Isrc/sdk -IC:\Python27\include -IC:\Python27\PC /Tcsrc/pylzma/pylzma.c /Fobuild\temp.win32-2.7\Release\src/pylzma/pylzma.obj /MT
cl : Command line warning D9025 : overriding '/MD' with '/MT'
pylzma.c
src/pylzma/pylzma.c(284) : error C2440: 'function' : cannot convert from 'double' to 'const char *'
src/pylzma/pylzma.c(284) : warning C4024: 'PyModule_AddStringConstant' : different types for formal and actual parameter 3
src/pylzma/pylzma.c(284) : error C2143: syntax error : missing ')' before 'constant'
src/pylzma/pylzma.c(284) : error C2059: syntax error : ')'
error: command 'C:\\Users\\username\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2
----------------------------------------
Cleaning up...
Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\username\\appdata\\local\\temp\\pip_build_username\\pylzma\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\username\appdata\local\temp\pip-4onyx_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\username\appdata\local\temp\pip_build_username\pylzma
Storing debug log for failure in C:\Users\username\pip\pip.log
```
Here is the debug log:
```
------------------------------------------------------------
C:\Python27\Scripts\pip run on 05/22/15 09:32:07
Downloading/unpacking pylzma
Getting page https://pypi.python.org/simple/pylzma/
URLs to search for versions for pylzma:
* https://pypi.python.org/simple/pylzma/
Analyzing links from page https://pypi.python.org/simple/pylzma/
Skipping link https://pypi.python.org/packages/2.3/p/pylzma/pylzma-0.3.0-py2.3-win32.egg#md5=68b539bc322e44e5a087c79c25d82543 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Skipping link https://pypi.python.org/packages/2.3/p/pylzma/pylzma-0.3.0.win32-py2.3.exe#md5=cbbaf0541e32c8d1394eea89ce3910b7 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .exe
Skipping link https://pypi.python.org/packages/2.3/p/pylzma/pylzma-0.4.1-py2.3-win32.egg#md5=03829ce881b4627d6ded08c138cc8997 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Skipping link https://pypi.python.org/packages/2.3/p/pylzma/pylzma-0.4.2-py2.3-win32.egg#md5=1ae4940ad183f220e5102e32a7f5b496 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Skipping link https://pypi.python.org/packages/2.3/p/pylzma/pylzma-0.4.4-py2.3-win32.egg#md5=26849b5afede8a44117e6b6cb0e4fc4d (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Skipping link https://pypi.python.org/packages/2.4/p/pylzma/pylzma-0.3.0-py2.4-win32.egg#md5=221208a0e4e9bcbffbb2c0ce80eafc11 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Skipping link https://pypi.python.org/packages/2.4/p/pylzma/pylzma-0.3.0.win32-py2.4.exe#md5=7152a76c28905ada5290c8e6c459d715 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .exe
Skipping link https://pypi.python.org/packages/2.4/p/pylzma/pylzma-0.4.1-py2.4-win32.egg#md5=c773b74772799b8cc021ea8e7249db46 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Skipping link https://pypi.python.org/packages/2.4/p/pylzma/pylzma-0.4.2-py2.4-win32.egg#md5=bf837af2374358f167008585c19d2f26 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Skipping link https://pypi.python.org/packages/2.4/p/pylzma/pylzma-0.4.4-py2.4-win32.egg#md5=9a657211e107da0261ed7a2f029566c4 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Skipping link https://pypi.python.org/packages/2.5/p/pylzma/pylzma-0.3.0-py2.5-win32.egg#md5=911d4e0b3cbf27c8e62abea1b6ded60e (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Skipping link https://pypi.python.org/packages/2.5/p/pylzma/pylzma-0.3.0.win32-py2.5.exe#md5=bc1c3d4a402984056acf85a251ba347c (from https://pypi.python.org/simple/pylzma/); unknown archive format: .exe
Skipping link https://pypi.python.org/packages/2.5/p/pylzma/pylzma-0.4.1-py2.5-win32.egg#md5=429f2087bf14390191faf6d85292186c (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Skipping link https://pypi.python.org/packages/2.5/p/pylzma/pylzma-0.4.2-py2.5-win32.egg#md5=bf8036d15fd61d6a47bb1caf0df45e69 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Skipping link https://pypi.python.org/packages/2.5/p/pylzma/pylzma-0.4.4-py2.5-win32.egg#md5=3c8f6361bee16292fdbfda70f1dc0006 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Skipping link https://pypi.python.org/packages/2.6/p/pylzma/pylzma-0.4.1-py2.6-win32.egg#md5=4248c0e618532f137860b021e6915b32 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Skipping link https://pypi.python.org/packages/2.6/p/pylzma/pylzma-0.4.2-py2.6-win32.egg#md5=2c5f136a75b3c114a042f5f61bdd5d8a (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Skipping link https://pypi.python.org/packages/2.6/p/pylzma/pylzma-0.4.4-py2.6-win32.egg#md5=8c7ae08bafbfcfd9ecbdffe9e4c9c6c5 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Skipping link https://pypi.python.org/packages/2.7/p/pylzma/pylzma-0.4.4-py2.7-win32.egg#md5=caee91027d5c005b012e2132e434f425 (from https://pypi.python.org/simple/pylzma/); unknown archive format: .egg
Found link https://pypi.python.org/packages/source/p/pylzma/pylzma-0.3.0.tar.gz#md5=7ab1a1706cf3e19f2d10579d795babf7 (from https://pypi.python.org/simple/pylzma/), version: 0.3.0
Found link https://pypi.python.org/packages/source/p/pylzma/pylzma-0.4.1.tar.gz#md5=b64557e8c4bcd0973f037bb4ddc413c6 (from https://pypi.python.org/simple/pylzma/), version: 0.4.1
Found link https://pypi.python.org/packages/source/p/pylzma/pylzma-0.4.2.tar.gz#md5=ab37d6ce2374f4308447bff963ae25ef (from https://pypi.python.org/simple/pylzma/), version: 0.4.2
Found link https://pypi.python.org/packages/source/p/pylzma/pylzma-0.4.3.tar.gz#md5=e53d40599ca2b039dedade6069724b7b (from https://pypi.python.org/simple/pylzma/), version: 0.4.3
Found link https://pypi.python.org/packages/source/p/pylzma/pylzma-0.4.4.tar.gz#md5=a2be89cb2288174ebb18bec68fa559fb (from https://pypi.python.org/simple/pylzma/), version: 0.4.4
Found link https://pypi.python.org/packages/source/p/pylzma/pylzma-0.4.5.tar.gz#md5=4fda4666c60faa9a092524fdda0e2f98 (from https://pypi.python.org/simple/pylzma/), version: 0.4.5
Found link https://pypi.python.org/packages/source/p/pylzma/pylzma-0.4.6.tar.gz#md5=140038c8c187770eecfe7041b34ec9b9 (from https://pypi.python.org/simple/pylzma/), version: 0.4.6
Using version 0.4.6 (newest of versions: 0.4.6, 0.4.5, 0.4.4, 0.4.3, 0.4.2, 0.4.1, 0.3.0)
Downloading from URL https://pypi.python.org/packages/source/p/pylzma/pylzma-0.4.6.tar.gz#md5=140038c8c187770eecfe7041b34ec9b9 (from https://pypi.python.org/simple/pylzma/)
Running setup.py (path:c:\users\username\appdata\local\temp\pip_build_username\pylzma\setup.py) egg_info for package pylzma
running egg_info
creating pip-egg-info\pylzma.egg-info
writing requirements to pip-egg-info\pylzma.egg-info\requires.txt
writing pip-egg-info\pylzma.egg-info\PKG-INFO
writing top-level names to pip-egg-info\pylzma.egg-info\top_level.txt
writing dependency_links to pip-egg-info\pylzma.egg-info\dependency_links.txt
writing manifest file 'pip-egg-info\pylzma.egg-info\SOURCES.txt'
warning: manifest_maker: standard file '-c' not found
reading manifest file 'pip-egg-info\pylzma.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.py' under directory 'test'
warning: no files found matching '*.7z' under directory 'test'
no previously-included directories found matching 'src\sdk.orig'
writing manifest file 'pip-egg-info\pylzma.egg-info\SOURCES.txt'
Source in c:\users\username\appdata\local\temp\pip_build_username\pylzma has version 0.4.6, which satisfies requirement pylzma
Installing collected packages: pylzma
Running setup.py install for pylzma
Running command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\username\\appdata\\local\\temp\\pip_build_username\\pylzma\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\username\appdata\local\temp\pip-4onyx_-record\install-record.txt --single-version-externally-managed --compile
running install
running build
running build_py
creating build
creating build\lib.win32-2.7
copying py7zlib.py -> build\lib.win32-2.7
running build_ext
adding support for multithreaded compression
building 'pylzma' extension
creating build\temp.win32-2.7
creating build\temp.win32-2.7\Release
creating build\temp.win32-2.7\Release\src
creating build\temp.win32-2.7\Release\src\pylzma
creating build\temp.win32-2.7\Release\src\sdk
creating build\temp.win32-2.7\Release\src\7zip
creating build\temp.win32-2.7\Release\src\7zip\C
creating build\temp.win32-2.7\Release\src\compat
C:\Users\username\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -DWITH_COMPAT=1 -DPYLZMA_VERSION="0.4.6" -DCOMPRESS_MF_MT=1 -Isrc/sdk -IC:\Python27\include -IC:\Python27\PC /Tcsrc/pylzma/pylzma.c /Fobuild\temp.win32-2.7\Release\src/pylzma/pylzma.obj /MT
cl : Command line warning D9025 : overriding '/MD' with '/MT'
pylzma.c
src/pylzma/pylzma.c(284) : error C2440: 'function' : cannot convert from 'double' to 'const char *'
src/pylzma/pylzma.c(284) : warning C4024: 'PyModule_AddStringConstant' : different types for formal and actual parameter 3
src/pylzma/pylzma.c(284) : error C2143: syntax error : missing ')' before 'constant'
src/pylzma/pylzma.c(284) : error C2059: syntax error : ')'
error: command 'C:\\Users\\username\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2
Complete output from command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\username\\appdata\\local\\temp\\pip_build_username\\pylzma\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\username\appdata\local\temp\pip-4onyx_-record\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win32-2.7
copying py7zlib.py -> build\lib.win32-2.7
running build_ext
adding support for multithreaded compression
building 'pylzma' extension
creating build\temp.win32-2.7
creating build\temp.win32-2.7\Release
creating build\temp.win32-2.7\Release\src
creating build\temp.win32-2.7\Release\src\pylzma
creating build\temp.win32-2.7\Release\src\sdk
creating build\temp.win32-2.7\Release\src\7zip
creating build\temp.win32-2.7\Release\src\7zip\C
creating build\temp.win32-2.7\Release\src\compat
C:\Users\username\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\Bin\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -DWITH_COMPAT=1 -DPYLZMA_VERSION="0.4.6" -DCOMPRESS_MF_MT=1 -Isrc/sdk -IC:\Python27\include -IC:\Python27\PC /Tcsrc/pylzma/pylzma.c /Fobuild\temp.win32-2.7\Release\src/pylzma/pylzma.obj /MT
cl : Command line warning D9025 : overriding '/MD' with '/MT'
pylzma.c
src/pylzma/pylzma.c(284) : error C2440: 'function' : cannot convert from 'double' to 'const char *'
src/pylzma/pylzma.c(284) : warning C4024: 'PyModule_AddStringConstant' : different types for formal and actual parameter 3
src/pylzma/pylzma.c(284) : error C2143: syntax error : missing ')' before 'constant'
src/pylzma/pylzma.c(284) : error C2059: syntax error : ')'
error: command 'C:\\Users\\username\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2
----------------------------------------
Cleaning up...
Removing temporary dir c:\users\username\appdata\local\temp\pip_build_username...
Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\username\\appdata\\local\\temp\\pip_build_username\\pylzma\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\username\appdata\local\temp\pip-4onyx_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\username\appdata\local\temp\pip_build_username\pylzma
Exception information:
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\pip\basecommand.py", line 122, in main
status = self.run(options, args)
File "C:\Python27\lib\site-packages\pip\commands\install.py", line 283, in run
requirement_set.install(install_options, global_options, root=options.root_path)
File "C:\Python27\lib\site-packages\pip\req.py", line 1435, in install
requirement.install(install_options, global_options, *args, **kwargs)
File "C:\Python27\lib\site-packages\pip\req.py", line 706, in install
cwd=self.source_dir, filter_stdout=self._filter_install, show_stdout=False)
File "C:\Python27\lib\site-packages\pip\util.py", line 697, in call_subprocess
% (command_desc, proc.returncode, cwd))
InstallationError: Command C:\Python27\python.exe -c "import setuptools, tokenize;__file__='c:\\users\\username\\appdata\\local\\temp\\pip_build_username\\pylzma\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\username\appdata\local\temp\pip-4onyx_-record\install-record.txt --single-version-externally-managed --compile failed with error code 1 in c:\users\username\appdata\local\temp\pip_build_username\pylzma
```
I've made sure I'm running the prompt as admin, I've rebooted and I've googled and I can't find anything. Any suggestions?
|
2015/05/22
|
[
"https://Stackoverflow.com/questions/30400777",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4929649/"
] |
The first step is to find where your Python is.
You can do it with the `which` or `where` command (`which` for Unix, `where` for Windows). Once you have this information you will know what is actually executed as the "python" command. Then you need to change it: on Windows (I believe) you need to change the PATH variable so that your Python 3.4 is found earlier than 2.6.
On Unix you need to either do the same or re-link it in your package manager.
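To see which interpreter is actually running and how PATH ordering affects the lookup, a minimal illustrative check from within Python:

```python
import os
import sys

# The interpreter currently executing this script
print(sys.executable)

# The directories searched, in order, when you type "python" in a shell;
# the first directory containing a "python" executable wins.
for d in os.environ.get("PATH", "").split(os.pathsep)[:3]:
    print(d)
```

Running this under both interpreters makes it obvious which binary each command name resolves to.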
|
You need to use `python3` to use python 3.4. For example, to know version of Python use:
```
python3 -V
```
This will use python 3.4 to interpret your program or you can use the [shebang](http://en.wikipedia.org/wiki/Shebang_%28Unix%29) to make it executable. The first line of your program should be:
```
#!/usr/bin/env python3
```
If you want python3 to be used when you type `python` on the terminal, you can use an alias. To add a new alias, open your `~/.bash_aliases` file using `gedit ~/.bash_aliases` and type the following:
```
alias python=python3
```
and then save and exit and type
```
source ~/.bash_aliases
```
and then you can type
```
python -V
```
to use python3 as your default python interpreter.
| 11,552
|
34,643,747
|
**Is there a way to use and plot with opencv2 with ipython notebook?**
I am fairly new to Python image analysis. I decided to go with the notebook workflow to keep a nice record as I process, and it has been working out quite well using matplotlib/pylab to plot things.
An initial hurdle I had was how to plot things within the notebook. Easy, just use magic:
```
%matplotlib inline
```
Later, I wanted to perform manipulations with interactive plots, but plotting in a dedicated window would always freeze. Fine, I learnt again that you need to use magic instead of just importing the modules:
```
%pylab
```
Now I have moved on to working with opencv. I am now back to the same problem, where I either want to plot inline or use dedicated, interactive windows depending on the task at hand. Is there similar magic to use? Is there another way to get things working? Or am I stuck and need to just go back to running a program from IDLE?
As a side note: I know that opencv has installed correctly. Firstly, because I got no errors either installing or importing the cv2 module. Secondly, because I can read in images with cv2 and then plot them with something else.
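One detail worth noting when plotting cv2 images "with something else", as described above (a sketch using plain NumPy): OpenCV stores pixel channels as BGR while matplotlib expects RGB, so the last axis must be reversed before plotting:

```python
import numpy as np

# Hypothetical 2x2 "image" that is pure blue in OpenCV's BGR layout
bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[..., 0] = 255        # channel 0 = blue in BGR order

# Reverse the channel axis: BGR -> RGB (what matplotlib's imshow expects)
rgb = bgr[..., ::-1]
print(rgb[0, 0])         # blue now sits in the last (blue) slot: [0 0 255]
```

Without this step, images read via `cv2.imread` appear with red and blue swapped in matplotlib.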
|
2016/01/06
|
[
"https://Stackoverflow.com/questions/34643747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5754595/"
] |
This is my empty template:
```
import cv2
import matplotlib.pyplot as plt
import numpy as np
import sys
%matplotlib inline
im = cv2.imread('IMG_FILENAME',0)
h,w = im.shape[:2]
print(im.shape)
plt.imshow(im,cmap='gray')
plt.show()
```
[See online sample](https://colab.research.google.com/drive/1WbOfcIwShtxaw7-Ig5YppNINWgLucQ4i#scrollTo=vy7Be3RMreWG)
|
There is also that little function that was used in the Google Deepdream notebook:
```python
import cv2
import numpy as np
from IPython.display import clear_output, Image, display
from io import BytesIO  # cStringIO.StringIO in the original Python 2 code
import PIL.Image

def showarray(a, fmt='jpeg'):
    # Clip to the valid pixel range, encode the array in memory,
    # and display it inline in the notebook cell
    a = np.uint8(np.clip(a, 0, 255))
    f = BytesIO()
    PIL.Image.fromarray(a).save(f, fmt)
    display(Image(data=f.getvalue()))
```
Then you can do :
```python
img = cv2.imread("an_image.jpg")
```
And simply :
```python
showarray(img)
```
Each time you need to render the image in a cell
| 11,553
|
68,003,878
|
I am new to Python and would like to ask for a solution related to dividing 2 rows in a data set that contains 25000 rows. It is easier to understand by looking at my screenshot.
Thanks for the help!
[](https://i.stack.imgur.com/upP3d.png)
|
2021/06/16
|
[
"https://Stackoverflow.com/questions/68003878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16243841/"
] |
Looks like your dataframe has a [MultiIndex](https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html). Let's take the first four rows as an example. It could be problematic to let one of the row index levels have the same name (`loan_default`) as the column, so I'd change the column name to `count`:
```py
import pandas as pd
df = pd.DataFrame({(1954, 0): [9],
(1954, 1): [1],
(1955, 0): [91],
(1955, 1): [15]}).T
df.columns = ['count']
print(df)
```
```
count
1954 0 9
1 1
1955 0 91
1 15
```
You can select all rows where a certain level of the MultiIndex has a certain value with [`df.xs()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.xs.html). This will give you two sub-series that you can divide by each other, which will be done element-wise:
```py
defaulted = df.xs(1, level=1)
not_defaulted = df.xs(0, level=1)
odds = defaulted / not_defaulted
odds.columns = ['defaulting_odds']
print(odds)
```
```
defaulting_odds
1954 0.111111
1955 0.164835
```
Note that this produces the odds of a loan defaulting for each year. If you would rather have the probabilities, you have to change the denominator. To get the percentage, just multiply by 100:
```py
prob = defaulted / (defaulted + not_defaulted)
prob.columns = ['defaulting_probability']
prob['defaulting_percent'] = prob.defaulting_probability * 100
print(prob)
```
```
defaulting_probability defaulting_percent
1954 0.100000 10.000000
1955 0.141509 14.150943
```
|
You can try dividing with `shift` and groups.
```
import pandas as pd
df = pd.DataFrame()
df['year_of_birth'] = ['1954','1954','1955','1955', '1956', '1956']
df['loan_default'] = ['9','1','91','15','194','32']
```
Calculate the ratio:
```
df['percentage'] = df['loan_default'].div(df.groupby('year_of_birth')['loan_default'].shift(1))
```
Drop the NaNs:
```
df = df.dropna(subset=['percentage'])
```
Convert ratio into a percentage:
```
df['percentage'] = df['percentage']*100
```
| 11,556
|
45,194,587
|
This is not a duplicate of [this](https://stackoverflow.com/questions/6681743/splitting-a-number-into-the-integer-and-decimal-parts-in-python), I'll explain here.
Consider `x = 1.2`. I'd like to separate it out into `1` and `0.2`. I've tried all these methods as outlined in the linked question:
```
In [370]: x = 1.2
In [371]: divmod(x, 1)
Out[371]: (1.0, 0.19999999999999996)
In [372]: math.modf(x)
Out[372]: (0.19999999999999996, 1.0)
In [373]: x - int(x)
Out[373]: 0.19999999999999996
In [374]: x - int(str(x).split('.')[0])
Out[374]: 0.19999999999999996
```
Nothing I try gives me exactly `1` and `0.2`.
Is there any way to *reliably* convert a floating number to its decimal and floating point equivalents that is not hindered by the limitation of floating point representation?
I understand this might be due to the limitation of how the number is itself stored, so I'm open to any suggestion (like a package or otherwise) that overcomes this.
Edit: Would prefer a way that didn't involve string manipulation, *if possible*.
|
2017/07/19
|
[
"https://Stackoverflow.com/questions/45194587",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4909087/"
] |
Solution
--------
It may seem like a hack, but you could separate the string form (actually repr) and convert it back to ints and floats:
```
In [1]: x = 1.2
In [2]: s = repr(x)
In [3]: p, q = s.split('.')
In [4]: int(p)
Out[4]: 1
In [5]: float('.' + q)
Out[5]: 0.2
```
How it works
------------
The reason for approaching it this way is that the [internal algorithm](https://bugs.python.org/issue1580) for displaying `1.2` is very sophisticated (a fast variant of [David Gay's algorithm](http://www.ampl.com/REFS/abstracts.html#rounding)). It works hard to show the shortest of the possible representations of numbers that cannot be represented exactly. By splitting the *repr* form, you're taking advantage of that algorithm.
Internally, the value entered as `1.2` is stored as the binary fraction, `5404319552844595 / 4503599627370496` which is actually equal to `1.1999999999999999555910790149937383830547332763671875`. The Gay algorithm is used to display this as the string `1.2`. The *split* then reliably extracts the integer portion.
```
In [6]: from decimal import Decimal
In [7]: Decimal(1.2)
Out[7]: Decimal('1.1999999999999999555910790149937383830547332763671875')
In [8]: (1.2).as_integer_ratio()
Out[8]: (5404319552844595, 4503599627370496)
```
Rationale and problem analysis
------------------------------
As stated, your problem roughly translates to "I want to split the integral and fractional parts of the number as it appears visually rather that according to how it is actually stored".
Framed that way, it is clear that the solution involves parsing how it is displayed visually. While it may feel like a hack, this is the most direct way to take advantage of the very sophisticated display algorithms and actually match what you see.
This may be the only *reliable* way to match what you see unless you manually reproduce the internal display algorithms.
Failure of alternatives
-----------------------
If you want to stay in realm of integers, you could try rounding and subtraction but that would give you an unexpected value for the floating point portion:
```
In [9]: round(x)
Out[9]: 1.0
In [10]: x - round(x)
Out[10]: 0.19999999999999996
```
|
You could try converting 1.2 to a string, splitting on the '.', and then converting the two strings ("1" and "2") back to the format you want.
Additionally, padding the second portion with a '0.' will give you a nice format.
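A minimal sketch of that approach (hypothetical helper name; it assumes a non-negative number whose repr contains a '.'):

```python
def split_float(x):
    # Split on the textual representation rather than the stored binary value,
    # so the result matches what the repr displays
    whole, frac = repr(x).split(".")
    return int(whole), float("0." + frac)

print(split_float(1.2))   # (1, 0.2)
```

Note that negative numbers and exponent notation (e.g. `1e-7`) would need extra handling in a real implementation.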
| 11,557
|
9,509,096
|
How do you load a Django fixture so that models referenced via natural keys don't conflict with pre-existing records?
I'm trying to load such a fixture, but I'm getting IntegrityErrors from my MySQL backend, complaining about Django trying to insert duplicate records, which doesn't make any sense.
As I understand Django's natural key feature, in order to fully support dumpdata and loaddata usage, you need to define a `natural_key` method in the model, and a `get_by_natural_key` method in the model's manager.
So, for example, I have two models:
```
class PersonManager(models.Manager):
def get_by_natural_key(self, name):
return self.get(name=name)
class Person(models.Model):
objects = PersonManager()
name = models.CharField(max_length=255, unique=True)
def natural_key(self):
return (self.name,)
class BookManager(models.Manager):
def get_by_natural_key(self, title, *person_key):
person = Person.objects.get_by_natural_key(*person_key)
return self.get(title=title, person=person)
class Book(models.Model):
objects = BookManager()
author = models.ForeignKey(Person)
title = models.CharField(max_length=255)
def natural_key(self):
return (self.title,) + self.author.natural_key()
natural_key.dependencies = ['myapp.Person']
```
My test database already contains a sample Person and Book record, which I used to generate the fixture:
```
[
{
"pk": null,
"model": "myapp.person",
"fields": {
"name": "bob"
}
},
{
"pk": null,
"model": "myapp.book",
"fields": {
"author": [
"bob"
],
"title": "bob's book",
}
}
]
```
I want to be able to take this fixture and load it into any instance of my database to recreate the records, regardless of whether or not they already exist in the database.
However, when I run `python manage.py loaddata myfixture.json` I get the error:
```
IntegrityError: (1062, "Duplicate entry '1-1' for key 'myapp_person_name_uniq'")
```
Why is Django attempting to re-create the Person record instead of reusing the one that's already there?
|
2012/03/01
|
[
"https://Stackoverflow.com/questions/9509096",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/247542/"
] |
Turns out the solution requires a very minor patch to Django's `loaddata` command. Since it's unlikely the Django devs would accept such a patch, I've [forked it](https://raw.githubusercontent.com/chrisspen/django-admin-steroids/master/admin_steroids/management/commands/loaddatanaturally.py) in my package of various Django admin related enhancements.
The key code change (lines 189-201 of `loaddatanaturally.py`) simply involves calling `get_natural_key()` to find any existing pk inside the loop that iterates over the deserialized objects.
|
Actually, loaddata is not supposed to work with existing data in the database; it is normally used for the initial load of models.
Look at this question for another way of doing it: [Import data into Django model with existing data?](https://stackoverflow.com/questions/5940294/import-data-into-django-model-with-existing-data)
| 11,562
|
2,217,258
|
I'm looking for an easy, cross platform way to join path, directory and file names into a complete path in C++. I know python has `os.path.join()` and matlab has `fullfile()`. Does Qt has something similar? `QFileInfo` doesn't seem to be able to do this.
|
2010/02/07
|
[
"https://Stackoverflow.com/questions/2217258",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9611/"
] |
[QDir](http://qt-project.org/doc/qt-5.0/qtcore/qdir.html) has `absoluteFilePath` and `relativeFilePath` to combine a path with a file name.
|
Offhand, I'm not sure about Qt, but Boost has a `filesystem` class that handles things like this. This has the advantage that it has been accepted as a proposal for TR2. That means it has a pretty good chance of becoming part of the C++ standard library (though probably with some minor modifications here or there).
| 11,563
|
3,108,951
|
I need to write a python script that launches a shell script and imports the environment variables AFTER the script is completed.
Imagine you have a shell script "a.sh":
```
export MYVAR="test"
```
In python I would like to do something like:
```
import os
env={}
os.spawnlpe(os.P_WAIT,'sh', 'sh', 'a.sh',env)
print env
```
and get:
```
{'MYVAR'="test"}
```
Is that possible?
|
2010/06/24
|
[
"https://Stackoverflow.com/questions/3108951",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/375112/"
] |
Nope, any changes made to environment variables in a subprocess stay in that subprocess (as far as I know). When the subprocess terminates, its environment is lost.
I'd suggest getting the shell script to print its environment, or at least the variables you care about, to its standard output (or standard error, or it could write them to a file), and you can read that output from Python.
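A minimal sketch of that idea. For illustration, the script is created on the fly here; in practice you would point the command at your real `a.sh`. This assumes a POSIX shell and that no variable values contain newlines:

```python
import os
import subprocess
import tempfile

# Write a throwaway stand-in for a.sh (in practice, use your real script).
script = tempfile.NamedTemporaryFile('w', suffix='.sh', delete=False)
script.write('export MYVAR="test"\n')
script.close()

# The child shell sources the script, then prints its environment with `env`;
# the parent Python process captures and parses that output.
output = subprocess.check_output(
    ['sh', '-c', '. "$1"; env', 'sh', script.name],
    text=True,
)
env = dict(line.split('=', 1) for line in output.splitlines() if '=' in line)
os.unlink(script.name)

print(env['MYVAR'])  # test
```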
|
I agree with David's post.
Perl has a [Shell::Source](http://search.cpan.org/~pjcj/Shell-Source-0.01/Source.pm) module which does this. It works by running the script you want in a subprocess with `env` appended, which produces a list of variable/value pairs separated by an `=` symbol. You can parse this output and "import" the environment into your process. The module is worth looking at if you need this kind of behaviour.
| 11,564
|
29,548,982
|
I tried: `c:/python34/scripts/pip install http://bitbucket.org/pygame/pygame`
and got this error:
```
Cannot unpack file C:\Users\Marius\AppData\Local\Temp\pip-b60d5tho-unpack\pygame
(downloaded from C:\Users\Marius\AppData\Local\Temp\pip-rqmpq4tz-build, conte
nt-type: text/html; charset=utf-8); cannot detect archive format
Cannot determine archive format of C:\Users\Marius\AppData\Local\Temp\pip-rqmp
q4tz-build
```
Please, if anyone has any solutions, feel free to share them!
I also tried
`pip install --allow-unverified`, but that gave me an error as well.
|
2015/04/09
|
[
"https://Stackoverflow.com/questions/29548982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4529330/"
] |
This is the only method that works for me.
```
pip install pygame==1.9.1release --allow-external pygame --allow-unverified pygame
```
--
These are the steps that led me to this command (I put them here so people can find it easily):
```
$ pip install pygame
Collecting pygame
Could not find any downloads that satisfy the requirement pygame
Some externally hosted files were ignored as access to them may be unreliable (use --allow-external pygame to allow).
No distributions at all found for pygame
```
Then, as suggested, I allow external:
```
$ pip install pygame --allow-external pygame
Collecting pygame
Could not find any downloads that satisfy the requirement pygame
Some insecure and unverifiable files were ignored (use --allow-unverified pygame to allow).
No distributions at all found for pygame
```
So I also allow unverifiable:
```
$ pip install pygame --allow-external pygame --allow-unverified pygame
Collecting pygame
pygame is potentially insecure and unverifiable.
HTTP error 400 while getting http://www.pygame.org/../../ftp/pygame-1.6.2.tar.bz2 (from http://www.pygame.org/download.shtml)
Could not install requirement pygame because of error 400 Client Error: Bad Request
Could not install requirement pygame because of HTTP error 400 Client Error: Bad Request for URL http://www.pygame.org/../../ftp/pygame-1.6.2.tar.bz2 (from http://www.pygame.org/download.shtml)
```
So, after a visit to <http://www.pygame.org/download.shtml>, I thought about adding the version number (1.9.1release is the currently stable one).
--
Hope it helps.
|
I realized that the compatible Pygame version was simply corrupted or broken. Therefore I had to install a previous version of Python to run Pygame, which is actually fine, as most modules aren't updated to be compatible with Python 3.4 yet, so it only gives me more options.
| 11,565
|
29,205,752
|
I'm trying to produce a simple fibonacci algorithm with Cython.
I have fib.pyx:
```
def fib(int n):
cdef int i
cdef double a=0.0, b=1.0
for i in range(n):
a, b = a + b, a
return a
```
and setup.py in the same folder:
```
from distutils.core import setup
from Cython.Build import cythonize
setup(ext_modules=cythonize('fib.pyx'))
```
Then I open cmd and cd my way to this folder and build the code with (I have [this compiler](http://www.microsoft.com/en-us/download/details.aspx?id=44266)):
```
python setup.py build
```
Which produces this result:
```
C:\Users\MyUserName\Documents\Python Scripts\Cython>python setup.py build
Compiling fib.pyx because it changed.
Cythonizing fib.pyx
running build
running build_ext
building 'fib' extension
creating build
creating build\temp.win-amd64-2.7
creating build\temp.win-amd64-2.7\Release
C:\Anaconda\Scripts\gcc.bat -DMS_WIN64 -mdll -O -Wall -IC:\Anaconda\include -IC:
\Anaconda\PC -c fib.c -o build\temp.win-amd64-2.7\Release\fib.o
writing build\temp.win-amd64-2.7\Release\fib.def
creating build\lib.win-amd64-2.7
C:\Anaconda\Scripts\gcc.bat -DMS_WIN64 -shared -s build\temp.win-amd64-2.7\Relea
se\fib.o build\temp.win-amd64-2.7\Release\fib.def -LC:\Anaconda\libs -LC:\Anacon
da\PCbuild\amd64 -lpython27 -lmsvcr90 -o build\lib.win-amd64-2.7\fib.pyd
```
So it seems the compiling worked and I should be able to import this module with
```
import fib
ImportError: No module named fib
```
Where did I go wrong?
Edit:
```
os.getcwd()
Out[6]: 'C:\\Users\\MyUserName\\Documents\\Python Scripts\\Cython\\build\\temp.win-amd64-2.7\\Release'
In [7]: import fib
Traceback (most recent call last):
File "<ipython-input-7-6c0ab2f0a4e0>", line 1, in <module>
import fib
ImportError: No module named fib
```
|
2015/03/23
|
[
"https://Stackoverflow.com/questions/29205752",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4657326/"
] |
Java implementation to remove all change notification registrations from the database
```
Statement stmt= conn.createStatement();
ResultSet rs = stmt.executeQuery("select regid,callback from USER_CHANGE_NOTIFICATION_REGS");
while(rs.next())
{
long regid = rs.getLong(1);
String callback = rs.getString(2);
((OracleConnection)conn).unregisterDatabaseChangeNotification(regid,callback);
}
rs.close();
stmt.close();
```
You need to have ojdbc6/7.jar in the classpath to execute this code.
Original post:<https://community.oracle.com/message/9315024#9315024>
|
You can just revoke change notification from the current user and grant it again. I know this isn't the best solution, but it works.
| 11,570
|
70,161,899
|
I'm trying to build drake from source on Ubuntu 20.04 by following instructions from [here](https://drake.mit.edu/from_source.html). I already checked that my system meets all the requirements, and was able to successfully run the mandatory platform-specific setup script (and it completed saying: 'install\_prereqs: success'). However, when I try to run cmake to build the python bindings, I'm confronted with the following error:
```
CMake Error at /usr/share/cmake-3.16/Modules/FindPackageHandleStandardArgs.cmake:146 (message):
Could NOT find Python (missing: Python_NumPy_INCLUDE_DIRS NumPy) (found
suitable exact version "3.8.10")
Call Stack (most recent call first):
/usr/share/cmake-3.16/Modules/FindPackageHandleStandardArgs.cmake:393 (_FPHSA_FAILURE_MESSAGE)
/usr/share/cmake-3.16/Modules/FindPython/Support.cmake:2214 (find_package_handle_standard_args)
/usr/share/cmake-3.16/Modules/FindPython.cmake:304 (include)
CMakeLists.txt:240 (find_package)
-- Configuring incomplete, errors occurred!
```
I can't seem to think of any reason why this is happening (I made sure to remove conda from my PATH variable following the note [here](https://drake.mit.edu/python_bindings.html#installation)). Any help around this issue is much appreciated!
EDIT: Want to mention that I'm trying to install Drake from [this PR](https://github.com/RobotLocomotion/drake/pull/16147) that includes a feature I need access to.
|
2021/11/29
|
[
"https://Stackoverflow.com/questions/70161899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3598807/"
] |
On another tack, you could try to temporarily work around the problem by doing (in Drake) a `bazel run //:install -- /path/to/somewhere` to install Drake, and thus skipping the CMake stuff that seems to be the problem here.
|
Here is some diagnostic output from my Ubuntu 20.04 system. Can you run the same, and check to see if anything looks different?
```
jwnimmer@call-cps:~$ which python
/usr/bin/python
jwnimmer@call-cps:~$ which python3
/usr/bin/python3
jwnimmer@call-cps:~$ file /usr/bin/python3
/usr/bin/python3: symbolic link to python3.8
jwnimmer@call-cps:~$ dpkg -l python3-numpy
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-==============-=================-============-============================================
ii python3-numpy 1:1.17.4-5ubuntu3 amd64 Fast array facility to the Python 3 language
jwnimmer@call-cps:~$ ls -l /usr/include/python3.8/numpy
lrwxrwxrwx 1 root root 56 Feb 18 2020 /usr/include/python3.8/numpy -> ../../lib/python3/dist-packages/numpy/core/include/numpy
jwnimmer@call-cps:~$ ls -l /usr/lib/python3/dist-packages/numpy/core/include/numpy | head
total 388
-rw-r--r-- 1 root root 164 Jun 15 2019 arrayobject.h
-rw-r--r-- 1 root root 3509 Jun 15 2019 arrayscalars.h
-rw-r--r-- 1 root root 1878 Aug 30 2019 halffloat.h
-rw-r--r-- 1 root root 61098 Feb 18 2020 __multiarray_api.h
-rw-r--r-- 1 root root 56456 Feb 18 2020 multiarray_api.txt
-rw-r--r-- 1 root root 11496 Oct 15 2019 ndarrayobject.h
-rw-r--r-- 1 root root 65018 Nov 8 2019 ndarraytypes.h
-rw-r--r-- 1 root root 1861 Jun 15 2019 _neighborhood_iterator_imp.h
-rw-r--r-- 1 root root 6786 Aug 30 2019 noprefix.h
```
| 11,572
|
70,332,071
|
I'm trying to get a preprocessing function to work with the Dataset map, but I get the following error (full stack trace at the bottom):
```
ValueError: Tensor-typed variable initializers must either be wrapped in an init_scope or callable (e.g., `tf.Variable(lambda : tf.truncated_normal([10, 40]))`) when building functions. Please file a feature request if this restriction inconveniences you.
```
Below is a full snippet that reproduces the issue. My question is: why does it work in one use case (crop only) but fail when RandomFlip is used? How can this be fixed?
```py
import functools
import numpy as np
import tensorflow as tf
def data_gen():
for i in range(10):
x = np.random.random(size=(80, 80, 3)) * 255 # rgb image
x = x.astype('uint8')
y = np.random.random(size=(40, 40, 1)) * 255 # downsized mono image
y = y.astype('uint8')
yield x, y
def preprocess(image, label, cropped_image_size, cropped_label_size, skip_augmentations=False):
x = image
y = label
x_size = cropped_image_size
y_size = cropped_label_size
if not skip_augmentations:
x = tf.keras.layers.RandomFlip(mode="horizontal")(x)
y = tf.keras.layers.RandomFlip(mode="horizontal")(y)
x = tf.keras.layers.RandomRotation(factor=1.0, fill_mode='constant')(x)
y = tf.keras.layers.RandomRotation(factor=1.0, fill_mode='constant')(y)
x = tf.keras.layers.CenterCrop(x_size, x_size)(x)
y = tf.keras.layers.CenterCrop(y_size, y_size)(y)
return x, y
print(tf.__version__) # 2.6.0
dataset = tf.data.Dataset.from_generator(data_gen, output_signature=(
tf.TensorSpec(shape=(80, 80, 3), dtype='uint8'),
tf.TensorSpec(shape=(40, 40, 1), dtype='uint8')
))
crop_only_fn = functools.partial(preprocess, cropped_image_size=50, cropped_label_size=25, skip_augmentations=True)
train_preprocess_fn = functools.partial(preprocess, cropped_image_size=50, cropped_label_size=25, skip_augmentations=False)
# This works
crop_dataset = dataset.map(crop_only_fn)
# This fails: ValueError: Tensor-typed variable initializers must either be wrapped in an init_scope or callable
train_dataset = dataset.map(train_preprocess_fn)
```
Full stack trace:
```
Traceback (most recent call last):
File "./issue_dataaug.py", line 50, in <module>
train_dataset = dataset.map(train_preprocess_fn)
File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 1861, in map
return MapDataset(self, map_func, preserve_cardinality=True)
File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4985, in __init__
use_legacy_function=use_legacy_function)
File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4218, in __init__
self._function = fn_factory()
File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3151, in get_concrete_function
*args, **kwargs)
File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3116, in _get_concrete_function_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3463, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3308, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 1007, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4195, in wrapped_fn
ret = wrapper_helper(*args)
File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4125, in wrapper_helper
ret = autograph.tf_convert(self._func, ag_ctx)(*nested_args)
File "/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 695, in wrapper
raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:
./issue_dataaug.py:25 preprocess *
x = tf.keras.layers.RandomFlip(mode="horizontal")(x)
/...//virtualenvs/cvi36/lib/python3.6/site-packages/keras/layers/preprocessing/image_preprocessing.py:414 __init__ **
self._rng = make_generator(self.seed)
/...//virtualenvs/cvi36/lib/python3.6/site-packages/keras/layers/preprocessing/image_preprocessing.py:1375 make_generator
return tf.random.Generator.from_non_deterministic_state()
/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/stateful_random_ops.py:396 from_non_deterministic_state
return cls(state=state, alg=alg)
/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/stateful_random_ops.py:476 __init__
trainable=False)
/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/stateful_random_ops.py:489 _create_variable
return variables.Variable(*args, **kwargs)
/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/variables.py:268 __call__
return cls._variable_v2_call(*args, **kwargs)
/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/variables.py:262 _variable_v2_call
shape=shape)
/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/variables.py:243 <lambda>
previous_getter = lambda **kws: default_variable_creator_v2(None, **kws)
/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py:2675 default_variable_creator_v2
shape=shape)
/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/variables.py:270 __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:1613 __init__
distribute_strategy=distribute_strategy)
/...//virtualenvs/cvi36/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:1695 _init_from_args
raise ValueError("Tensor-typed variable initializers must either be "
ValueError: Tensor-typed variable initializers must either be wrapped in an init_scope or callable (e.g., `tf.Variable(lambda : tf.truncated_normal([10, 40]))`) when building functions. Please file a feature request if this restriction inconveniences you.
```
|
2021/12/13
|
[
"https://Stackoverflow.com/questions/70332071",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1213694/"
] |
found the solution.
```
private var isLoading = true
override fun onCreate(savedInstanceState: Bundle?) {
val splashScreen = installSplashScreen()
splashScreen.setKeepVisibleCondition { isLoading }
}
private fun doApiCalls(){
...
isLoading = false
}
```
|
@sujith's answer did not work for me for some reason.
I added a method in my viewModel like this:
```
fun isDataReady(): Boolean {
return isDataReady.value?:false
}
```
and used
```
splashScreen.setKeepVisibleCondition {
!viewModel.isDataReady()
}
```
This worked for me.
Maybe someone can explain to me why sujith's answer was not working for me (it was hiding the splash screen after some time), because I know both of us are essentially doing the same thing.
| 11,573
|
73,879,190
|
A is an m*n matrix
B is an n*n matrix
I want to return matrix C of size m\*n such that:
[](https://i.stack.imgur.com/Uq3SK.png)
In python it could be like below
```
for i in range(m):
for j in range(n):
C[i][j] = 0
for k in range(n):
C[i][j] += max(0, A[i][j] - B[j][k])
```
this runs in O(m\*n^2)
if `A[i][j] - B[j][k]` is always > 0 it could easily be improved as
```
C[i][j] = n*A[i][j] - sum(B[j])
```
but is it possible to improve it as well when there are cases of `A[i][j] - B[j][k] < 0`? I think some divide-and-conquer algorithms might help here but I am not familiar with them.
|
2022/09/28
|
[
"https://Stackoverflow.com/questions/73879190",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12968928/"
] |
I would look at a much simpler construct and go from there.
Let's say the max between 0 and the difference wasn't there.
Then the answer would be: a(i,j)*n - sum(b(j,:))
For this you could just work linearly: sum each vector once and subtract it from a(i,j)*n,
and because you need to sum each vector in b only once per j, it can be done in max(m*n, n*n).
Now think about a simple solution for the max problem:
if you could find which elements in b(j,:) are bigger than a(i,j), you could just ignore their sum and subtract their count from the multiplier of a(i,j).
All of that can be done by sorting each vector b(j,:) by size and building a prefix-sum array for each sorted vector (this takes n*n*log(n) because you sort each b(j,:) vector once).
Then you only need to binary search for where a(i,j) falls in the sorted vector, take the prefix sum you already computed, and subtract it from a(i,j) \* the position found by the binary search.
Eventually you'll get O( max( m*n*log(n), n*n*log(n) ) ).
Here is an implementation as well:
```py
import numpy as np
M = 4
N = 7
array = np.random.randint(100, size=(M,N))
array2 = np.random.randint(100, size=(N,N))
def matrixMacossoOperation(a,b, N, M):
cSlow = np.empty((M,N))
for i in range(M):
for j in range(N):
cSlow[i][j] = 0
for k in range(N):
cSlow[i][j] += max(0, a[i][j] - b[j][k])
for i in range(N):
b[i].sort()
sumArr = np.copy(b)
for j in range(N):
for i in range(N - 1):
sumArr[j][i + 1] += sumArr[j][i]
c = np.empty((M,N))
for i in range(M):
for j in range(N):
sumIndex = np.searchsorted(b[j],a[i][j])
if sumIndex == 0:
c[i][j] = 0;
else:
c[i][j] = ((sumIndex) * a[i][j]) - sumArr[j][sumIndex - 1]
print(c)
assert(np.array_equal(cSlow,c))
matrixMacossoOperation(array,array2,N,M)
```
|
For each `j`, you can sort the row `B[j][:]` and compute cumulative sums.
Then for a given `A[i][j]` you can find the sum of the `B[j][k]` that are smaller than `A[i][j]` in O(log n) time using binary search. If there are `x` elements of `B[j][:]` that are less than `A[i][j]` and their sum is S, then `C[i][j] = A[i][j] * x - S`.
This gives you an overall O((m+n)n log n) time algorithm.
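A pure-Python sketch of this idea (using stdlib `bisect` for the binary search, with random data just to check it against the O(m*n^2) loop from the question; here each `B[j]` is treated as a row, as in the question's loop):

```python
import bisect
import random

random.seed(0)
m, n = 4, 7
A = [[random.randrange(100) for _ in range(n)] for _ in range(m)]
B = [[random.randrange(100) for _ in range(n)] for _ in range(n)]

# Sort each row of B and build its prefix sums: O(n^2 log n) total.
sorted_B = [sorted(row) for row in B]
prefix = []
for row in sorted_B:
    acc, sums = 0, []
    for v in row:
        acc += v
        sums.append(acc)
    prefix.append(sums)

C = [[0] * n for _ in range(m)]
for i in range(m):
    for j in range(n):
        x = bisect.bisect_left(sorted_B[j], A[i][j])  # count of B[j][k] < A[i][j]
        S = prefix[j][x - 1] if x else 0              # their sum
        C[i][j] = A[i][j] * x - S                     # O(log n) per entry

# Check against the O(m*n^2) reference from the question.
ref = [[sum(max(0, A[i][j] - B[j][k]) for k in range(n)) for j in range(n)]
       for i in range(m)]
assert C == ref
```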
| 11,575
|
62,667,225
|
For the last 3 days, I have been trying to set up virtual Env on Vs Code for python with some luck but I have a few questions that I cant seem to find the answer to.
1. Does Vs Code have to run in WSL for me to use venv?
2. When I install venv on my device it doesn't seem to install a Scripts folder inside the venv folder. Is this outdated information or am I installing it incorrectly? I am installing into a Documents folder inside my D: drive using `python3 -m venv venv`. The folder does install and does run in WSL mode but I am trying to run it in plain VS Code so I can use other add-ons such as AREPL that don't seem to like being run in WSL.
For extra context I have oh-my-ZSH set up and using the ubuntu command line on my windows device. Any information will be helpful at this point because I am losing my mind.
[venv folder in side D: drive](https://i.stack.imgur.com/0FHoW.png)
[result](https://i.stack.imgur.com/ecbDq.png)
|
2020/06/30
|
[
"https://Stackoverflow.com/questions/62667225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13843817/"
] |
If you have the python extension installed you should be able to select your python interpreter at the bottom.
[](https://i.stack.imgur.com/dsdfm.png)
You should then be able to select the appropriate path
[](https://i.stack.imgur.com/fJW40.png)
|
You don't have to create a virtual environment under WSL, it will work anywhere. But the reason you don't have a `Scripts/` directory is because (I bet) you're running VS Code with git bash and that makes Python think you're running under Unix. In that case it creates a `bin/` directory. That will also confuse VS Code because the extension thinks you're running under Windows.
I would either create a virtual environment using a Windows terminal like PowerShell or Command Prompt or use WSL2.
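The platform-dependent layout can be seen from Python itself with the stdlib `venv` module (a throwaway environment created in a temp directory; `Scripts/` is only expected when Python thinks it is running under native Windows, otherwise you get `bin/`):

```python
import os
import sys
import tempfile
import venv

# Create a minimal environment and check which scripts directory it got.
target = tempfile.mkdtemp()
venv.create(target, with_pip=False)

expected = 'Scripts' if os.name == 'nt' else 'bin'
print(os.path.isdir(os.path.join(target, expected)))  # True
```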
| 11,576
|
10,283,067
|
I wanted to create a simple gui with a play and stop button to play an mp3 file in python. I created a very simple gui using Tkinter that consists of 2 buttons (stop and play).
I created a function that does the following:
```
def playsound () :
sound = pyglet.media.load('music.mp3')
sound.play()
pyglet.app.run()
```
I added that function as a command to the button play. I also made a different function to stop music:
```
def stopsound ():
pyglet.app.exit
```
I added this function as a command to the second button. But the problem is that when I hit play, python and the gui freeze. I can try to close the window but it does not close, and the stop button is not responsive. I understand that this is because the pyglet.app.run() is executing till the song is over but how exactly do I prevent this? I want the gui to stop the music when I click on the button. Any ideas on where I can find a solution to this?
|
2012/04/23
|
[
"https://Stackoverflow.com/questions/10283067",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/947933/"
] |
You are mixing two UI libraries together - that is not intrinsically bad, but there are some problems. Notably, both of them need a main loop of their own to process their events. TKinter uses it to communicate with the desktop and user-generated events, and in this case, pyglet uses it to play your music.
Each of these loops prevents a normal "top down" program flow, as we are used to when we learn non-GUI programming, and the program should proceed basically with callbacks from the main loops. In this case, in the middle of a Tkinter callback, you put the pyglet mainloop (calling `pyglet.app.run`) in motion, and the control never returns to the Tkinter library.
Sometimes loops of different libraries can coexist on the same process, with no conflicts -- but of course you will be either running one of them or the other. If so, it may be possible to run each library's mainloop in a different Python thread.
If they can not exist together, you will have to deal with each library in a different process.
So, one way to make the music player start in another thread could be:
```
from threading import Thread
def real_playsound () :
sound = pyglet.media.load('music.mp3')
sound.play()
pyglet.app.run()
def playsound():
global player_thread
player_thread = Thread(target=real_playsound)
player_thread.start()
```
If Tkinter and pyglet can coexist, that should be enough to get your music to start.
To be able to control it, however, you will need to implement a couple more things. My suggestion is to have a callback on the pyglet thread that is called by pyglet every second or so -- this callback checks the state of some global variables, and based on them chooses to stop the music, change the file being played, and so on.
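The checking-a-flag idea can be sketched with just the standard library (no pyglet here; in the real player the loop body would be a periodically scheduled pyglet callback that stops playback when the flag is set):

```python
import threading
import time

# Shared state that the GUI thread flips from its stop-button callback.
stop_requested = threading.Event()

def player_loop():
    # Stand-in for the player thread: poll the flag periodically and
    # keep "playing" until the GUI asks it to stop.
    while not stop_requested.is_set():
        time.sleep(0.05)  # poll interval, like a scheduled callback

player_thread = threading.Thread(target=player_loop)
player_thread.start()

# ... later, this is what the Tkinter stop-button callback would do:
time.sleep(0.2)
stop_requested.set()
player_thread.join()
print(player_thread.is_alive())  # False
```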
|
There is a media player implementation in the pyglet documentation:
<http://www.pyglet.org/doc/programming_guide/playing_sounds_and_music.html>
The script you should look at is [media\_player.py](http://www.pyglet.org/doc/programming_guide/media_player.py)
Hopefully this will get you started
| 11,578
|
26,943,578
|
I'm making a simple guessing game using tkinter for my python class and was wondering if there was a way to loop it so that the player would have a maximum number of guesses before the program tells the player what the number was and changes the number, or kills the program after it tells them the answer. Here's my code so far:
```
# This program is a number guessing game using tkinter gui.
# Import all the necessary libraries.
import tkinter
import tkinter.messagebox
import random
# Set the variables.
number = random.randint(1,80)
attempts = 0
# Start coding the GUI
class numbergameGUI:
def __init__(self):
# Create the main window.
self.main_window = tkinter.Tk()
# Create four frames to group widgets.
self.top_frame = tkinter.Frame()
self.mid_frame1 = tkinter.Frame()
self.mid_frame2 = tkinter.Frame()
self.bottom_frame = tkinter.Frame()
# Create the widget for the top frame.
self.top_label = tkinter.Label(self.top_frame, \
text='The number guessing game!')
# Pack the widget for the top frame.
self.top_label.pack(side='left')
# Create the widgets for the upper middle frame
self.prompt_label = tkinter.Label(self.mid_frame1, \
text='Guess the number I\'m thinking of:')
self.guess_entry = tkinter.Entry(self.mid_frame1, \
width=10)
# Pack the widgets for the upper middle frame.
self.prompt_label.pack(side='left')
self.guess_entry.pack(side='left')
# Create the widget for the bottom middle frame.
self.descr_label = tkinter.Label(self.mid_frame2, \
text='Your Guess is:')
self.value = tkinter.StringVar()
# This tells user if guess was too high or low.
self.guess_label = tkinter.Label(self.mid_frame2, \
textvariable=self.value)
# Pack the middle frame's widgets.
self.descr_label.pack(side='left')
self.guess_label.pack(side='left')
# Create the button widgets for the bottom frame.
self.guess_button = tkinter.Button(self.bottom_frame, \
text='Guess', \
command=self.guess,)
self.quit_button = tkinter.Button(self.bottom_frame, \
text='Quit', \
command=self.main_window.destroy)
# Pack the buttons.
self.guess_button.pack(side='left')
self.quit_button.pack(side='left')
# Pack the frames
self.top_frame.pack()
self.mid_frame1.pack()
self.mid_frame2.pack()
self.bottom_frame.pack()
# Enter the tkinter main loop.
tkinter.mainloop()
# Define guess
def guess(self):
# Get the number they guessed.
guess1 = int(self.guess_entry.get())
# sattempts +=1
# Tell player too low if their guess was too low.
if guess1 < number:
self.value.set('too low')
# Tell player too high if their guess was too high.
elif guess1 > number:
self.value.set('too high')
# End the loop if the player attempts the correct number.
if guess1 == number:
tkinter.messagebox.showinfo('Result', 'Congratulations! You guessed right!')
start = numbergameGUI()
```
I tried to put a while loop inside of the guess function, because that worked before the program used tkinter, but I haven't been able to get it to work yet.
|
2014/11/15
|
[
"https://Stackoverflow.com/questions/26943578",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4255017/"
] |
You need to set the visibility of `$secret`
```
private $secret = "";
```
Then just remove that casting on the base64 and use `$this->secret` to access the property:
```
return base64_encode($this->secret);
```
So finally:
```
class mySimpleClass
{
// public $secret = "";
private $secret = '';
public function __construct($s)
{
$this->secret = $s;
}
public function getSecret()
{
return base64_encode($this->secret);
}
}
```
|
I suggest you declare `$secret` as `public` or `private` and access it using `$this->`. Example:
```
class mySimpleClass {
public $secret = "";
public function __construct($s) {
$this -> secret = $s;
}
public function getSecret() {
return base64_encode($this->secret);
}
}
```
| 11,583
|
51,309,341
|
So I'm trying to send a saved wave file from a client to a server with socket; however, every attempt at doing it fails. The closest I've got is this:
```
#Server.py
requests = 0
while True:
wavfile = open(str(requests)+str(addr)+".wav", "wb")
while True:
data = clientsocket.recv(1024)
if not data:
break
requests = requests+1
wavfile.write(data)
#Client.py
bytes = open("senddata", "rb")
networkmanager.send(bytes.encode())
```
The error with this code is "AttributeError: '\_io.BufferedReader' object has no attribute 'encode'", so is there any way to fix this? I'm using Python.
|
2018/07/12
|
[
"https://Stackoverflow.com/questions/51309341",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9995176/"
] |
You could create e.g. a `utils` file, export your helpers from there, and import them when needed:
```
// utils.js
export function romanize(str) {
// ...
}
export function getDocumentType(doc) {
// ...
}
// App.js
import { romanize } from './utils';
```
|
The "react way" is to structure these files in the way that makes most sense for your application. Let me give you some examples of what react applications tend to look like to help you out.
React has a declarative tree structure for the view, and other related concepts have a tendency to fall into this declarative tree structure form as-well.
Let's look at two examples, one where the paradigm relates to the view hierarchy and one where it does not.
For one where it does not, we can think about your domain model. You may need to structure local state in stores that resemble your business model. Your business model will usually look different from your view hierarchy, so we would have a separate hierarchy for this.
But what about the places where the business model needs to connect to the view layer? Since we specify data on a per-component basis, that data is still colocated in the same folder hierarchy as the React component, even though it isn't the view, the styles, or the component's behavior, because it fits into the same conceptual structure.
Now, there is your question of utilities. There are many approaches to this.
1. If they are all small and specific to your application but not any part, you can put them in the root under utils.
2. If there are a lot of utils and they fit into a structure separate from any of your existing hierarchies, make a new hierarchy.
3. If they are independent from your application, either of the above approaches could become an npm package.
4. If they relate to certain parts of your app, you can put them at the highest point in the hierarchy such that everything that uses the utility is beneath the directory where the utility lives.
| 11,584
|
64,708,781
|
I am trying to use multiprocessing in order to run a CPU-intensive job in the background. I'd like this process to be able to use peewee ORM to write its results to the SQLite database.
In order to do so, I am trying to override the Meta.database of my model class after thread creation so that I can have a separate db connection for my new process.
```
def get_db():
db = SqliteExtDatabase(path)
return db
class BaseModel(Model):
class Meta:
database = get_db()
# Many other models
class Batch(BaseModel):
def multi():
def background_proc():
# trying to override Meta's db connection.
BaseModel._meta.database = get_db()
job = Job.get_by_id(1)
print("working in the background")
process = multiprocessing.Process(target=background_proc)
process.start()
```
Error when executing `my_batch.multi()`
```
Process Process-1:
Traceback (most recent call last):
File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 3099, in execute_sql
cursor.execute(sql, params or ())
sqlite3.OperationalError: disk I/O error
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/layne/.pyenv/versions/3.7.6/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/Users/layne/.pyenv/versions/3.7.6/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/Users/layne/Desktop/pydatasci/pydatasci/aidb/__init__.py", line 1249, in background_proc
job = Job.get_by_id(1)
File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 6395, in get_by_id
return cls.get(cls._meta.primary_key == pk)
File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 6384, in get
return sq.get()
File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 6807, in get
return clone.execute(database)[0]
File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 1886, in inner
return method(self, database, *args, **kwargs)
File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 1957, in execute
return self._execute(database)
File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 2129, in _execute
cursor = database.execute(self)
File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 3112, in execute
return self.execute_sql(sql, params, commit=commit)
File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 3106, in execute_sql
self.commit()
File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 2873, in __exit__
reraise(new_type, new_type(exc_value, *exc_args), traceback)
File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 183, in reraise
raise value.with_traceback(tb)
File "/Users/layne/.pyenv/versions/3.7.6/envs/jupyterlab/lib/python3.7/site-packages/peewee.py", line 3099, in execute_sql
cursor.execute(sql, params or ())
peewee.OperationalError: disk I/O error
```
I got this working using threads instead, but it's hard to actually terminate a thread (not just break from a loop) and CPU-intensive (not io delayed) jobs should be multiprocessed.
UPDATE: looking into peewee proxy <http://docs.peewee-orm.com/en/latest/peewee/database.html#dynamically-defining-a-database>
|
2020/11/06
|
[
"https://Stackoverflow.com/questions/64708781",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5739514/"
] |
Haskell doesn't allow this because it would be ambiguous. The value constructor `Const` is effectively a function, which may be clearer if you ask GHCi about its type:
```
> :t Const
Const :: Bool -> Prop
```
If you attempt to add one more `Const` constructor in the same module, you'd have two 'functions' called `Const` in the same module. You can't have that.
|
This is somewhat horrible, but will basically let you do what you want:
```hs
{-# LANGUAGE PatternSynonyms, TypeFamilies, ViewPatterns #-}
data Prop = PropConst Bool
| PropVar Char
| PropNot Prop
| PropOr Prop Prop
| PropAnd Prop Prop
| PropImply Prop Prop
data Formula = FormulaConst Bool
| FormulaVar Prop
| FormulaNot Formula
| FormulaAnd Formula Formula
| FormulaOr Formula Formula
| FormulaImply Formula Formula
class PropOrFormula t where
type Var t
constructConst :: Bool -> t
deconstructConst :: t -> Maybe Bool
constructVar :: Var t -> t
deconstructVar :: t -> Maybe (Var t)
constructNot :: t -> t
deconstructNot :: t -> Maybe t
constructOr :: t -> t -> t
deconstructOr :: t -> Maybe (t, t)
constructAnd :: t -> t -> t
deconstructAnd :: t -> Maybe (t, t)
constructImply :: t -> t -> t
deconstructImply :: t -> Maybe (t, t)
instance PropOrFormula Prop where
type Var Prop = Char
constructConst = PropConst
deconstructConst (PropConst x) = Just x
deconstructConst _ = Nothing
constructVar = PropVar
deconstructVar (PropVar x) = Just x
deconstructVar _ = Nothing
constructNot = PropNot
deconstructNot (PropNot x) = Just x
deconstructNot _ = Nothing
constructOr = PropOr
deconstructOr (PropOr x y) = Just (x, y)
deconstructOr _ = Nothing
constructAnd = PropAnd
deconstructAnd (PropAnd x y) = Just (x, y)
deconstructAnd _ = Nothing
constructImply = PropImply
deconstructImply (PropImply x y) = Just (x, y)
deconstructImply _ = Nothing
instance PropOrFormula Formula where
type Var Formula = Prop
constructConst = FormulaConst
deconstructConst (FormulaConst x) = Just x
deconstructConst _ = Nothing
constructVar = FormulaVar
deconstructVar (FormulaVar x) = Just x
deconstructVar _ = Nothing
constructNot = FormulaNot
deconstructNot (FormulaNot x) = Just x
deconstructNot _ = Nothing
constructOr = FormulaOr
deconstructOr (FormulaOr x y) = Just (x, y)
deconstructOr _ = Nothing
constructAnd = FormulaAnd
deconstructAnd (FormulaAnd x y) = Just (x, y)
deconstructAnd _ = Nothing
constructImply = FormulaImply
deconstructImply (FormulaImply x y) = Just (x, y)
deconstructImply _ = Nothing
pattern Const x <- (deconstructConst -> Just x) where
Const x = constructConst x
pattern Var x <- (deconstructVar -> Just x) where
Var x = constructVar x
pattern Not x <- (deconstructNot -> Just x) where
Not x = constructNot x
pattern Or x y <- (deconstructOr -> Just (x, y)) where
Or x y = constructOr x y
pattern And x y <- (deconstructAnd -> Just (x, y)) where
And x y = constructAnd x y
pattern Imply x y <- (deconstructImply -> Just (x, y)) where
Imply x y = constructImply x y
{-# COMPLETE Const, Var, Not, Or, And, Imply :: Prop #-}
{-# COMPLETE Const, Var, Not, Or, And, Imply :: Formula #-}
```
If <https://gitlab.haskell.org/ghc/ghc/-/issues/8583> were ever done, then this could be substantially cleaned up.
| 11,590
|
46,700,236
|
When I use TensorFlow, I get the following error:
```
[W 09:27:49.213 NotebookApp] 404 GET /api/kernels/4e889506-2258-481c-b18e-d6a8e920b606/channels?session_id=0665F3F07C004BBAA7CDF6601B6E2BA1 (127.0.0.1): Kernel does not exist: 4e889506-2258-481c-b18e-d6a8e920b606
[W 09:27:49.266 NotebookApp] 404 GET /api/kernels/4e889506-2258-481c-b18e-d6a8e920b606/channels?session_id=0665F3F07C004BBAA7CDF6601B6E2BA1 (127.0.0.1) 340.85ms referer=None
[W 09:27:50.337 NotebookApp] /home/dxq/g++ doesn't exist
[W 09:27:50.514 NotebookApp] /home/dxq/gcc doesn't exist
[I 09:28:03.159 NotebookApp] Kernel started: aa5e56b4-df58-4e74-8dc1-96a4cee847aa
[I 09:28:04.032 NotebookApp] Adapting to protocol v5.1 for kernel aa5e56b4-df58-4e74-8dc1-96a4cee847aa
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:111] successfully opened CUDA library libcurand.so locally
E tensorflow/core/common_runtime/direct_session.cc:132] Internal: failed initializing StreamExecutor for CUDA device ordinal 0: Internal: failed call to cuCtxCreate: CUDA_ERROR_OUT_OF_MEMORY; total memory reported: 18446744071514750976
```
What's wrong here?
Here is the full spec:
```
ubuntu 16.04
cuda:8.0
python 2.7
```
|
2017/10/12
|
[
"https://Stackoverflow.com/questions/46700236",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7786383/"
] |
Run the following command in terminal:
```
nvidia-smi
```
You will get an output like this.
[](https://i.stack.imgur.com/FtqZs.png)
You will get a summary of the processes occupying the memory of your GPU. In notebooks, even if no cell is running currently, but previously being run and the local server is still on, the memory will be occupied. You will have to stop whichever process is occupying more memory to allocate some bandwidth for your current process to run.
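If the memory is being held by your own TensorFlow processes rather than a stray one, a config fragment like the following (TensorFlow 1.x API, matching the version implied by the question; it requires a working TensorFlow/GPU install) tells a session not to grab all GPU memory up front:

```python
import tensorflow as tf

# Let the session allocate GPU memory on demand instead of reserving
# the whole device at startup, which can trigger CUDA_ERROR_OUT_OF_MEMORY
# when another process already holds part of the memory.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
```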
|
Check the cuDNN version. It should be 5.1
| 11,591
|
23,790,460
|
I am new to Python and I installed the [`speech`](https://pypi.python.org/pypi/speech) library. But whenever I import `speech` from the Python shell, it gives this error:
```
>>> import speech
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import speech
File "C:\Python34\lib\site-packages\speech-0.5.2-py3.4.egg\speech.py", line 55, in <module>
from win32com.client import constants as _constants
File "C:\Python34\lib\site-packages\win32com\__init__.py", line 5, in <module>
import win32api, sys, os
ImportError: DLL load failed: The specified module could not be found.
```
|
2014/05/21
|
[
"https://Stackoverflow.com/questions/23790460",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3661976/"
] |
So I have contacted the author of R2jags and he has added an additional argument to jags.parallel that lets you pass envir, which is then passed on to clusterExport.
This works well except it allows clashes between the name of my data and variables in the jags.parallel function.
|
If you use JAGS intensively in parallel, I suggest you look at the package `rjags` combined with the package `dclone`. I think `dclone` is really powerful because the syntax is exactly the same as `rjags`.
I have never seen your problem with this package.
If you want to use `R2jags` I think you need to pass your variables and your init function to the workers with the function:
`clusterExport(cl, list("jags.data", "jags.params", "jags.inits"))`
| 11,592
|
61,163,289
|
I am pretty new to python and I am trying to swap the values of some variables in my code below:
```
def MutationPop(LocalBestInd,clmns,VNSdata):
import random
MutPop = []
for i in range(0,VNSdata[1]):
tmpMutPop = LocalBestInd
#generation of random numbers
RandomNums = []
while len(RandomNums) < 2:
r = random.randint(0,clmns-1)
if r not in RandomNums:
RandomNums.append(r)
RandomNums = sorted(RandomNums)
#apply swap to berths
tmpMutPop[0][RandomNums[0]] = LocalBestInd[0][RandomNums[1]]
tmpMutPop[0][RandomNums[1]] = LocalBestInd[0][RandomNums[0]]
#generation of random numbers
RandomNums = []
while len(RandomNums) < 2:
r = random.randint(0,clmns-1)
if r not in RandomNums:
RandomNums.append(r)
RandomNums = sorted(RandomNums)
#apply swap to vessels
tmpMutPop[1][RandomNums[0]] = LocalBestInd[1][RandomNums[1]]
tmpMutPop[1][RandomNums[1]] = LocalBestInd[1][RandomNums[0]]
MutPop.append(tmpMutPop)
Neighborhood = MutPop
return(Neighborhood)
```
My problem is that I do not want to change the variable "`LocalBestInd`" and want to use it as a reference to generate new "tmpMutPop"s in the loop, but the code sets "`LocalBestInd`" equal to "`tmpMutPop`" every time the loop iterates. The same problem happens for other assignments (e.g., `tmpMutPop[1][RandomNums[1]] = LocalBestInd[1][RandomNums[0]]`) in this code.
Would you please help me to solve this problem?
Thank you
Masoud
|
2020/04/11
|
[
"https://Stackoverflow.com/questions/61163289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13271276/"
] |
Based off the tutorial it looks like you've missed a crucial step.
You need to install `google-maps-react` dependency in your project.
In your console, navigate to your project root directory and run the following:
```
npm install --save google-maps-react
```
Another troubleshooting issue for those who are stuck is to DELETE your `node_modules` folder and the run `npm install` in the console.
This will reinstall all the required dependencies for your project.
---
**Note:**
Considering you've accidentally installed `google-map-react` instead of `google-maps-react`, I recommend uninstalling `google-map-react` since it's not being used.
Do that by running the following in your console:
```
npm uninstall --save google-map-react
```
|
I had the same issue. I fixed it by adding `declare module 'google-map-react'`; in file `react-app-env.d.ts`
Try it out and give feedback. By the way, I am using TS with React.
| 11,594
|
36,531,404
|
I have a scenario, where I have to call a certain Python script multiple times in another python script.
script1:
```
import sys
path=sys.argv
print "I am a test"
print "see! I do nothing productive."
print "path:",path[1]
```
script2:
```
import subprocess
l=list()
l.append('root')
l.append('root1')
l.append('root2')
for i in l:
cmd="python script1.py i"
subprocess.Popen(cmd,shell=True)
```
Here, my issue is that in script 2, I am not able to replace the value of "i" in the for loop.
Can you help with that?
|
2016/04/10
|
[
"https://Stackoverflow.com/questions/36531404",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5820814/"
] |
To substitute the value of i into the string you can concatenate it:
```
cmd="python script1.py "+i
```
or format it into the string:
```
cmd="python script1.py %s"%i
```
Either way you need to use the variable i instead of the string i.
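As a side note, a sketch of an alternative (assuming `script1.py` should run under the same interpreter): passing the argument as a separate list element avoids `shell=True` and any quoting issues entirely.

```python
import subprocess
import sys

def build_cmd(name):
    # Each list element becomes one argv entry -- no shell parsing involved,
    # so names with spaces or special characters are passed through safely.
    return [sys.executable, "script1.py", name]

for i in ["root", "root1", "root2"]:
    cmd = build_cmd(i)
    # subprocess.Popen(cmd)  # uncomment to actually launch script1.py
    print(cmd)
```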
|
I think you are looking for this:
```
cmd="python script1.py %s" % i
```
| 11,595
|
3,400,144
|
All,
I am familiar with the ability to fake GPS information to the emulator through the use of the `geo fix long lat altitude` command when connected through the emulator.
What I'd like to do is have a simulation running on potentially a different computer produce lat, long, altitudes that should be sent over to the Android device to fake the emulator into thinking it has received a GPS update.
I see various solutions for [scripting a telnet session](https://stackoverflow.com/questions/709801/creating-a-script-for-a-telnet-session); it seems like the best solution is, in pseudocode:
```
while true:
if position update generated / received
open subprocess and call "echo 'geo fix lon lat altitude' | nc localhost 5554"
```
This seems like a big hack, although it works on Mac (not on Windows). Is there a better way to do this? (I cannot generate the tracks ahead of time and feed them in as a route; the Android system is part of a real time simulation, but as it's running on an emulator there is no position updates. Another system is responsible for calculating these position updates).
edit:
Alternative method, perhaps more clean, is to use the telnetlib library of python.
```
import telnetlib
tn = telnetlib.Telnet("localhost",5554)
while True:
if position update generated / received
tn.write("geo fix longitude latitude altitude\r\n")
```
|
2010/08/03
|
[
"https://Stackoverflow.com/questions/3400144",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/155392/"
] |
The response you're seeing is an empty response which doesn't necessarily mean there's no metric data available. A few ideas what might cause this:
* Are you using a user access token? If yes, does the user own the page? Is the 'read\_insights' extended permission granted for the user / access token? How about 'offline\_access'?
* end\_time should be specified as midnight, Pacific Time.
* Valid periods are 86400, 604800, 2592000 (day, week, month)
* Does querying 'page\_fan\_adds' metric yield meaningful results for a given period?
While I haven't worked with the insights table, working with Facebook's FQL taught me not to expect error messages or error codes, but to follow the documentation (if available) and then experiment with it...
As for the date, use the following ruby snippet for midnight, today:
```
Date.new(2010,9,14).to_time.to_i
```
---
I also found the following on the Graph API documentation page:
>
> **Impersonation**
>
>
> You can impersonate pages administrated by your users by requesting the "manage\_pages" extended permission.
>
>
> Once a user has granted your application the "manage\_pages" permission, the "accounts" connection will yield an additional access\_token property for every page administrated by the current user. These access\_tokens can be used to make calls on behalf of a page. The permissions granted by a user to your application will now also be applicable to their pages. ([source](http://developers.facebook.com/docs/api))
>
>
>
Have you tried requesting this permission and use &metadata=1 in a Graph API query to get the access token for each account?
|
If you want to know the number of fans a facebook page has, use something like:
```
https://graph.facebook.com/cocacola
```
The response contains a `fan_count` property.
| 11,597
|
44,364,458
|
Currently I'm using Eclipse with the Nokia/RED plugin, which allows me to write Robot Framework test suites, using Python 3.6 and Selenium.
My project is called "Automation" and Test suites are in `.robot` files.
Test suites have test cases which are called "Keywords".
**Test Cases**
Create New Vehicle
```
Create new vehicle with next ${registrationno} and ${description}
Navigate to data section
```
Those "Keywords" are imported from python library and look like:
```
@keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
basicVehicleCreation = sideBarPage.createNewVehicle()
basicVehicleCreation.setKennzeichen(registrationno)
basicVehicleCreation.setBeschreibung(description)
TestCaseKeywords.carnumber = basicVehicleCreation.save()
```
The problem is that when I run the test cases, the log only shows the result of the whole Python function: pass or fail. I can't see at which step it failed: was it the first or the second step of the function?
Is there any plugin or other solution that would let me see which exact Python function passed or failed? (Of course, a workaround is to use a keyword in the test case for every function, but that is not what I prefer.)
|
2017/06/05
|
[
"https://Stackoverflow.com/questions/44364458",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8113230/"
] |
If you need to "step into" a python defined keyword you need to use python debugger together with RED.
This can be done with any Python debugger; if you would like to have everything in one application, PyDev can be used with RED.
Follow the help document below; if you face any problems, leave a comment here.
[RED Debug with PyDev](http://nokia.github.io/RED/help/user_guide/launching/robot_python_debug.html)
|
If you are wanting to know which statement in the python-based keyword failed, you simply need to have it throw an appropriate error. Robot won't do this for you, however. From a reporting standpoint, a python based keyword is a black box. You will have to explicitly add logging messages, and return useful errors.
For example, the call to `sideBarPage.createNewVehicle()` should throw an exception such as "unable to create new vehicle". Likewise, the call to `basicVehicleCreation.setKennzeichen(registrationno)` should raise an error like "failed to register the vehicle".
If you don't have control over those methods, you can do the error handling from within your keyword:
```
@keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
try:
basicVehicleCreation = sideBarPage.createNewVehicle()
except:
raise Exception("unable to create new vehicle")
try:
basicVehicleCreation.setKennzeichen(registrationno)
except:
        raise Exception("unable to register new vehicle")
...
```
| 11,604
|
54,752,681
|
I am working on a thesis regarding Jacobsthal sequences (A001045) and how they can be considered as being composed of some number of distinct sub-sequences. I have made a comment on A077947 indicating this and have included a python program. Unfortunately the program as written leaves a lot to be desired and so of course I wanted to turn to Stack to see if anyone here knows how to improve the code!
**Here is the code:**
```
a=1
b=1
c=2
d=5
e=9
f=18
for x in range(0, 100):
print(a, b, c, d, e, f)
a = a+(36*64**x)
b = b+(72*64**x)
c = c+(144*64**x)
d = d+(288*64**x)
e = e+(576*64**x)
f = f+(1152*64**x)
```
**I explain the reasoning behind this as follows:**
>
> The sequence A077947 is generated by 6 digital root preserving sequences
> stitched together; per the Python code these sequences initiate at the
> seed values a-f. The number of iterations required to calculate a given
> A077947 a(n) is ~n/6. The code when executed returns all the values for
> A077947 up to range(x), or ~x\*6 terms of A077947. I find the repeated
> digital roots interesting as I look for periodic digital root preservation
> within sequences as a method to identify patterns within data. For
> example, digital root preserving sequences enable time series analysis of
> large datasets when estimating true-or-false status for alarms in large IT
> ecosystems that undergo maintenance (mod7 environments); such analysis is
> also related to predicting consumer demand / patterns of behavior.
> Appropriating those methods of analysis, carving A077947 into 6 digital
> root preserving sequences was meant to reduce complexity; the Python code
> reproduces A077947 across 6 "channels" with seed values a-f. This long
> paragraph boils down to statement, "The digital roots of the terms of the
> sequence repeat in the pattern (1, 1, 2, 5, 9, 9)." The bigger statement
> is that all sequences whose digital roots repeat with a pattern can be
> partitioned/separated into an equal number of distinct sequences and those
> sequences can be calculated independently. There was a bounty related to
> this sequence.
>
>
>
This code is ugly but I cannot seem to get the correct answer without coding it this way;
I have not figured out how to write this as a function due to the fact I cannot seem to get the recurrence values to store properly in a function.
So of course if this yields good results we hope to link the discussion to the OEIS references.
**Here is a link to the sequence:**
<https://oeis.org/A077947>
|
2019/02/18
|
[
"https://Stackoverflow.com/questions/54752681",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1204443/"
] |
Here's an alternative way to do it without a second for loop:
```
sequences = [ 1, 1, 2, 5, 9, 18 ]
multipliers = [ 36, 72, 144, 288, 576, 1152 ]
for x in range(100):
print(*sequences)
sequences = [ s + m*64**x for s,m in zip(sequences,multipliers) ]
```
[EDIT] Looking at the values I noticed that this particular sequence could also be obtained with:
N[i+1] = 2 \* N[i] + (-1,0,1 in rotation)
or
N[i+1] = 2 \* N[i] + i mod 3 - 1 *(assuming a zero based index)*
```
i N[i] [-1,0,1] N[i+1]
0 1 -1 --> 2*1 - 1 --> 1
1 1 0 --> 2*1 + 0 --> 2
2 2 1 --> 2*2 + 1 --> 5
3 5 -1 --> 2*5 - 1 --> 9
4 9 0 --> 2*9 + 0 --> 18
...
```
So a simpler loop to produce the sequence could be:
```
n = 1
for i in range(100):
print(n)
n = 2*n + i % 3 - 1
```
Using the reduce function from functools can make this even more concise:
```
from functools import reduce
sequence = reduce(lambda s,i: s + [s[-1]*2 + i%3 - 1],range(20),[1])
print(sequence)
>>> [1, 1, 2, 5, 9, 18, 37, 73, 146, 293, 585, 1170, 2341, 4681, 9362, 18725, 37449, 74898, 149797, 299593, 599186]
```
Using your multi-channel approach and my suggested formula this would give:
```
sequences = [ 1, 1, 2, 5, 9, 18 ]
multipliers = [ 36, 72, 144, 288, 576, 1152 ]
allSequences = reduce(lambda ss,x: ss + [[ s + m*64**x for s,m in zip(ss[-1],multipliers) ]],range(100),[sequences])
for seq in allSequences: print(*seq) # print 6 by 6
```
[EDIT2] If all your sequences are going to have a similar pattern (i.e. starting channels, multipliers and calculation formula), you could generalize the printing of such sequences in a function thus only needing one line per sequence:
```
def printSeq(calcNext,sequence,multipliers,count):
for x in range(count):
print(*sequence)
sequence = [ calcNext(x,s,m) for s,m in zip(sequence,multipliers) ]
printSeq(lambda x,s,m:s+m*64**x,[1,1,2,5,9,18],multipliers=[36,72,144,288,576,1152],count=100)
```
[EDIT3] Improving on the printSeq function.
I believe you will not always need an array of multipliers to compute the next value in each channel. An improvement on the function would be to provide a channel index to the lambda function instead of a multiplier. This will allow you to use an an array of multiplier if you need to but will also let you use a more general calculation.
```
def printSeq(name,count,calcNext,sequence):
p = len(sequence)
for x in range(count):
print(name, x,":","\t".join(str(s) for s in sequence))
sequence = [ calcNext(x,s,c,p) for c,s in enumerate(sequence) ]
```
The lambda function is given 4 parameters and is expected to return the next sequence value for the specified channel:
```
s : current sequence value for the channel
x : iteration number
c : channel index (zero based)
p : number of channels
```
So, using an array inside the formula would express it like this:
```
printSeq("A077947",100,lambda x,s,c,p: s + [36,72,144,288,576,1152][c] * 64**x, [1,1,2,5,9,18])
```
But you could also use a more general formula that is based on the channel index (and number of channels):
```
printSeq("A077947",100,lambda x,s,c,p: s + 9 * 2**(p*x+c+2), [1,1,2,5,9,18])
```
or ( 6 channels based on 2\*S + i%3 - 1 ):
```
printSeq("A077947",100,lambda x,s,c,p: 64*s + 9*(c%3*2 - (c+2)%3 - 1) ,[1,1,2,5,9,18])
printSeq("A077947",100,lambda x,s,c,p: 64*s + 9*[-3,1,2][c%3],[1,1,2,5,9,18])
```
My reasoning here is that if you have a function that can compute the next value based on the current index and value in the sequence, you should be able to define a striding function that will compute the value that is N indexes farther.
Given F(i,S[i]) --> i+1,S[i+1]
```
F2(i,S[i]) --> i+2,S[i+2] = F(F(i,S[i]))
F3(i,S[i]) --> i+3,S[i+3] = F(F(F(i,S[i])))
...
F6(i,S[i]) --> i+6,S[i+6] = F(F(F(F(F(F(i,S[i]))))))
...
Fn(i,S[i]) --> i+n,S[i+n] = ...
```
This will always work and should not require an array of multipliers. Most of the time it should be possible to simplify Fn using mere algebra.
for example A001045 : F(i,S) = i+1, 2\*S + (-1)\*\*i
```
printSeq("A001045",20,lambda x,s,c,p: 64*s + 21*(-1)**(x*p+c),[0,1,1,3,5,11])
```
Note that from the 3rd value onward, the next value in that sequence can be computed without knowing the index:
A001045: F(S) = 2\*S + 1 - 2\*0\*\*((S+1)%4)
|
This will behave identically to your code, and is arguably prettier. You'll probably see ways to make the magic constants less arbitrary.
```
factors = [ 1, 1, 2, 5, 9, 18 ]
cofactors = [ 36*(2**n) for n in range(6) ]
for x in range(10):
print(*factors)
for i in range(6):
factors[i] = factors[i] + cofactors[i] * 64**x
```
To calculate just one of the subsequences, it would be enough to keep `i` fixed as you iterate.
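A minimal sketch of that, keeping the channel fixed (here the fourth channel: seed 5, multiplier 288, using the same recurrence `s -> s + m * 64**x` as above):

```python
# Generate one of the six subsequences on its own: seed 5, multiplier 288.
s, m = 5, 288
channel = []
for x in range(5):
    channel.append(s)
    s = s + m * 64 ** x
print(channel)  # [5, 293, 18725, 1198373, 76695845]
```

These are exactly every sixth term of A077947 starting from 5.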
| 11,605
|
12,884,512
|
I am taking my first steps in learning Python, so please excuse my questions. I want to run the code below (taken from: <http://docs.python.org/library/ssl.html>):
```
import socket, ssl, pprint
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# require a certificate from the server
ssl_sock = ssl.wrap_socket(s,
ca_certs="F:/cert",
cert_reqs=ssl.CERT_REQUIRED)
ssl_sock.connect(('www.versign.com', 443))
print repr(ssl_sock.getpeername())
print ssl_sock.cipher()
print pprint.pformat(ssl_sock.getpeercert())
# Set a simple HTTP request -- use httplib in actual code.
ssl_sock.write("""GET / HTTP/1.0\r
Host: www.verisign.com\r\n\r\n""")
# Read a chunk of data. Will not necessarily
# read all the data returned by the server.
data = ssl_sock.read()
# note that closing the SSLSocket will also close the underlying socket
ssl_sock.close()
```
I got the following errors:
```
Traceback (most recent call last):
  File "C:\Users\e\workspace\PythonTesting\source\HelloWorld.py", line 38, in <module>
    ssl_sock.connect(('www.versign.com', 443))
  File "C:\Python27\lib\ssl.py", line 331, in connect
    self._real_connect(addr, False)
  File "C:\Python27\lib\ssl.py", line 314, in _real_connect
    self.ca_certs, self.ciphers)
ssl.SSLError: [Errno 185090050] _ssl.c:340: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib
```
The error reporting in Python does not seem very helpful for finding the source of the problem, though I might be mistaken. Can anybody help me figure out what the problem in the code is?
|
2012/10/14
|
[
"https://Stackoverflow.com/questions/12884512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1476749/"
] |
Your code is referring to a certificate *file* on drive 'F:' (using the `ca_certs` parameter), which is not found during execution -- is there one?
See the relevant [documentation](http://docs.python.org/library/ssl.html#ssl.wrap_socket):
>
> The ca\_certs file contains a set of concatenated “certification
> authority” certificates, which are used to validate certificates
> passed from the other end of the connection.
>
>
>
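As a quick sanity check, you can verify the file exists before handing it to `wrap_socket` (hypothetical helper; the path is the one from the question):

```python
import os

def check_ca_file(path):
    # wrap_socket fails with an obscure SSLError when the ca_certs file is
    # missing, so check for it up front and report a readable message instead.
    if not os.path.isfile(path):
        return "missing: %s" % path
    return "ok: %s" % path

print(check_ca_file("F:/cert"))  # the ca_certs path from the question
```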
|
Does the certificate referenced exist on your filesystem? I think that error is in response to invalid cert from this code:
`ssl_sock = ssl.wrap_socket(s, ca_certs="F:/cert", cert_reqs=ssl.CERT_REQUIRED)`
| 11,608
|
68,900,182
|
If I have a string which holds the name of a Python data type, how would I check whether another variable is of that type? Example below.
```
dtype = 'str'
x = 'hello'
bool = type(x) == dtype
```
The above obviously returns False but I'd like to check that type('hello') is a string.
|
2021/08/23
|
[
"https://Stackoverflow.com/questions/68900182",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16737078/"
] |
You can use `eval`:
```
bool = type(x) is eval(dtype)
```
but beware, `eval` will execute any python code, so if you're taking `dtype` as user input, they can execute their own code in this line.
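If `dtype` can come from user input, a safer alternative than `eval` is to whitelist the type names you are willing to accept (the mapping below is illustrative; extend it as needed):

```python
# map permitted type names to the actual type objects
ALLOWED_TYPES = {"str": str, "int": int, "float": float, "list": list}

def is_type(value, type_name):
    """Return True if value is an instance of the named type."""
    return isinstance(value, ALLOWED_TYPES[type_name])
```

An unknown or malicious name raises `KeyError` instead of executing arbitrary code.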
|
If your code *actually* looks like the example you showed and `dtype` isn't coming from user input, then also keep in mind that `str` (as a value in Python) is a valid object which represents the string type. Consider
```
dtype = str
x = 'hello'
print(isinstance(x, dtype))
```
`str` is a value like any other and can be assigned to variables. No `eval` magic required.
| 11,610
|
24,944,863
|
I would like to use the Decimal() data type in python and convert it to an integer and exponent so I can send that data to a microcontroller/plc with full precision and decimal control. <https://docs.python.org/2/library/decimal.html>
I have got it to work, but it is hackish; does anyone know a better way? If not, what path would I take to write a lower-level `as_int()` function myself?
Example code:
```
from decimal import *
d=Decimal('3.14159')
t=d.as_tuple()
if t[0] == 0:
sign=1
else:
sign=-1
digits= t[1]
theExponent=t[2]
theInteger=sign * int(''.join(map(str,digits)))
theExponent
theInteger
```
For those that haven't programmed PLCs: my alternative to this is to use an int and declare the decimal point in both systems, or use a floating point (which only some PLCs support, and which is lossy). So you can see why being able to do this would be awesome!
Thanks in advance!
|
2014/07/24
|
[
"https://Stackoverflow.com/questions/24944863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3862210/"
] |
```
from functools import reduce # Only in Python 3, omit this in Python 2.x
from decimal import *
d = Decimal('3.14159')
t = d.as_tuple()
theInteger = reduce(lambda rst, x: rst * 10 + x, t.digits)
theExponent = t.exponent
```
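Note that `t.digits` carries no sign, so the `reduce` version above returns a positive integer for negative decimals. A variant that folds the sign back in, combining this answer with the question's sign handling (the helper name is made up):

```python
from decimal import Decimal

def as_int_exp(d):
    """Return (integer, exponent) such that integer * 10**exponent == d."""
    sign, digits, exponent = d.as_tuple()
    n = 0
    for digit in digits:
        n = n * 10 + digit
    return (-n if sign else n, exponent)
```

For example, `as_int_exp(Decimal('-3.14159'))` gives `(-314159, -5)`.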
|
```
from decimal import *
d=Decimal('3.14159')
t=d.as_tuple()
digits=t.digits
theInteger=0
for x in range(len(digits)):
    theInteger=theInteger+digits[x]*10**(len(digits)-1-x)
```
| 11,614
|
3,950,330
|
Is there a way to change python2.x source code to python 3.x manually. I guess using lib2to3 this can be done but I don't know exactly how to do this ?
|
2010/10/16
|
[
"https://Stackoverflow.com/questions/3950330",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/441459/"
] |
Thanks. Here is the answer I was looking for:
```
from lib2to3.refactor import RefactoringTool, get_fixers_from_package
"""assume `files` to a be a list of all filenames you want to convert"""
r = RefactoringTool(get_fixers_from_package('lib2to3.fixes'))
r.refactor(files, write=True)
```
|
Yes, porting is what you are looking here.
Porting is a non-trivial task that requires making various decisions about your code. For instance, whether or not you want to maintain backward compatibility. There is no single, universal solution to porting. The way you port depends on your specific requirements.
The best resource I have found for porting apps from Python 2 to 3 is the wiki page [PortingPythonToPy3k](http://wiki.python.org/moin/PortingPythonToPy3k). The page contains several approaches to porting as well as a lot of links to resources that are potentially helpful in porting work.
| 11,620
|
6,156,358
|
The example from [this post](https://stackoverflow.com/questions/6144274/string-replace-utility-conversion-from-python-to-f) has an example
```
open System.IO
let lines =
File.ReadAllLines("tclscript.do")
|> Seq.map (fun line ->
let newLine = line.Replace("{", "{{").Replace("}", "}}")
newLine )
File.WriteAllLines("tclscript.txt", lines)
```
that gives a compilation error.
```
error FS0001: This expression was expected to have type
string []
but here has type
seq<string>
```
How to convert seq to string[] to remove this error message?
|
2011/05/27
|
[
"https://Stackoverflow.com/questions/6156358",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] |
Building on Jaime's answer, since `ReadAllLines()` returns an array, just use `Array.map` instead of `Seq.map`
```
open System.IO
let lines =
File.ReadAllLines("tclscript.do")
|> Array.map (fun line ->
let newLine = line.Replace("{", "{{").Replace("}", "}}")
newLine )
File.WriteAllLines("tclscript.txt", lines)
```
|
You can use
```
File.WriteAllLines("tclscript.txt", Seq.toArray lines)
```
or alternatively just attach
```
|> Seq.toArray
```
after the Seq.map call.
(Also note that in .NET 4, there is an overload of WriteAllLines that does take a Seq)
| 11,621
|
34,464,872
|
I have downloaded a mesh exporter script to learn how to write an export script in Python for Blender (2.6.3).
The script follows the standard register/unregister pattern to register or unregister the add-on.
```
### REGISTER ###
def menu_func(self, context):
self.layout.operator(Export_objc.bl_idname, text="Objective-C Header (.h)")
def register():
bpy.utils.register_module(__name__)
bpy.types.INFO_MT_file_export.append(menu_func)
def unregister():
bpy.utils.unregister_module(__name__)
bpy.types.INFO_MT_file_export.remove(menu_func)
###if __name__ == "__main__":
### register()
unregister()
```
The issue is that when I use Run Script to run the script from the text editor (after changing it to unregister upon run), it removes the script but leaves an unclickable leftover in the export menu which I cannot remove.
If I run the register again, it will turn the inactive menu option back into a clickable exporter menu item, but it will also add another copy of the menu item.
The reason I want to keep registering and unregistering is mostly because I want to make changes and test them out...
Maybe I should run the function directly without registering but even though now I have this in my export menu:
[](https://i.stack.imgur.com/YAeX2.png)
How do I remove these items and avoid ending up with many versions of my script in the export menu (depending on whether I made changes)? Also, should I just call the function directly instead of register/unregister while I am fiddling with the script and trying things out?
|
2015/12/25
|
[
"https://Stackoverflow.com/questions/34464872",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1097185/"
] |
Well I have found a workable way...
If you press 'F8' it will reload all plugins and remove the "dead" menu items.
That solves the multiple additions of the same addon.
So now if I want to change the addon and test it I do something like this:
1. Run script with unregister
2. Press F8
3. Run script with register
That is how I update the addon and there is the additional step of actually running it from the export/import menu.
If you have an easier way to test changes for the addon please let me know...
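When iterating from the text editor, another common pattern is to attempt an unregister before every register, so re-running the script replaces the menu entry instead of stacking duplicates. A generic sketch (the helper name is made up; in the addon you would pass the `register`/`unregister` functions defined in the script):

```python
def reload_addon(register, unregister):
    """Unregister any previous version before registering again,
    so repeated runs do not stack duplicate menu items."""
    try:
        unregister()
    except Exception:
        pass  # first run: nothing registered yet
    register()
```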
|
I am not 100% certain of the cause but it relates to running an addon script that adds a menu item within blender's text editor. Even blender's template scripts do the same thing.
I think the best solution is to use it like a real addon - that is save it to disk and enable/disable it in the addon preferences. You can either save it to the installed addons folder, within your [user settings folder](http://www.blender.org/manual/getting_started/installing_blender/directorylayout.html) or create a folder and set the [file path for scripts](http://www.blender.org/manual/preferences/file.html#scripts-path). You could also use the [Install from File](http://www.blender.org/manual/advanced/scripting/python/add_ons.html#installation-of-a-3rd-party-add-on) button in the addons preferences.
| 11,624
|
48,967,621
|
I will admit I'm stuck on a school project right now.
I have defined functions that will generate random numbers for me, as well as a random operator (+, -, or \*).
I have also defined a function that will display a problem using these random numbers.
I have created a program that will generate and display a random problem and ask the user for the solution. If the user is correct, the program prints 'Correct', and the opposite if the user is incorrect.
I have put all of this inside of a loop that will make it repeat 10 times. My issue is that I need it to generate 10 different problems instead of the same problem that it randomized the first time, 10 times.
Sorry for the weird wording.
\*I am using python but am showing the code here using the CSS viewer because I couldn't get it to display any other way.
Thank you.
```
import random
max = 10
def getOp(max): #generates a random number between 1 and 10
randNum = random.randint(0,max)
return randNum
randNum = getOp(max)
def getOperator(): #gets a random operator
opValue = random.randint(1,3)
if opValue == 1:
operator1 = '+'
elif opValue == 2:
operator1 = '-'
elif opValue == 3:
operator1 = '*'
return operator1
operand1 = getOp(max)
operand2 = getOp(max)
operator = getOperator()
def doIt(operand1, operand2, operator): #does the problem so we can verify with user
if operator == '+':
answer = operand1 + operand2
elif operator == '-':
answer = operand1 - operand2
elif operator == '*':
answer = operand1 * operand2
return answer
answer = doIt(operand1, operand2, operator)
def displayProblem(operand1, operand2, operator): #displays the problem
print(operand1, operator, operand2, '=')
###My program:
for _ in range(10): #loops the program 10 times
displayProblem(operand1, operand2, operator)
userSolution = int(input('Please enter your solution: '))
if userSolution == doIt(operand1, operand2, operator):
print('Correct')
elif userSolution != doIt(operand1, operand2, operator):
print('Incorrect')
```
|
2018/02/24
|
[
"https://Stackoverflow.com/questions/48967621",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7856878/"
] |
Just move your code that is generating the random values into your for loop:
```
for _ in range(10): #loops the program 10 times
randNum = getOp(max)
operand1 = getOp(max)
operand2 = getOp(max)
operator = getOperator()
answer = doIt(operand1, operand2, operator)
displayProblem(operand1, operand2, operator)
userSolution = int(input('Please enter your solution: '))
if userSolution == doIt(operand1, operand2, operator):
print('Correct')
elif userSolution != doIt(operand1, operand2, operator):
print('Incorrect')
```
That way it's called each time before you ask the user for input.
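As an aside, the three-way `if`/`elif` chains in `getOperator` and `doIt` can be collapsed into a single dictionary of functions from the `operator` module. A sketch (not required by the assignment; `make_problem` is a made-up name):

```python
import operator
import random

# map each symbol to the function that applies it
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def make_problem(max_operand=10):
    """Return (a, b, symbol, answer) for one freshly randomized problem."""
    a = random.randint(0, max_operand)
    b = random.randint(0, max_operand)
    symbol = random.choice(list(OPS))
    return a, b, symbol, OPS[symbol](a, b)
```

Calling `make_problem()` inside the loop then guarantees a new problem every iteration.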
|
You generate the problem and then show it 10 times in the loop:
```
generateProblem()
for _ in range(10):
showProblem()
```
of course you will get the same problem shown 10 times. To fix this, generate the problem *inside* the loop:
```
for _ in range(10):
generateProblem()
showProblem()
```
| 11,625
|
17,370,820
|
I have come across some python code with slice notation that I am having trouble figuring out.
It looks like slice notation but uses a comma and a list:
```
list[:, [1, 2, 3]]
```
Is this syntax valid? If so what does it do?
**edit** looks like it is a 2D numpy array
|
2013/06/28
|
[
"https://Stackoverflow.com/questions/17370820",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2502012/"
] |
Assuming that the object is really a `numpy` array, this is known as [advanced indexing](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#advanced-indexing), and picks out the specified columns:
```
>>> import numpy as np
>>> a = np.arange(12).reshape(3,4)
>>> a
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11]])
>>> a[:, [1,2,3]]
array([[ 1, 2, 3],
[ 5, 6, 7],
[ 9, 10, 11]])
>>> a[:, [1,3]]
array([[ 1, 3],
[ 5, 7],
[ 9, 11]])
```
Note that this won't work with the standard Python list:
```
>>> a.tolist()
[[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
>>> a.tolist()[:,[1,2,3]]
Traceback (most recent call last):
File "<ipython-input-17-7d77de02047a>", line 1, in <module>
a.tolist()[:,[1,2,3]]
TypeError: list indices must be integers, not tuple
```
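For completeness, the closest plain-list equivalent of `a[:, [1, 2, 3]]` is a nested list comprehension:

```python
a = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
cols = [1, 2, 3]
# pick the listed columns from every row
picked = [[row[c] for c in cols] for row in a]
print(picked)  # [[1, 2, 3], [5, 6, 7], [9, 10, 11]]
```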
|
It builds a tuple from the slice and the list and passes it to [`__getitem__()`](http://docs.python.org/2/reference/datamodel.html#object.__getitem__):
```
>>> class Foo(object):
... def __getitem__(self, val):
... print val
...
>>> Foo()[:, [1, 2, 3]]
(slice(None, None, None), [1, 2, 3])
```
What it actually *performs* depends on the type being indexed.
| 11,628
|
61,996,756
|
When I run `npm install` on my Ionic project with Angular, the install of node-sass/node-gyp fails.
The error shown is:
>
> $ npm install
>
>
>
> >
> > node-sass@4.10.0 install C:\Users\d\Documents\project\app\node\_modules\node-sass
> > node scripts/install.js
> >
> >
> >
>
>
> Downloading binary from
> <https://github.com/sass/node-sass/releases/download/v4.10.0/win32-x64-72_binding.node>
> Cannot download
> "<https://github.com/sass/node-sass/releases/download/v4.10.0/win32-x64-72_binding.node>":
>
>
> HTTP error 404 Not Found
>
>
> Hint: If github.com is not accessible in your location
> try setting a proxy via HTTP\_PROXY, e.g.
>
>
>
> ```
> export HTTP_PROXY=http://example.com:1234
>
> ```
>
> or configure npm proxy via
>
>
>
> ```
> npm config set proxy http://example.com:8080
>
> ```
>
>
> >
> > node-sass@4.10.0 postinstall C:\Users\d\Documents\project\app\node\_modules\node-sass
> > node scripts/build.js
> >
> >
> >
>
>
> Building: C:\Program Files\nodejs\node.exe
> C:\Users\d\Documents\project\app\node\_modules\node-gyp\bin\node-gyp.js
> rebuild --verbose --libsass\_ext= --libsass\_cflags= --libsass\_ldflags=
> --libsass\_library= gyp info it worked if it ends with ok gyp verb cli [ gyp verb cli 'C:\Program Files\nodejs\node.exe', gyp verb cli
>
> 'C:\Users\d\Documents\project\app\node\_modules\node-gyp\bin\node-gyp.js',
> gyp verb cli 'rebuild', gyp verb cli '--verbose', gyp verb cli
>
> '--libsass\_ext=', gyp verb cli '--libsass\_cflags=', gyp verb cli
>
> '--libsass\_ldflags=', gyp verb cli '--libsass\_library=' gyp verb cli
> ] gyp info using node-gyp@3.8.0 gyp info using node@12.13.1 | win32 |
> x64 gyp verb command rebuild [] gyp verb command clean [] gyp verb
> clean removing "build" directory gyp verb command configure [] gyp
> verb check python checking for Python executable
> "C:\Users\d.windows-build-tools\python27\python.exe" in the PATH gyp
> verb `which` failed Error: not found:
> C:\Users\d.windows-build-tools\python27\python.exe gyp verb `which`
> failed at getNotFoundError
> (C:\Users\d\Documents\project\app\node\_modules\which\which.js:13:12)
> gyp verb `which` failed at F
> (C:\Users\d\Documents\project\app\node\_modules\which\which.js:68:19)
> gyp verb `which` failed at E
> (C:\Users\d\Documents\project\app\node\_modules\which\which.js:80:29)
> gyp verb `which` failed at
> C:\Users\d\Documents\project\app\node\_modules\which\which.js:89:16 gyp
> verb `which` failed at
> C:\Users\d\Documents\project\app\node\_modules\isexe\index.js:42:5 gyp
> verb `which` failed at
> C:\Users\d\Documents\project\app\node\_modules\isexe\windows.js:36:5
> gyp verb `which` failed at FSReqCallback.oncomplete (fs.js:158:21)
> gyp verb `which` failed
> C:\Users\d.windows-build-tools\python27\python.exe Error: not found:
> C:\Users\d.windows-build-tools\python27\python.exe gyp verb `which`
> failed at getNotFoundError
> (C:\Users\d\Documents\project\app\node\_modules\which\which.js:13:12)
> gyp verb `which` failed at F
> (C:\Users\d\Documents\project\app\node\_modules\which\which.js:68:19)
> gyp verb `which` failed at E
> (C:\Users\d\Documents\project\app\node\_modules\which\which.js:80:29)
> gyp verb `which` failed at
> C:\Users\d\Documents\project\app\node\_modules\which\which.js:89:16 gyp
> verb `which` failed at
> C:\Users\d\Documents\project\app\node\_modules\isexe\index.js:42:5 gyp
> verb `which` failed at
> C:\Users\d\Documents\project\app\node\_modules\isexe\windows.js:36:5
> gyp verb `which` failed at FSReqCallback.oncomplete (fs.js:158:21)
> { gyp verb `which` failed stack: 'Error: not found:
> C:\Users\d\.windows-build-tools\python27\python.exe\n' + gyp verb
> `which` failed ' at getNotFoundError
> (C:\Users\d\Documents\project\app\node\_modules\which\which.js:13:12)\n'
> + gyp verb `which` failed ' at F (C:\Users\d\Documents\project\app\node\_modules\which\which.js:68:19)\n'
> + gyp verb `which` failed ' at E (C:\Users\d\Documents\project\app\node\_modules\which\which.js:80:29)\n'
> + gyp verb `which` failed ' at C:\Users\d\Documents\project\app\node\_modules\which\which.js:89:16\n'
> + gyp verb `which` failed ' at C:\Users\d\Documents\project\app\node\_modules\isexe\index.js:42:5\n'
> + gyp verb `which` failed ' at C:\Users\d\Documents\project\app\node\_modules\isexe\windows.js:36:5\n'
> + gyp verb `which` failed ' at FSReqCallback.oncomplete (fs.js:158:21)', gyp verb `which` failed code: 'ENOENT' gyp verb
> `which` failed } gyp verb could not find
> "C:\Users\d.windows-build-tools\python27\python.exe". checking python
> launcher gyp verb could not find
> "C:\Users\d.windows-build-tools\python27\python.exe". guessing
> location gyp verb ensuring that file exists: C:\Python27\python.exe
> gyp verb check python version `C:\Python27\python.exe -c "import sys;
> print "2.7.16 gyp verb check python version .%s.%s" %
> sys.version_info[:3];"` returned: %j gyp verb get node dir no --target
> version specified, falling back to host node version: 12.13.1 gyp verb
> command install [ '12.13.1' ] gyp verb install input version string
> "12.13.1" gyp verb install installing version: 12.13.1 gyp verb
> install --ensure was passed, so won't reinstall if already installed
> gyp verb install version is already installed, need to check
> "installVersion" gyp verb got "installVersion" 9 gyp verb needs
> "installVersion" 9 gyp verb install version is good gyp verb get node
> dir target node version installed: 12.13.1 gyp verb build dir
> attempting to create "build" dir:
> C:\Users\d\Documents\project\app\node\_modules\node-sass\build gyp verb
> build dir "build" dir needed to be created?
> C:\Users\d\Documents\project\app\node\_modules\node-sass\build gyp verb
> find vs2017 Found installation at: C:\Program Files (x86)\Microsoft
> Visual Studio\2019\Enterprise gyp verb find vs2017 - Found
> Microsoft.VisualStudio.Component.Windows10SDK.18362 gyp verb find
> vs2017 - Found Microsoft.VisualStudio.Component.VC.Tools.x86.x64 gyp
> verb find vs2017 - Found Microsoft.VisualStudio.VC.MSBuild.Base gyp
> verb find vs2017 - Using this installation with Windows 10 SDK gyp
> verb find vs2017 using installation: C:\Program Files (x86)\Microsoft
> Visual Studio\2019\Enterprise gyp verb build/config.gypi creating
> config file gyp verb build/config.gypi writing out config file:
> C:\Users\d\Documents\project\app\node\_modules\node-sass\build\config.gypi
> gyp verb config.gypi checking for gypi file:
> C:\Users\d\Documents\project\app\node\_modules\node-sass\config.gypi
> gyp verb common.gypi checking for gypi file:
> C:\Users\d\Documents\project\app\node\_modules\node-sass\common.gypi
> gyp verb gyp gyp format was not specified; forcing "msvs" gyp info
> spawn C:\Python27\python.exe gyp info spawn args [ gyp info spawn args
> 'C:\Users\d\Documents\project\app\node\_modules\node-gyp\gyp\gyp\_main.py',
> gyp info spawn args 'binding.gyp', gyp info spawn args '-f', gyp
> info spawn args 'msvs', gyp info spawn args '-G', gyp info spawn
> args 'msvs\_version=2015', gyp info spawn args '-I', gyp info spawn
> args
>
> 'C:\Users\d\Documents\project\app\node\_modules\node-sass\build\config.gypi',
> gyp info spawn args '-I', gyp info spawn args
>
> 'C:\Users\d\Documents\project\app\node\_modules\node-gyp\addon.gypi',
> gyp info spawn args '-I', gyp info spawn args
>
> 'C:\Users\d\.node-gyp\12.13.1\include\node\common.gypi', gyp
> info spawn args '-Dlibrary=shared\_library', gyp info spawn args
>
> '-Dvisibility=default', gyp info spawn args
>
> '-Dnode\_root\_dir=C:\Users\d\.node-gyp\12.13.1', gyp info spawn
> args
>
> '-Dnode\_gyp\_dir=C:\Users\d\Documents\project\app\node\_modules\node-gyp',
> gyp info spawn args
>
> '-Dnode\_lib\_file=C:\Users\d\.node-gyp\12.13.1\<(target\_arch)\node.lib', gyp info spawn args
>
> '-Dmodule\_root\_dir=C:\Users\d\Documents\project\app\node\_modules\node-sass',
> gyp info spawn args '-Dnode\_engine=v8', gyp info spawn args
>
> '--depth=.', gyp info spawn args '--no-parallel', gyp info spawn
> args '--generator-output', gyp info spawn args
>
> 'C:\Users\d\Documents\project\app\node\_modules\node-sass\build',
> gyp info spawn args '-Goutput\_dir=.' gyp info spawn args ] gyp verb
> command build [] gyp verb build type Release gyp verb architecture x64
> gyp verb node dev dir C:\Users\d.node-gyp\12.13.1 gyp verb found
> first Solution file build/binding.sln gyp verb using MSBuild:
> C:\Program Files (x86)\Microsoft Visual
> Studio\2019\Enterprise\MSBuild\15.0\Bin\MSBuild.exe gyp info spawn
> C:\Program Files (x86)\Microsoft Visual
> Studio\2019\Enterprise\MSBuild\15.0\Bin\MSBuild.exe gyp info spawn
> args [ gyp info spawn args 'build/binding.sln', gyp info spawn args
> '/nologo', gyp info spawn args
>
> '/p:Configuration=Release;Platform=x64' gyp info spawn args ] gyp ERR!
> UNCAUGHT EXCEPTION gyp ERR! stack Error: spawn C:\Program Files
> (x86)\Microsoft Visual
> Studio\2019\Enterprise\MSBuild\15.0\Bin\MSBuild.exe ENOENT gyp ERR!
> stack at Process.ChildProcess.\_handle.onexit
> (internal/child\_process.js:264:19) gyp ERR! stack at onErrorNT
> (internal/child\_process.js:456:16) gyp ERR! stack at
> processTicksAndRejections (internal/process/task\_queues.js:80:21) gyp
> ERR! System Windows\_NT 10.0.18362 gyp ERR! command "C:\Program
> Files\nodejs\node.exe"
> "C:\Users\d\Documents\project\app\node\_modules\node-gyp\bin\node-gyp.js"
> "rebuild" "--verbose" "--libsass\_ext=" "--libsass\_cflags="
> "--libsass\_ldflags=" "--libsass\_library=" gyp ERR! cwd
> C:\Users\d\Documents\project\app\node\_modules\node-sass gyp ERR! node
> -v v12.13.1 gyp ERR! node-gyp -v v3.8.0 gyp ERR! This is a bug in `node-gyp`. gyp ERR! Try to update node-gyp and file an Issue if it
> does not help: gyp ERR!
>
> <https://github.com/nodejs/node-gyp/issues> Build failed with error
> code: 7 npm WARN angular-ng-autocomplete@1.1.12 requires a peer of
> @angular/common@^6.0.0-rc.0 || ^6.0.0 but none is installed. You must
> install peer dependencies yourself. npm WARN
> angular-ng-autocomplete@1.1.12 requires a peer of
> @angular/core@^6.0.0-rc.0 || ^6.0.0 but none is installed. You must
> install peer dependencies yourself. npm WARN
> angular-resize-event@1.2.1 requires a peer of @angular/core@^8.2.14
> but none is installed. You must install peer dependencies yourself.
> npm WARN angular-resize-event@1.2.1 requires a peer of rxjs@~6.5.4 but
> none is installed. You must install peer dependencies yourself.
>
> npm WARN angular-resize-event@1.2.1 requires a peer of core-js@^3.6.1
> but none is installed. You must install peer dependencies yourself.
>
> npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.4
> (node\_modules\fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY:
> Unsupported platform for fsevents@1.2.4: wanted
> {"os":"darwin","arch":"any"} (current:
> {"os":"win32","arch":"x64"})win32","arch":"x64"}) npm WARN optional
> SKIPPING OPTIONAL DEPENDENCY: node-sass@4.10.0
> (node\_modules\node-sass): npm WARN optional SKIPPING OPTIONAL
> DEPENDENCY: node-sass@4.10.0 postinstall: `node scripts/build.js` npm
> WARN optional SKIPPING OPTIONAL DEPENDENCY: Exit status 1
>
>
> added 83 packages from 166 contributors, removed 618 packages, updated
> 191 packages and audited 1597 packages in 52.38s found 2966
> vulnerabilities (2197 low, 11 moderate, 756 high, 2 critical) run
> `npm audit fix` to fix them, or `npm audit` for details
>
>
>
package.json
```
{
"name": "project",
"version": "0.0.1",
"author": "Ionic Framework",
"homepage": "http://ionicframework.com/",
"scripts": {
"ng": "ng",
"start": "ng serve",
"build": "ng build",
"test": "ng test",
"lint": "ng lint",
"e2e": "ng e2e"
},
"private": true,
"dependencies": {
"@angular/animations": "7.1.4",
"@angular/cdk": "7.1.0",
"@angular/common": "7.1.4",
"@angular/core": "7.1.4",
"@angular/forms": "7.1.4",
"@angular/http": "7.1.4",
"@angular/platform-browser": "7.1.4",
"@angular/platform-browser-dynamic": "7.1.4",
"@angular/router": "7.1.4",
"@fortawesome/fontawesome-free": "5.12.0",
"@ionic-native/core": "5.1.0",
"@ionic-native/file": "5.1.0",
"@ionic-native/file-path": "5.1.0",
"@ionic-native/file-transfer": "5.1.0",
"@ionic-native/in-app-browser": "5.5.1",
"@ionic-native/native-page-transitions": "5.5.1",
"@ionic-native/splash-screen": "5.1.0",
"@ionic-native/status-bar": "5.1.0",
"@ionic/angular": "4.0.0-beta.15",
"@kolkov/angular-editor": "^0.15.1",
"@progress/kendo-angular-buttons": "^4.0.0",
"@progress/kendo-angular-charts": "3.9.0",
"@progress/kendo-angular-dateinputs": "2 - 3",
"@progress/kendo-angular-dropdowns": "2 - 3",
"@progress/kendo-angular-excel-export": "1 - 2",
"@progress/kendo-angular-grid": "^3.14.4",
"@progress/kendo-angular-inputs": "2 - 5",
"@progress/kendo-angular-intl": "^1.0.0",
"@progress/kendo-angular-l10n": "^1.1.0",
"@progress/kendo-angular-popup": "^2.0.0",
"@progress/kendo-data-query": "^1.0.0",
"@progress/kendo-drawing": "^1.0.0",
"@progress/kendo-theme-default": "latest",
"angular-gridster2": "^7.2.0",
"angular-ng-autocomplete": "1.1.12",
"angular-resize-event": "1.2.1",
"cordova-android": "8.0.0",
"cordova-ios": "5.0.1",
"cordova-plugin-device": "2.0.2",
"cordova-plugin-ionic-webview": "2.3.1",
"cordova-plugin-splashscreen": "5.0.2",
"cordova-plugin-statusbar": "2.4.2",
"cordova-plugin-whitelist": "1.3.3",
"core-js": "^2.4.1",
"file-saver": "^2.0.2",
"hammerjs": "2.0.0",
"ionic": "4.6.0",
"jspdf": "^1.5.3",
"jszip": "^3.2.2",
"lodash": "4.17.15",
"moment": "2.24.0",
"mydatepicker": "2.6.6",
"ng-select": "1.0.2",
"ng2-ace-editor": "0.3.9",
"ngx-bootstrap": "5.3.2",
"ngx-color-picker": "^5.3.8",
"ngx-dropzone": "1.2.0",
"ngx-perfect-scrollbar": "7.2.1",
"release": "6.0.1",
"rxjs": "6.3.3",
"rxjs-compat": "^6.0.0",
"stream": "0.0.2",
"tslib": "1.9.0",
"zone.js": "0.8.26"
},
"devDependencies": {
"@angular-devkit/architect": "0.11.4",
"@angular-devkit/build-angular": "0.11.4",
"@angular-devkit/core": "7.1.4",
"@angular-devkit/schematics": "7.1.4",
"@angular/cli": "7.1.4",
"@angular/compiler": "7.1.4",
"@angular/compiler-cli": "7.1.4",
"@angular/language-service": "7.1.4",
"@ionic/angular-toolkit": "1.2.0",
"@types/node": "10.12.0",
"@types/jasmine": "2.8.8",
"@types/jasminewd2": "2.0.3",
"codelyzer": "4.5.0",
"jasmine-core": "2.99.1",
"jasmine-spec-reporter": "4.2.1",
"karma": "3.1.4",
"karma-chrome-launcher": "2.2.0",
"karma-coverage-istanbul-reporter": "2.0.1",
"karma-jasmine": "1.1.2",
"karma-jasmine-html-reporter": "0.2.2",
"protractor": "5.4.0",
"ts-node": "7.0.0",
"tslint": "5.12.0",
"typescript": "3.1.6",
"@svgdotjs/svg.js": "3.0.16"
},
"description": "An Ionic project",
"cordova": {
"plugins": {
"cordova-plugin-whitelist": {},
"cordova-plugin-statusbar": {},
"cordova-plugin-device": {},
"cordova-plugin-splashscreen": {},
"cordova-plugin-ionic-webview": {
"ANDROID_SUPPORT_ANNOTATIONS_VERSION": "27.+"
},
"cordova-plugin-ionic-keyboard": {},
"com.telerik.plugins.nativepagetransitions": {},
"cordova-plugin-inappbrowser": {}
},
"platforms": [
"android",
"ios"
]
}
}
```
npm version: 6.14.4
|
2020/05/25
|
[
"https://Stackoverflow.com/questions/61996756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3766841/"
] |
Short answer: **Avoid global variables!**
In your `delete` function you set the value of the global variable `temp_node`.
Then you call the function `count`. In `count` you also use the global variable `temp_node`. You change it until it has the value NULL.
Then back in the `delete` function, you do:
```
temp_node = temp_node->next;
```
Dereference of a NULL pointer! That is real bad and crashes your program.
So to start with: **Get rid of all global variables**
As an example, your `count` function should be:
```
int count(NODE* p)
{
int count = 0;
while (p != NULL) {
count++;
p = p->next;
}
return count;
}
```
and call it like: `counter = count(first_node);`
And your `delete` function could look like:
```
NODE* delete(NODE* first_node) { ... }
```
That said ...
The principle in your `delete` function is wrong. You don't need to count the number of nodes. Simply iterate until you reach the end, i.e. `next` is NULL.
Further - why do you `malloc` memory in the `delete` function? And why do you overwrite the pointer just after `malloc`? Then you have a memory leak.
```
temp_node = (NODE*)malloc(sizeof(NODE)); // WHY??
temp_node = first_node; // UPS... temp_node assigned new value.
// So malloc'ed memory is lost.
```
Now - what happens when you find the matching node:
```
if (flightno == data) {
temp_node = temp_node->next;
first_node = temp_node; // UPS.. first_node changed
printf("\nFlight log deleted.\n");
}
```
Then you change first\_node. So all nodes **before** the current node is lost! That's not what you want. You only want to change `first_node` when the match is on the very first node in the linked list.
Then: `for (j = 0; j <= counter; j++)` --> `for (j = 0; j < counter; j++)` But as I said before... don't use this kind of loop.
Use something like:
```
while (temp_node != NULL)
{
...
temp_node = temp_node->next;
}
```
BTW: Why do you do a print out in every loop? Move the negative print out outside the loop.
A `delete` function can be implemented in many ways. The below example is not the most compact implementation but it's pretty simple to understand.
```
NODE* delete(NODE* head, int value_to_match)
{
NODE* p = head;
if (p == NULL) return NULL;
// Check first node
if (p->data == value_to_match)
{
// Delete first node
head = head->next; // Update head to point to next node
free(p); // Free (aka delete) the node
return head; // Return the new head
}
NODE* prev = p; // prev is a pointer to the node before
p = p->next; // the node that p points to
// Check remaining nodes
while(p != NULL)
{
if (p->data == value_to_match)
{
prev->next = p->next; // Take the node that p points to out
// of the list, i.e. make the node before
// point to the node after
free(p); // Free (aka delete) the node
return head; // Return head (unchanged)
}
prev = p; // Move prev and p forward
p = p->next; // in the list
};
return head; // Return head (unchanged)
}
```
and call it like:
```
head = delete(head, SOME_VALUE);
```
|
You are probably making an extra loop in your delete function. You should check whether you are deleting a node which isn't part of your linked list.
| 11,629
|
1,664,587
|
first time poster.
I'm turning to my first question on Stack Overflow because I've found few resources while trying to find an answer. I'm looking to execute Selenium Python tests from a C# application. I don't want to have to compile the C# Selenium tests each time; I want to take advantage of IronPython scripting for dynamic Selenium testing. (Note: I have little Python or ScriptEngine, et al. experience.)
Selenium outputs unit tests in python in the following form:
```
from selenium import selenium
import unittest
class TestBlah(unittest.TestCase):
def setUp(self):
self.selenium = selenium(...)
self.selenium.start()
def test_blah(self):
sel = self.selenium
sel.open("http://www.google.com/webhp")
sel.type("q", "hello world")
sel.click("btnG")
sel.wait_for_page_to_load(5000)
self.assertEqual("hello world - Google Search", sel.get_title())
print "done"
def tearDown(self):
self.selenium.stop()
if __name__ == "__main__":
unittest.main()
```
I can get this to run, no problem, from the command line using ipy.exe:
```
ipy test_google.py
```
And I can see Selenium Server fire up a firefox browser instance and run the test.
I cannot achieve the same result using the ScriptEngine et al. API in C# (.NET 3.5), and I think the problem centers on not being able to execute the "main" code, which I'm guessing is the following:
```
if __name__ == "__main__":
unittest.main()
```
I've tried `engine.ExecuteFile()`, `engine.CreateScriptSourceFromString()`/`source.Execute()`, and `engine.CreateScriptSourceFromFile()`/`source.Execute()`. I tried `scope.SetVariable("__name__", "__main__")`. I do get some success when I comment out the `if __name__` part of the py file and call `engine.CreateScriptSourceFromString("unittest.main(module=None)")` after `engine.Runtime.ExecuteFile()` is called on the py file. I've tried storing the results in Python and accessing them via `scope.GetVariable()`. I've also tried writing a Python function I could call from C# to execute the unit tests.
(engine is an instance of ScriptEngine, source an instance of ScriptSource, etc.)
My ignorance of Python, ScriptEngine, or the unittest module could easily be behind my troubles. Has anyone had any luck executing python unittests using the ScriptEngine, etc API in C#? Has anyone successfully executed "main" code from ScriptEngine?
Additionally, I've read that unittest has a test runner that will help in accessing the errors via a TestResult object. I believe the syntax is the following. I haven't gotten here yet, but know I'll need to harvest the results.
```
unittest.TextTestRunner(verbosity=2).run(unittest.main())
```
Thanks in advance. I figured it'd be better to have more details than less. =P
|
2009/11/03
|
[
"https://Stackoverflow.com/questions/1664587",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/201308/"
] |
Looking at the [source code](http://ironpython.codeplex.com/SourceControl/ListDownloadableCommits.aspx) to the IronPython Console (ipy.exe), it looks like it eventually boils down to calling `ScriptSource.ExecuteProgram()`. You can get a `ScriptSource` from any of the various `ScriptEngine.CreateScriptSourceFrom*` methods.
For example:
```
import clr
clr.AddReference("IronPython")
from IronPython.Hosting import Python
engine = Python.CreateEngine()
src = engine.CreateScriptSourceFromString("""
if __name__ == "__main__":
print "this is __main__"
""")
src.ExecuteProgram()
```
Running this will print "this is `__main__`".
|
Try the following:
```
unittest.main(module=__name__)
```
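As a hedged sketch (my adaptation, not part of the original answer): loading the suite explicitly and driving it with `TextTestRunner` avoids `unittest.main()`, which raises `SystemExit` and is awkward to invoke from a host process. `TestBlah` here is a trivial stand-in for the Selenium-generated class:

```python
import unittest

# Trivial stand-in for the Selenium-generated test case from the question.
class TestBlah(unittest.TestCase):
    def test_blah(self):
        self.assertEqual(1 + 1, 2)

# Load the tests explicitly instead of relying on unittest.main(),
# which calls sys.exit() and is awkward to drive from hosting code.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestBlah)
result = unittest.TextTestRunner(verbosity=2).run(suite)

print(result.wasSuccessful())          # True if every test passed
print(result.failures, result.errors)  # harvestable TestResult details
```

The returned `TestResult` object carries the failure details the question mentions wanting to harvest from the C# side.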
| 11,630
|
73,675,635
|
I have 7 Python dictionaries, each named after the format `songn`; for example `song1`, `song2`, etc. Each dictionary includes the following information about a song: `name`, `duration`, `artist`. I created a list of songs, called `playlist_full`, of the form `[song1, song2, song3, ..., song7]`.
Here is my code:
```py
song1 = {"name": "Wake Me Up", "duration": 3.5, "artist": "Wham"}
song2 = {"name": "I Want Your...", "duration": 4.3, "artist": "Wham"}
song3 = {"name": "Thriller", "duration": 3.8, "artist": "MJ"}
song4 = {"name": "Monster", "duration": 3.5, "artist": "Rhianna and Eminem"}
song5 = {"name": "Poison", "duration": 5.0, "artist": "Bel Biv Devoe"}
song6 = {"name": "Classic", "duration": 2.5, "artist": "MKTO"}
song7 = {"name": "Edge of Seventeen", "duration": 5.3, "artist": "Stevie Nicks"}
playlist_full = []
for i in range(1, 8):
song_i = "song"+str(i)
playlist_full.append(song_i)
```
Now I am trying to use an item in the `playlist_full` list to in turn get the name of the song in the corresponding dictionary. For example, to see the name of `song3`, I would like to run:
```py
playlist_full[2].get("name")
```
The problem is that while `playlist_full[2]` is `"song3"`, Python recognizes it only as a *string*, and I need Python to realize that that string is also the name of a dictionary. What code will allow me to use that string as the name of the corresponding dictionary?
Edit:
Based on the answer by @rob-g, the following additional lines of code produced the dictionary of songs that I wanted, as well as the method of accessing the name of `song3`:
```py
playlist_full = [eval(song) for song in playlist_full]
print(playlist_full[2]["name"])
```
|
2022/09/10
|
[
"https://Stackoverflow.com/questions/73675635",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6267463/"
] |
You could use [eval()](https://www.w3schools.com/python/ref_func_eval.asp) like:
```py
eval(playlist_full[2]).get("name")
```
which does exactly what you want: it evaluates the string as Python code.
It's not great practice, though. It would be better/safer to store the songs themselves in a dictionary or list, holding direct references rather than names that must be eval'd.
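A minimal sketch of that safer approach (my illustration, not the asker's code): keep references to the dictionaries themselves in the list, so no `eval()` is ever needed:

```python
song1 = {"name": "Wake Me Up", "duration": 3.5, "artist": "Wham"}
song2 = {"name": "I Want Your...", "duration": 4.3, "artist": "Wham"}
song3 = {"name": "Thriller", "duration": 3.8, "artist": "MJ"}

# Store the dict objects themselves, not the strings "song1", "song2", ...
playlist_full = [song1, song2, song3]

print(playlist_full[2].get("name"))  # Thriller
```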
|
You can use [`locals()`](https://docs.python.org/3/library/functions.html#locals) built-in function to do that:
```py
for i in range(1, 8):
    playlist_full.append(locals()[f'song{i}'])
```
| 11,631
|
67,180,248
|
How can I get the text of a button clicked and return it to python? The button is selected using a mouse-click generated by the user in the Selenium WebDriver browser.
I'm trying to do as follows:
```
x=driver.execute_script("$(document).click(function(event){var text= $(event.target).text(); return text})")
```
but when I print the contents of `x` it returns None. When I try to use an alert to display the contents of `text`, it returns the correct contents but I want to return it in Python.
What am I doing wrong?
|
2021/04/20
|
[
"https://Stackoverflow.com/questions/67180248",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12296610/"
] |
Once you click on the button, you can extract its text only if it's still present and visible in the DOM; otherwise you can't.
|
```
# Identify element
element = driver.find_element_by_id("id")
# Click element
element.click()
# Get text
print("Text is: " + element.text)
# Or
print("Text is: " + element.get_attribute("innerHTML")
```
| 11,636
|
50,639,390
|
I am trying to write a music program in Python that takes some music written by the user in a text file and turns it into a midi. I'm not particularly experienced with python at this stage so I'm not sure what the reason behind this issue is. I am trying to write the source file parser for the program and part of this process is to create a list containing all the lines of the text file and breaking each line down into its own list to make them easier to work with. I'm successfully able to do that, but there is a problem.
I want the code to **ignore** lines that are only whitespace (So the user can make their file at least kind of readable without having all the lines thrown together one on top of the other), but I can't seem to figure out how to do that. I tried doing this
```
with open(path, "r") as infile:
for row in infile:
if len(row):
srclines.append(row.split())
```
And this **does** work as far as creating the list of lines and separating each word goes, BUT it still appends the empty lines that are only whitespace... I confirmed this by doing this
```
for entry in srclines:
print entry
```
Which gives, for example
```
['This', 'is']
[]
['A', 'test']
```
With the original text being
```
This is
A test
```
But strangely, if during the printing stage I do another len() check then the empty lines are actually **ignored** like I want, and it looks like this
```
['This', 'is']
['A', 'test']
```
What is the cause of this? Does this mean I can only go over the list and remove empty entries after I generate it? Or am I just doing the line-import code wrong? By the way, I am using Python 3.6 to test this code.
|
2018/06/01
|
[
"https://Stackoverflow.com/questions/50639390",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6283375/"
] |
`row` contains a newline, so it's not empty. But `row.split()` doesn't find any non-whitespace characters, so it returns an empty list.
Use
```
if len(row.strip()):
```
to ignore the newline (and any other leading/trailing spaces).
Or more simply:
```
if row.strip():
```
since an empty string is falsy.
|
Try creating a [list comprehension](https://www.python-course.eu/python3_list_comprehension.php):
```
with open('d.txt', "r") as infile:
print([i.strip().split() for i in infile if i.strip()])
```
Output:
```
[['This', 'is'], ['A', 'test']]
```
| 11,637
|
14,633,021
|
I have an AppHarbor app that I'm using as an external service which will get requested by my other servers which use Google App Engine (python). The appharbor app is basically getting pinged a lot to process some data that I send it.
Because I'll be constantly pinging the service, and time is important, is it possible to reference my appharbor app through its IP address and not the hostname? Basically I want to eliminate having to do DNS lookups and speed up the response.
I'm using Google App Engine's urlfetch (<https://developers.google.com/appengine/docs/python/urlfetch/overview>) to do the request. Is caching the ip address something urlfetch is already doing under the covers? If not, is it possible to do so?
|
2013/01/31
|
[
"https://Stackoverflow.com/questions/14633021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/361897/"
] |
I doubt that DNS lookups will be your bottleneck, but in any case, as far as I can see, DNS lookups are cached by the system (for up to the TTL).
|
Sign up for the AppEngine Sockets Trusted Tester ([here](https://docs.google.com/a/postmaster.io/spreadsheet/viewform?formkey=dF9QR3pnQ2pNa0dqalViSTZoenVkcHc6MQ#gid=0)) and use the normal python:
```
socket.gethostbyname(...)
```
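A hedged sketch of resolving once up front and reusing the address (`"localhost"` is a placeholder for the real AppHarbor hostname; note that when targeting an IP directly over HTTP, the `Host` header must still carry the original hostname for virtually-hosted apps):

```python
import socket

host = "localhost"  # placeholder for the AppHarbor app's hostname
ip = socket.gethostbyname(host)  # one DNS lookup, done at startup
print(ip)
# Subsequent requests can reuse `ip` instead of resolving `host` again.
```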
| 11,639
|
28,690,325
|
I have a problem with this piece of code in Python 2.7: it takes approximately 60 seconds to run on an object with slightly more than 70,000 items. How does it work? It gets an object containing paths to other objects and converts them to ASCII strings. I think the loops are the reason it is so slow.
```
def createPath(self, path, NameOfFile ):
temp = []
for j in range( path.shape[0] ):
rr = path[j][0]
obj = NameOfFile[rr]
string = ''.join(chr(i) for i in obj[:])
string = string.replace("aaaa","bbbb")
temp.append(string)
return ( np.array(temp) )
```
It is not my own code; I found it on the Web. So my question is: how do I make this piece of code faster? I don't have much experience with Python, but maybe there are some useful libraries or tricks that could help. I appreciate all help; any ideas are welcome.
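For what it's worth, here is a hedged plain-Python sketch of one common speed-up (my illustration, with the numpy parts dropped): building each string with `bytes(...).decode()` does the per-character work in C instead of a Python-level `join` over `chr(i)`. It assumes the code points are single-byte (Latin-1), as the original data appears to be:

```python
def create_path_fast(path, name_of_file):
    # path: sequence of rows whose first entry indexes name_of_file;
    # name_of_file: maps that index to a sequence of byte values.
    out = []
    for row in path:
        codes = name_of_file[row[0]]
        s = bytes(bytearray(codes)).decode("latin-1")  # C-speed conversion
        out.append(s.replace("aaaa", "bbbb"))
    return out

demo = create_path_fast([[0]], {0: [97, 97, 97, 97, 98]})
print(demo)  # ['bbbbb']
```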
|
2015/02/24
|
[
"https://Stackoverflow.com/questions/28690325",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4473386/"
] |
The ID of an element must be unique. When you use an ID selector, it returns only the first element with that ID, so all the click handlers are added to the first button.
Use classes and event delegation instead:
```
$(document).ready(function () {
$("#image-btn").click(function () {
var $imageElement = $("<div class='image_element' ><div class='image_holder' align='center'><input type='image' src='{{URL::asset('images/close-icon.png')}}' name='closeStory' class='closebtn' width='22px'><button type='button' class='img_btn' >Upload Image</button></div></div>");
$("#element-holder").append($imageElement);
$imageElement.fadeIn(1000);
});
$("#element-holder").on('click', '.closebtn', function(){
$(this).closest('.image_element').remove();
})
});
```
Demo: [Fiddle](http://jsfiddle.net/arunpjohny/mnxtL5k7/)
More Read:
* [Event binding on dynamically created elements?](https://stackoverflow.com/questions/203198/event-binding-on-dynamically-created-elements)
|
Use `$(this)` instead of `$imageElement`:
```
$(document).ready(function(){
$("#image-btn").click(function(){
var $imageElement = $("<div class='image_element' id='image-element'><div class='image_holder' align='center'><input type='image' src='{{URL::asset('images/close-icon.png')}}' name='closeStory' class='closebtn' id='close-img-btn' width='22px'><button type='button' class='img_btn' id='img-btn'>Upload Image</button></div></div>");
$("#element-holder").append($imageElement);
$imageElement.fadeIn(1000);
$("#close-img-btn").click(function(){
$(this).remove();
});
});
});
```
| 11,642
|
17,438,852
|
I want to pass in a string to my python script which contains escape sequences such as: `\x00` or `\t`, and spaces.
However when I pass in my string as:
```
some string\x00 more \tstring
```
python treats my string as a raw string and when I print that string from inside the script, it prints the string literally and it does not treat the `\` as an escape sequence.
i.e. it prints exactly the string above.
**UPDATE:(AGAIN)**
I'm using *python 2.7.5* to reproduce, create a script, lets call it `myscript.py`:
```
import sys
print(sys.argv[1])
```
now save it and call it from the windows command prompt as such:
```
c:\Python27\python.exe myscript.py "abcd \x00 abcd"
```
the result I get is:
```
> 'abcd \x00 abcd'
```
P.S in my actual script, I am using option parser, but both have the same effect. Maybe there is a parameter I can set for option parser to handle escape sequences?
|
2013/07/03
|
[
"https://Stackoverflow.com/questions/17438852",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2059819/"
] |
The string you receive in `sys.argv[1]` is exactly what you typed on the command line. Its backslash sequences are left intact, not interpreted.
To interpret them, follow [this answer](https://stackoverflow.com/questions/4020539/process-escape-sequences-in-a-string-in-python): basically use `.decode('string_escape')`.
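On Python 3, `str` no longer has `.decode()`, so a roughly equivalent sketch (my adaptation of that answer, not the original Python 2 call) uses `codecs.decode` with the `unicode_escape` codec. Note that `unicode_escape` can mangle non-ASCII input, so it is only safe for ASCII arguments like these:

```python
import codecs

raw = r"abcd \x00 abcd\tend"  # what sys.argv[1] actually contains
interpreted = codecs.decode(raw, "unicode_escape")

print(repr(interpreted))  # the \x00 and \t are now real characters
```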
|
I don't know that you can parse entire strings without writing a custom parser, but optparse supports [sending inputs in different formats](http://docs.python.org/2/library/optparse.html#standard-option-types) (hexadecimal, binary, etc.).
```
from optparse import OptionParser
parser = OptionParser()
parser.add_option("-n", type="int", dest="num")
options, args = parser.parse_args()
print options
```
then when you run
```
C:\Users\John\Desktop\script.py -n 0b10
```
you get an output of
```
{'num': 2}
```
The reason I say you'll have to implement a custom parser to make this work is that it isn't Python that changes the input but rather [something](https://stackoverflow.com/questions/5994827/how-to-read-argv-value-without-escape-the-string) [the shell does](https://stackoverflow.com/questions/9590838/python-escape-special-characters-in-sys-argv). Python might have a built-in module to handle this, but I am not aware of one if it exists.
| 11,645
|
61,512,822
|
Running in Jupyter-notebook
Python version 3.6
Pyspark version 2.4.5
Hadoop version 2.7.3
I essentially have the same issue described [Unable to write spark dataframe to a parquet file format to C drive in PySpark](https://stackoverflow.com/questions/59220832/unable-to-write-spark-dataframe-to-a-parquet-file-format-to-c-drive-in-pyspark/59223439#59223439?newreg=6dfdf1ebd3c94e118056c86a8691342a)
Steps I have taken:
1. Copied hadoop-2.7.1 binaries offered at <https://github.com/steveloughran/winutils> to folder in C root directory.
2. Created a HADOOP\_HOME environment variable and pointed it to the directory mentioned above (i.e. C:\hadoop-2.7.1)
below is the command I am trying to run and the error I am getting
```
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
spark = SparkSession.builder.getOrCreate()
df_spark_scaled.write.format('parquet').save('ExoplanetSparkDF_ETL.parquet')
```
```
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-8-3c18766c3167> in <module>
4 #os.environ['HADOOP_HOME'] = "C:\hadoop-2.7.1"
5 #sys.path.append("C:\hadoop-2.7.1\bin")
----> 6 df_spark_scaled.write.format('parquet').save('ExoplanetSparkDF_ETL.parquet')
~\anaconda3\lib\site-packages\pyspark\sql\readwriter.py in save(self, path, format, mode, partitionBy, **options)
737 self._jwrite.save()
738 else:
--> 739 self._jwrite.save(path)
740
741 @since(1.4)
~\anaconda3\lib\site-packages\py4j\java_gateway.py in __call__(self, *args)
1255 answer = self.gateway_client.send_command(command)
1256 return_value = get_return_value(
-> 1257 answer, self.gateway_client, self.target_id, self.name)
1258
1259 for temp_arg in temp_args:
~\anaconda3\lib\site-packages\pyspark\sql\utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
~\anaconda3\lib\site-packages\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
Py4JJavaError: An error occurred while calling o383.save.
: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:80)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:127)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:75)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Unknown Source)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 9.0 failed 1 times, most recent failure: Lost task 0.0 in stage 9.0 (TID 9, localhost, executor driver): ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:248)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:390)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:349)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:151)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:120)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:108)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:236)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1891)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1878)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1878)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:927)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:927)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2112)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2061)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2050)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:738)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:167)
... 32 more
Caused by: ExitCodeException exitCode=-1073741515:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
at org.apache.hadoop.util.Shell.run(Shell.java:479)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:866)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:849)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:225)
at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:209)
at org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:307)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:296)
at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:328)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:398)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:461)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
at org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:248)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:390)
at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:349)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:37)
at org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat$$anon$1.newInstance(ParquetFileFormat.scala:151)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:120)
at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:108)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:236)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:123)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
... 1 more
```
|
2020/04/29
|
[
"https://Stackoverflow.com/questions/61512822",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13436683/"
] |
You have to use `$sum` to sum the size of each array like this
```js
{
"$group": {
"_id": {
"vehicleid": "$vehicleid",
"date": "$date"
},
"count": { "$sum": { "$size": "$points" } }
}
}
```
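For intuition, here is a hedged plain-Python sketch (hypothetical documents, my illustration) of what grouping with `$sum` over `$size` computes: per `(vehicleid, date)` key, the total number of points across all matching documents:

```python
from collections import defaultdict

docs = [
    {"vehicleid": 1, "date": "2020-04-29", "points": [[0, 0], [1, 1]]},
    {"vehicleid": 1, "date": "2020-04-29", "points": [[2, 2]]},
    {"vehicleid": 2, "date": "2020-04-29", "points": []},
]

counts = defaultdict(int)
for d in docs:
    # $sum of $size: add each document's array length to its group total
    counts[(d["vehicleid"], d["date"])] += len(d["points"])

print(dict(counts))  # {(1, '2020-04-29'): 3, (2, '2020-04-29'): 0}
```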
|
**You can follow this code:**
```
$group: {
    _id: {
        "vehicleid": "$vehicleid",
        "date": "$date"
    },
    count: { $sum: 1 }
}
```
| 11,648
|
33,365,055
|
Hi I am using pandas to convert a column to month.
When I read my data they are objects:
```
Date object
dtype: object
```
So I first convert them to datetime and then try to extract the month:
```
import pandas as pd
file = '/pathtocsv.csv'
df = pd.read_csv(file, sep = ',', encoding='utf-8-sig', usecols= ['Date', 'ids'])
df['Date'] = pd.to_datetime(df['Date'])
df['Month'] = df['Date'].dt.month
```
Also if that helps:
```
In [10]: df['Date'].dtype
Out[10]: dtype('O')
```
So, the error I get is like this:
```
/Library/Frameworks/Python.framework/Versions/2.7/bin/User/lib/python2.7/site-packages/pandas/core/series.pyc in _make_dt_accessor(self)
2526 return maybe_to_datetimelike(self)
2527 except Exception:
-> 2528 raise AttributeError("Can only use .dt accessor with datetimelike "
2529 "values")
2530
AttributeError: Can only use .dt accessor with datetimelike values
```
EDITED:
Date columns are like this:
```
0 2014-01-01
1 2014-01-01
2 2014-01-01
3 2014-01-01
4 2014-01-03
5 2014-01-03
6 2014-01-03
7 2014-01-07
8 2014-01-08
9 2014-01-09
```
Do you have any ideas?
Thank you very much!
|
2015/10/27
|
[
"https://Stackoverflow.com/questions/33365055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4738736/"
] |
Convert the date into the proper format so that datetime operations can be performed easily:
```
df_Time_Table["Date"] = pd.to_datetime(df_Time_Table["Date"])
# Calculate year
df_Time_Table['Year'] = df_Time_Table['Date'].dt.strftime('%Y')
```
|
When you write
```
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
df['Date'] = df['Date'].dt.strftime('%m/%d')
```
that fixes it: `errors='coerce'` turns unparseable values into `NaT` instead of raising.
| 11,651
|
61,036,609
|
As illustrated below, I am looking for an easy way to combine two or more heat-maps into one, i.e., a heat-map with multiple colormaps.
The idea is to break each cell into multiple sub-cells. I couldn't find any Python library with such a visualization function already implemented. Does anybody know of something (at least) close to this?
[](https://i.stack.imgur.com/HmJM5.png)
|
2020/04/05
|
[
"https://Stackoverflow.com/questions/61036609",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3625770/"
] |
The heatmaps can be drawn column by column. White gridlines can mark the cell borders.
```py
import numpy as np
from matplotlib import pyplot as plt
a = np.random.random((5, 6))
b = np.random.random((5, 6))
vmina = a.min()
vminb = b.min()
vmaxa = a.max()
vmaxb = b.max()
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, figsize=(10,3), gridspec_kw={'width_ratios':[1,1,2]})
ax1.imshow(a, cmap='Reds', interpolation='nearest', origin='lower', vmin=vmina, vmax=vmaxa)
ax1.set_xticks(np.arange(.5, a.shape[1]-1, 1), minor=True)
ax1.set_yticks(np.arange(.5, a.shape[0]-1, 1), minor=True)
ax2.imshow(b, cmap='Blues', interpolation='nearest', origin='lower', vmin=vminb, vmax=vmaxb)
ax2.set_xticks(np.arange(.5, a.shape[1]-1, 1), minor=True)
ax2.set_yticks(np.arange(.5, a.shape[0]-1, 1), minor=True)
for i in range(a.shape[1]):
ax3.imshow(a[:,i:i+1], extent=[2*i-0.5, 2*i+0.5, -0.5, a.shape[0]-0.5 ],
cmap='Reds', interpolation='nearest', origin='lower', vmin=vmina, vmax=vmaxa)
ax3.imshow(b[:,i:i+1], extent=[2*i+0.5, 2*i+1.5, -0.5, a.shape[0]-0.5 ],
cmap='Blues', interpolation='nearest', origin='lower', vmin=vminb, vmax=vmaxb)
ax3.set_xlim(-0.5, 2*a.shape[1] -0.5 )
ax3.set_xticks(np.arange(1.5, 2*a.shape[1]-1, 2), minor=True)
ax3.set_yticks(np.arange(.5, a.shape[0]-1, 1), minor=True)
for ax in (ax1, ax2, ax3):
ax.grid(color='white', which='minor', lw=2)
ax.set_xticks([])
ax.set_yticks([])
ax.tick_params(axis='both', which='both', size=0)
plt.show()
```
[](https://i.stack.imgur.com/FvxOx.png)
PS: If brevity were an important factor, all embellishments, details and comparisons could be left out:
```py
# import numpy as np
# from matplotlib import pyplot as plt
a = np.random.random((5, 6))
b = np.random.random((5, 6))
norma = plt.Normalize(vmin=a.min(), vmax=a.max())
normb = plt.Normalize(vmin=b.min(), vmax=b.max())
for i in range(a.shape[1]):
plt.imshow(a[:, i:i + 1], extent=[2*i-0.5, 2*i+0.5, -0.5, a.shape[0]-0.5], cmap='Reds', norm=norma)
plt.imshow(b[:, i:i + 1], extent=[2*i+0.5, 2*i+1.5, -0.5, a.shape[0]-0.5], cmap='Blues', norm=normb)
plt.xlim(-0.5, 2*a.shape[1]-0.5)
# plt.show()
```
|
You can restructure your arrays to have empty columns between you actual data then create a masked array to plot heatmaps with transparency. Here's one method (maybe not the best) to add empty columns:
```
arr1 = np.arange(20).reshape(4, 5)
arr2 = np.arange(20, 0, -1).reshape(4, 5)
filler = np.nan * np.zeros((4, 5))
c1 = np.vstack([arr1, filler]).T.reshape(10, 4).T
c2 = np.vstack([filler, arr2]).T.reshape(10, 4).T
c1 = np.ma.masked_array(c1, np.isnan(c1))
c2 = np.ma.masked_array(c2, np.isnan(c2))
plt.pcolormesh(c1, cmap='bone')
plt.pcolormesh(c2, cmap='jet')
```
You can also use `np.repeat` and mask every other column as @JohanC notes
```
c1 = np.ma.masked_array(np.repeat(arr1, 2, axis=1), np.tile([True, False], arr1.size))
c2 = np.ma.masked_array(np.repeat(arr2, 2, axis=1), np.tile([False, True], arr2.size))
```
[](https://i.stack.imgur.com/jTJXn.png)
| 11,661
|
14,459,258
|
Games from Valve use the following [data format](http://media.steampowered.com/apps/440/scripts/items/items_game.9aee6b38c52d8814124b8fbfc8d13e7b1faa944f.txt)
```
"name1"
{
"name2" "value2"
"name3"
{
"name4" "value4"
}
}
```
Does this format have a name or is it just self made?
Can I parse it in python?
|
2013/01/22
|
[
"https://Stackoverflow.com/questions/14459258",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1670759/"
] |
I'm not sure that it has a name, but it seems very straightforward: a node consists of a key and either a value or a set of values that are themselves either plain strings or sets of key-value pairs. It would be trivial to parse recursively, and maps cleanly to a structure of nested python dictionaries.
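To make that "trivial to parse recursively" claim concrete, here is a minimal sketch of such a parser (my code, not Valve's; it assumes quoted tokens with no escapes or comments, which covers the example above):

```python
import re

def parse_vdf(text):
    """Parse a KeyValues-style string into nested dicts (no escapes/comments)."""
    # Each token is either a quoted string or a lone brace.
    pairs = re.findall(r'"([^"]*)"|([{}])', text)
    tokens = [brace if brace else quoted for quoted, brace in pairs]
    pos = 0

    def parse_block():
        nonlocal pos
        result = {}
        while pos < len(tokens) and tokens[pos] != '}':
            key = tokens[pos]; pos += 1
            if tokens[pos] == '{':          # nested block of key/value pairs
                pos += 1
                result[key] = parse_block()
                pos += 1                    # consume the closing '}'
            else:                           # plain "key" "value" pair
                result[key] = tokens[pos]; pos += 1
        return result

    return parse_block()

text = '"name1" { "name2" "value2" "name3" { "name4" "value4" } }'
print(parse_vdf(text))
# → {'name1': {'name2': 'value2', 'name3': {'name4': 'value4'}}}
```

A real implementation would also need to handle `//` comments, `\"` escapes, and `#include`/`#base` directives described in the Valve documentation.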
|
Looks like their own format, called Valve Data Format. Documentation [here](https://developer.valvesoftware.com/wiki/KeyValues), I don't know if there is a parser available in python, but here is a question about [parsing it in php](https://stackoverflow.com/questions/9301511/parsing-valve-data-format-files-in-php)
| 11,662
|
50,917,003
|
I'm trying to create a simple program to convert a binary number, for example `111100010` to decimal `482`. I've done the same in Python, and it works, but I can't find what I'm doing wrong in C++.
When I execute the C++ program, I get `-320505788`. What have I done wrong?
This is the Python code:
```python
def digit_count(bit_number):
found = False
count = 0
while not found:
division = bit_number / (10 ** count)
if division < 1:
found = True
else:
count += 1
return count
def bin_to_number(bit_number):
digits = digit_count(bit_number)
number = 0
for i in range(digits):
exp = 10 ** i
if exp < 10:
digit = int(bit_number % 10)
digit = digit * (2 ** i)
number += digit
else:
digit = int(bit_number / exp % 10)
digit = digit * (2 ** i)
number += digit
print(number)
return number
bin_to_convert = 111100010
bin_to_number(bin_to_convert)
# returns 482
```
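For what it's worth (an aside, not part of the original question), Python can do the whole digit-based conversion in one step by treating the stored decimal digits as a base-2 string:

```python
n = 111100010           # the binary digits stored as a decimal int, as in the question
print(int(str(n), 2))   # → 482
```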
This is the C++ code:
```cpp
#include <iostream>
#include <cmath>
using namespace std;
int int_length(int bin_number);
int bin_to_int(int bin_number);
int main()
{
cout << bin_to_int(111100010) << endl;
return 0;
}
int int_length(int bin_number){
bool found = false;
int digit_count = 0;
while(!found){
int division = bin_number / pow(10, digit_count);
if(division < 1){
found = true;
}
else{
digit_count++;
}
}
return digit_count;
}
int bin_to_int(int bin_number){
int number_length = int_length(bin_number);
int number = 0;
for(int i = 0; i < number_length; i++){
int e = pow(10, i);
int digit;
if(e < 10){
digit = bin_number % 10;
digit = digit * pow(2, i);
number = number + digit;
}
else{
if((e % 10) == 0){
digit = 0;
}
else{
digit = bin_number / (e % 10);
}
digit = digit * pow(2, i);
number = number + digit;
}
}
return number;
}
```
|
2018/06/18
|
[
"https://Stackoverflow.com/questions/50917003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5533085/"
] |
The problem is that you converted this fragment of Python code
```
else:
digit = int(bit_number / exp % 10)
digit = digit * (2 ** i)
number += digit
```
into this:
```
else{
if((e % 10) == 0){
digit = 0;
}
else{
digit = bin_number / (e % 10);
}
digit = digit * pow(2, i);
number = number + digit;
}
```
In other words, you are trying to apply `/` *after* applying `%`, and protect from division by zero in the process.
This is incorrect: you should apply them the other way around, like this:
```
else{
digit = (bit_number / e) % 10;
digit = digit * pow(2, i);
number = number + digit;
}
```
[Demo 1](https://ideone.com/Oq6ZVk)
Note that the entire conditional is redundant - you can remove it from your `for` loop:
```
for(int i = 0; i < number_length; i++){
int e = pow(10, i);
int digit = (bit_number / e) % 10;
digit = digit * pow(2, i);
number = number + digit;
}
```
[Demo 2](https://ideone.com/7f4wsC)
|
One problem is that `111100010` in `main` is not a [binary literal](https://en.cppreference.com/w/cpp/language/integer_literal) — it is the decimal value 111100010, not 482. If you are going to use a binary literal (`0b111100010`) there is no need for any of your code: just write it out, since an integer is an integer regardless of its representation.
If you are trying to process a binary string, you could do something like this instead
```
#include <iostream>
#include <algorithm>
using namespace std;
int bin_to_int(const std::string& binary_string);
int main()
{
cout << bin_to_int("111100010") << endl;
cout << 0b111100010 << endl;
return 0;
}
int bin_to_int(const std::string& bin_string){
//Strings index from the left but bits start from the right so reverse it
std::string binary = bin_string;
std::reverse(binary.begin(), binary.end());
int number_length = bin_string.size();
//cout << "bits " << number_length << "\n";
int number = 0;
    for(int i = 0; i < number_length; i++){
int bit_value = 1 << i;
if(binary[i] == '1')
{
//cout << "Adding " << bit_value << "\n";
number += bit_value;
}
}
return number;
}
```
Note that to use the binary literal you will need to compile for c++14.
| 11,664
|
49,677,110
|
I am trying to decorate a function which is already decorated by `@click` and called from the command line.
Normal decoration to capitalise the input could look like this:
**standard\_decoration.py**
```
def capitalise_input(f):
def wrapper(*args):
args = (args[0].upper(),)
f(*args)
return wrapper
@capitalise_input
def print_something(name):
print(name)
if __name__ == '__main__':
print_something("Hello")
```
Then from the command line:
```
$ python standard_decoration.py
HELLO
```
The first example from the [click documentation](http://click.pocoo.org/5/) looks like this:
**hello.py**
```
import click
@click.command()
@click.option('--count', default=1, help='Number of greetings.')
@click.option('--name', prompt='Your name',
help='The person to greet.')
def hello(count, name):
"""Simple program that greets NAME for a total of COUNT times."""
for x in range(count):
click.echo('Hello %s!' % name)
if __name__ == '__main__':
hello()
```
When run from the command line:
```
$ python hello.py --count=3
Your name: John
Hello John!
Hello John!
Hello John!
```
1. What is the correct way to apply a decorator which modifies the inputs to this click decorated function, eg make it upper-case just like the one above?
2. Once a function is decorated by click, would it be true to say that any positional arguments it has are transformed to keyword arguments? It seems that it matches things like `'--count'` with strings in the argument function and then the order in the decorated function no longer seems to matter.
|
2018/04/05
|
[
"https://Stackoverflow.com/questions/49677110",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4288043/"
] |
It appears that click passes keyword arguments. This should work. I think it needs to be the first decorator, i.e. it is called after all of the click methods are done.
```
def capitalise_input(f):
def wrapper(**kwargs):
kwargs['name'] = kwargs['name'].upper()
f(**kwargs)
return wrapper
@click.command()
@click.option('--count', default=1, help='Number of greetings.')
@click.option('--name', prompt='Your name',
help='The person to greet.')
@capitalise_input
def hello(count, name):
....
```
You could also try something like this to be specific about which parameter to capitalize:
```
def capitalise_input(key):
def decorator(f):
def wrapper(**kwargs):
kwargs[key] = kwargs[key].upper()
f(**kwargs)
return wrapper
return decorator
@capitalise_input('name')
def hello(count, name):
```
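One caveat worth adding (my addition, not from the original answer): a plain wrapper like the ones above hides the wrapped function's name and docstring, which can confuse click's help output and debugging. The standard fix is `functools.wraps` — a minimal sketch without the click machinery:

```python
import functools

def capitalise_input(key):
    def decorator(f):
        @functools.wraps(f)            # preserve f.__name__, __doc__, etc.
        def wrapper(**kwargs):
            kwargs[key] = kwargs[key].upper()
            return f(**kwargs)
        return wrapper
    return decorator

@capitalise_input('name')
def hello(count, name):
    return ' '.join('Hello %s!' % name for _ in range(count))

print(hello(count=2, name='john'))  # → Hello JOHN! Hello JOHN!
print(hello.__name__)               # → hello ('wrapper' without functools.wraps)
```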
|
About click command groups - we need to take into account what the documentation says - <https://click.palletsprojects.com/en/7.x/commands/#decorating-commands>
So in the end a simple decorator like this:
```
def sample_decorator(f):
def run(*args, **kwargs):
return f(*args, param="yea", **kwargs)
return run
```
needs to be converted to work with click:
```
from functools import update_wrapper
def sample_decorator(f):
@click.pass_context
def run(ctx, *args, **kwargs):
return ctx.invoke(f, *args, param="yea", **kwargs)
return update_wrapper(run, f)
```
(The documentation suggests using `ctx.invoke(f, ctx.obj,` but that has led to an error of 'duplicite arguments'.)
| 11,665
|
66,035,003
|
Help! I'm trying to install cryptography on my m1. I know I can run terminal in rosetta mode, but I'm wondering if there is a way not to do that.
Output:
```
ERROR: Command errored out with exit status 1:
command: /opt/homebrew/opt/python@3.9/bin/python3.9 /opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/tmpl4sga84k
cwd: /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-install-jko4b562/cryptography_7b1bbc9ece2f481a8e8e9ea03b1a0030
Complete output (55 lines):
=============================DEBUG ASSISTANCE=============================
If you are seeing a compilation error please try the following steps to
successfully install cryptography:
1) Upgrade to the latest pip and try again. This will fix errors for most
users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
2) Read https://cryptography.io/en/latest/installation.html for specific
instructions for your platform.
3) Check our frequently asked questions for more information:
https://cryptography.io/en/latest/faq.html
=============================DEBUG ASSISTANCE=============================
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in <module>
main()
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/opt/homebrew/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 133, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 161, in prepare_metadata_for_build_wheel
self.run_setup()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 145, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 44, in <module>
setup(
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/opt/homebrew/Cellar/python@3.9/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/core.py", line 108, in setup
_setup_distribution = dist = klass(attrs)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 432, in __init__
_Distribution.__init__(self, {
File "/opt/homebrew/Cellar/python@3.9/3.9.1_7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py", line 292, in __init__
self.finalize_options()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 708, in finalize_options
ep(self)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 715, in _finalize_setup_keywords
ep.load()(self, ep.name, value)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 219, in cffi_modules
add_cffi_module(dist, cffi_module)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 49, in add_cffi_module
execfile(build_file_name, mod_vars)
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 25, in execfile
exec(code, glob, glob)
File "src/_cffi_src/build_openssl.py", line 77, in <module>
ffi = build_ffi_for_binding(
File "src/_cffi_src/utils.py", line 54, in build_ffi_for_binding
ffi = build_ffi(
File "src/_cffi_src/utils.py", line 74, in build_ffi
ffi = FFI()
File "/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/cffi/api.py", line 48, in __init__
import _cffi_backend as backend
ImportError: dlopen(/private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so, 2): Symbol not found: _ffi_prep_closure
Referenced from: /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
Expected in: flat namespace
in /private/var/folders/hj/5zfkv68d7lqgrfqt046bn23c0000gn/T/pip-build-env-9bqzge_f/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
```
I've tried to build and run like their instructions say in that code block to the same error. I've looked around and nobody has seemingly found the fix yet, but those things are two months old usually. What am I missing?
|
2021/02/03
|
[
"https://Stackoverflow.com/questions/66035003",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4366541/"
] |
I'm using Macbook Pro M1 2020 model and faced the same issue. The issue was only with my cffi and pip versions maybe. Because these 4 steps helped me -
1. Uninstalling old cffi `pip uninstall cffi`
2. Upgrading pip `python -m pip install --upgrade pip`
3. Reinstalling cffi `pip install cffi`
4. Installing cryptography `pip install cryptography`
|
A little late to the party, but the solutions above didn't work for me. Paul got me on the right track, but my problem was that pyenv used the mac libffi for its build and cffi used the homebrew version. I read this somewhere, can't claim this unique insight.
My solution was to ensure that my python (3.8.13) was built by pyenv using the homebrew libffi by ensuring correct headers libraries and package config:
```
export LDFLAGS="-L$(brew --prefix zlib)/lib -L$(brew --prefix bzip2)/lib -L$(brew --prefix openssl@1.1)/lib -L$(brew --prefix libffi)/lib"
export CPPFLAGS="-I$(brew --prefix zlib)/include -I$(brew --prefix bzip2)/include -I$(brew --prefix openssl@1.1)/include -I$(brew --prefix libffi)/include"
export PKG_CONFIG_PATH="$(brew --prefix openssl@1.1)/lib/pkgconfig:$(brew --prefix libffi)/lib/pkgconfig"
```
rebuilding python...
```
pyenv uninstall 3.8.13
pyenv install 3.8.13
```
killing the pip cache
```
pip cache purge
```
and, finally, reinstalling my dependencies using pipenv
```
pipenv --rm
pipenv sync --dev
```
After these steps, I was free from the dreaded
```
ImportError: dlopen(/private/var/folders/k7/z3mq67_532bdr_rcm2grml240000gn/T/pip-build-env-apk5b25z/overlay/lib/python3.8/site-packages/_cffi_backend.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '_ffi_prep_closure'
```
| 11,667
|
10,442,913
|
I am working on HTML tables using python.
I want to know how I can fetch different column values using lxml.
HTML table :
```
<table border="1">
<tr>
<td>Header_1</td>
<td>Header_2</td>
<td>Header_3</td>
<td>Header_4</td>
</tr>
<tr>
<td>row 1_cell 1</td>
<td>row 1_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
<tr>
<td>row 2_cell 1</td>
<td>row 2_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
<tr>
<td>row 3_cell 1</td>
<td>row 3_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
<tr>
<td>row 4_cell 1</td>
<td>row 4_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
</table>
```
and I am looking to get output as :
```
[
[
('Header_1', 'Header_2'),
('row 1_cell 1', 'row 1_cell 2'),
('row 2_cell 1', 'row 2_cell 2'),
('row 3_cell 1', 'row 3_cell 2'),
('row 4_cell 1', 'row 4_cell 2')
],
[
('Header_1', 'Header_3'),
('row 1_cell 1', 'row 1_cell 3'),
('row 2_cell 1', 'row 2_cell 3'),
('row 3_cell 1', 'row 3_cell 2'),
('row 4_cell 1', 'row 4_cell 3')
]
]
```
How can I fetch such different columns and their values?
|
2012/05/04
|
[
"https://Stackoverflow.com/questions/10442913",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/778942/"
] |
I do not know how you make the choice of Header1+Header2, or Header1+Header3, etc. As the tables must be reasonably small, I suggest collecting all the data first, and only then extracting the wanted subsets of the table. The following code shows a possible solution:
```
import lxml.etree as ET
def parseTable(table_fragment):
header = None # init - only to create the variable (name)
rows = [] # init
# Parse the table with lxml (the standard xml.etree.ElementTree would be also fine).
tab = ET.fromstring(table_fragment)
for tr in tab:
lst = []
if header is None:
header = lst
else:
rows.append(lst)
for e in tr:
lst.append(e.text)
return header, rows
def extractColumns(header, rows, clst):
header2 = []
for i in clst:
header2.append(header[i - 1]) # one-based to zero-based
rows2 = []
for row in rows:
lst = []
rows2.append(lst)
for i in clst:
lst.append(row[i - 1]) # one-based to zero-based
return header2, rows2
def myRepr(header, rows):
out = [repr(tuple(header))] # init -- list with header
for row in rows:
out.append(repr(tuple(row))) # another row
return '[\n' + (',\n'.join(out)) + '\n]' # join to string
table_fragment = '''\
<table border="1">
<tr>
<td>Header_1</td>
<td>Header_2</td>
<td>Header_3</td>
<td>Header_4</td>
</tr>
<tr>
<td>row 1_cell 1</td>
<td>row 1_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
<tr>
<td>row 2_cell 1</td>
<td>row 2_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
<tr>
<td>row 3_cell 1</td>
<td>row 3_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
<tr>
<td>row 4_cell 1</td>
<td>row 4_cell 2</td>
<td>row 1_cell 3</td>
<td>row 1_cell 4</td>
</tr>
</table>'''
# Parse the table
header, rows = parseTable(table_fragment)
# For debugging...
print header
print rows
# Collect the representations of the selections. The extractColumns()
# returns a tuple. The * expands it to two arguments.
lst = []
lst.append(myRepr(header, rows))
lst.append(myRepr(*extractColumns(header, rows, [1, 2])))
lst.append(myRepr(*extractColumns(header, rows, [1, 3])))
lst.append(myRepr(*extractColumns(header, rows, [1, 2, 4])))
# Write the output.
with open('output.txt', 'w') as f:
f.write('[\n')
f.write(',\n'.join(lst))
f.write('\n]')
```
The output.txt now contains:
```
[
[
('Header_1', 'Header_2', 'Header_3', 'Header_4'),
('row 1_cell 1', 'row 1_cell 2', 'row 1_cell 3', 'row 1_cell 4'),
('row 2_cell 1', 'row 2_cell 2', 'row 1_cell 3', 'row 1_cell 4'),
('row 3_cell 1', 'row 3_cell 2', 'row 1_cell 3', 'row 1_cell 4'),
('row 4_cell 1', 'row 4_cell 2', 'row 1_cell 3', 'row 1_cell 4')
],
[
('Header_1', 'Header_2'),
('row 1_cell 1', 'row 1_cell 2'),
('row 2_cell 1', 'row 2_cell 2'),
('row 3_cell 1', 'row 3_cell 2'),
('row 4_cell 1', 'row 4_cell 2')
],
[
('Header_1', 'Header_3'),
('row 1_cell 1', 'row 1_cell 3'),
('row 2_cell 1', 'row 1_cell 3'),
('row 3_cell 1', 'row 1_cell 3'),
('row 4_cell 1', 'row 1_cell 3')
],
[
('Header_1', 'Header_2', 'Header_4'),
('row 1_cell 1', 'row 1_cell 2', 'row 1_cell 4'),
('row 2_cell 1', 'row 2_cell 2', 'row 1_cell 4'),
('row 3_cell 1', 'row 3_cell 2', 'row 1_cell 4'),
('row 4_cell 1', 'row 4_cell 2', 'row 1_cell 4')
]
]
```
|
Look into LXML as an html/xml parser that you could use. Then simply make a recursive function.
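To sketch that idea concretely (my code, not the answerer's; it uses the stdlib `xml.etree.ElementTree`, which the answer above notes works just as well for this well-formed markup), collecting rows and then projecting columns takes only a few lines:

```python
import xml.etree.ElementTree as ET

html = '''<table>
<tr><td>Header_1</td><td>Header_2</td><td>Header_3</td></tr>
<tr><td>row 1_cell 1</td><td>row 1_cell 2</td><td>row 1_cell 3</td></tr>
<tr><td>row 2_cell 1</td><td>row 2_cell 2</td><td>row 2_cell 3</td></tr>
</table>'''

# The question's markup is well-formed XML, so the stdlib parser is enough here.
table = ET.fromstring(html)
rows = [[td.text for td in tr.findall('td')] for tr in table.findall('.//tr')]

def columns(rows, *idx):
    """Project the table onto the given zero-based column indices."""
    return [tuple(row[i] for i in idx) for row in rows]

print(columns(rows, 0, 1))
# → [('Header_1', 'Header_2'), ('row 1_cell 1', 'row 1_cell 2'), ('row 2_cell 1', 'row 2_cell 2')]
```

Real-world HTML is rarely well-formed, which is where `lxml.html` earns its keep; the column projection afterwards is identical.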
| 11,677
|
18,732,803
|
So I'm trying to build an insult generator that will take lists, randomize the inputs, and show the randomized code at the push of a button.
Right now, the code looks like...
```
import Tkinter
import random
section1 = ["list of stuff"]
section2 = ["list of stuff"]
section3 = ["list of stuff"]
class myapp(Tkinter.Tk):
def __init__(self,parent):
Tkinter.Tk.__init__(self, parent)
self.parent = parent
self.initialize()
def initialize(self):
self.grid() # creates grid layout manager where we can place our widgets within the window
button = Tkinter.Button(self, text=u"Generate!", command=self.OnButtonClick)
button.grid(column=1, row=0)
self.labelVariable = Tkinter.StringVar()
label = Tkinter.Label(self, textvariable=self.labelVariable, anchor='w', fg='white', bg='green')
label.grid(column=0, row=1, columnspan=2, sticky='EW')
self.labelVariable.set(u"Oh hi there !")
self.grid_columnconfigure(0, weight=1)
self.resizable(True, False)
self.update()
self.geometry(self.geometry())
def generator():
a = random.randint(0, int(len(section1))-1)
b = random.randint(0, int(len(section2))-1)
c = random.randint(0, int(len(section3))-1)
myText = "You are a "+ section1[a]+" "+section2[b]+'-'+section3[c]+"! Fucker."
return myText
def OnButtonClick(self):
self.labelVariable.set(myText + "(You clicked the button !)")
self.entry.focus_set()
self.entry.selection_range(0, Tkinter.END)
if __name__=='__main__':
app = myapp(None) # Instantiates the class
app.title('Random Insult Generator') # Names the window we're creating.
app.mainloop() # Program will loop indefinitely, awaiting input
```
Right now, the error it's giving is that `myText` isn't defined.
Any thoughts on how to fix it?
**Edit:**
The error message is...
```
Exception in Tkinter callback
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 1470, in __call__
return self.func(*args)
File "...", line 41, in OnButtonClick
self.labelVariable.set(myText+"(You clicked the button !)")
NameError: global name 'myText' is not defined
```
|
2013/09/11
|
[
"https://Stackoverflow.com/questions/18732803",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2658570/"
] |
```
def OnButtonClick(self):
myText = self.generator() # CALL IT!
self.labelVariable.set(myText+"(You clicked the button !)")
self.entry.focus_set()
self.entry.selection_range(0,Tkinter.END)
```
AND
```
def generator(self):....
```
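Putting both fixes together — `generator` becomes an instance method and its result is captured before use — a minimal runnable sketch without the Tkinter parts (the word lists are hypothetical stand-ins, since the originals were elided):

```python
import random

class InsultApp:
    # stand-in word lists (hypothetical values; the question elided the real ones)
    section1 = ['silly']
    section2 = ['goose']
    section3 = ['brain']

    def generator(self):  # instance method, so it must take self
        return 'You are a %s %s-%s!' % (
            random.choice(self.section1),
            random.choice(self.section2),
            random.choice(self.section3),
        )

app = InsultApp()
print(app.generator())  # → You are a silly goose-brain!
```

Note that `random.choice(seq)` replaces the manual `random.randint(0, len(seq)-1)` indexing from the question.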
|
In the `OnButtonClick` function, change the second line: replace `myText` with a call to `self.generator()` (and give `generator` a `self` parameter so it can be called as a method).
| 11,678
|
62,403,240
|
I was doing some question in C and I was asked to provide the output of this question :
```
#include <stdio.h>
int main()
{
float a =0.7;
if(a<0.7)
{
printf("Yes");
}
else{
printf("No");
}
}
```
By just looking at the problem I thought the answer would be *NO* but after running I found that it was *YES*
I searched the web about float and found [0.30000000000000004.com](https://0.30000000000000004.com/)
Just out of curiosity I ran the same code in python :
```
x = float(0.7)
if x < 0.7 :
print("YES")
else :
print("NO")
```
Here the output is *NO*
I am confused!
Maybe I am missing something
Please help me with this problem.
Thanks in advance!
|
2020/06/16
|
[
"https://Stackoverflow.com/questions/62403240",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9715289/"
] |
```
float a = 0.7;
if(a<0.7)
```
The first line above takes the `double` `0.7` and crams it into a `float`, which almost certainly has less precision (so you may lose information).
The second line upgrades the `float a` to a `double` (because you're comparing it with a `double 0.7`, and that's one of the things C does for you) but it's too late at that point, the information is already gone.
You can see this effect with:
```
#include <stdio.h>
int main(void) {
float a = 0.7;
float b = 0.7f;
double c = 0.7;
printf("a %.50g\nb %.50g\nc %.50g\n", a, b, c);
return 0;
}
```
which generates something like:
```
a 0.699999988079071044921875
b 0.699999988079071044921875
c 0.69999999999999995559107901499373838305473327636719
```
*Clearly,* the `double c` variable has about twice the precision (which is why the types are often referred to as single and double precision) of both:
* the `double 0.7` crammed into the `float a` variable; and
* the `float b` variable that had the `float 0.7` stored into it.
*Neither* of them is exactly `0.7` due to the way floating point numbers work, but the `double` is closer to the desired value, hence not equal to the `float`.
It's like pouring a full four-litre bucket of water into a three-litre bucket and then back again. The litre you lost in the overflow of the smaller bucket doesn't magically re-appear :-)
If you change the type of your `a` to `double`, or use `float` literals like `0.7f`, you'll find things work more as you expect, since there's no loss of precision in that case.
---
The reason why you don't see the same effect in Python is because there's *one* underlying type for these floating point values:
```
>>> x = float(.7)
>>> type(x)
<class 'float'>
>>> type(.7)
<class 'float'>
```
From the Python docs:
>
> There are three distinct numeric types: integers, floating point numbers, and complex numbers. In addition, Booleans are a subtype of integers. Integers have unlimited precision. ***Floating point numbers are usually implemented using double in C.***
>
>
>
Hence no loss of precision in that case.
The use of `double` seems to be confirmed by (slightly reformatted):
```
>>> import sys
>>> print(sys.float_info)
sys.float_info(
max=1.7976931348623157e+308,
max_exp=1024,
max_10_exp=308,
min=2.2250738585072014e-308,
min_exp=-1021,
min_10_exp=-307,
dig=15,
mant_dig=53,
epsilon=2.220446049250313e-16,
radix=2,
rounds=1
)
```
The exponents and min/max values are identical to those found in [IEEE754 double precision](https://en.wikipedia.org/wiki/IEEE_754-1985) values.
|
In `a<0.7` the constant `0.7` is a `double`, so `a`, which is a `float`, is promoted
to `double` before the comparison.
Nothing guarantees that these two constants (as `float` and as `double`) are the same.
As `float`, the 23-bit mantissa of `0.7` is `01100110011001100110011`; as `double`, the 52-bit mantissa of `0.7` is `0110011001100110011001100110011001100110011001100110`.
The value converted from `float` has its mantissa padded with `0`s when promoted to `double`.
Comparing these two bit sequences shows that the `double` constant is greater than the promoted `float` constant (the first `1` of the `double` mantissa that falls in the zero-padded region tips the comparison), which
leads to displaying `"Yes"`.
On the other hand, in python, only the `double` representation exists
for floating point numbers; thus there is no difference between what
is stored in `a` and the constant `0.7` of the comparison, which leads
to displaying `"No"`.
| 11,681
|
70,915,615
|
I am trying to use a parent class as a blueprint for new classes.
E.g. the `FileValidator` contains all generic attributes and methods for a generic file. Then I want to create, for example, an `ImageValidator` inheriting everything from the `FileValidator` but with additional, more specific attributes, methods, etc. In this example the child class is called `FileValidatorPlus`.
My understanding was, that if I inherit the parent class I can just plug-in more attributes/methods without repeating anything, like just adding `min_size`. But the following code gives: `TypeError: FileValidatorPlus.__init__() got an unexpected keyword argument 'max_size'`
```
class File:
def __init__(self, file_size=100):
self.file_size = file_size
class FileValidator(object):
error_messages = {'max_size': 'some error msg'}
def __init__(self, max_size=None):
self.max_size = max_size
def __call__(self, file):
print(self.max_size > file.file_size)
class FileValidatorPlus(FileValidator):
error_messages = {'min_size': 'some other error msg'}
def __init__(self, min_size=None):
super(FileValidatorPlus, self).__init__()
self.error_messages.update(super(FileValidatorPlus, self).error_messages)
self.min_size = min_size
validator = FileValidatorPlus(max_size=10, min_size=20)
print(validator.error_messages)
print(validator.max_size)
print(validator.min_size)
```
Full Traceback:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
TypeError: FileValidatorPlus.__init__() got an unexpected keyword argument 'max_size'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/.pycharm_helpers/pydev/_pydev_bundle/pydev_code_executor.py", line 108, in add_exec
more, exception_occurred = self.do_add_exec(code_fragment)
File "/opt/.pycharm_helpers/pydev/pydevconsole.py", line 90, in do_add_exec
command.run()
File "/opt/.pycharm_helpers/pydev/_pydev_bundle/pydev_console_types.py", line 35, in run
self.more = self.interpreter.runsource(text, '<input>', symbol)
File "/usr/local/lib/python3.10/code.py", line 74, in runsource
self.runcode(code)
File "/usr/local/lib/python3.10/code.py", line 94, in runcode
self.showtraceback()
File "/usr/local/lib/python3.10/code.py", line 148, in showtraceback
sys.excepthook(ei[0], ei[1], last_tb)
File "/opt/.pycharm_helpers/pydev/pydevconsole.py", line 112, in info
traceback.print_exception(type, value, tb)
NameError: name 'traceback' is not defined
Traceback (most recent call last):
File "/opt/.pycharm_helpers/pydev/pydevconsole.py", line 284, in process_exec_queue
interpreter.add_exec(code_fragment)
File "/opt/.pycharm_helpers/pydev/_pydev_bundle/pydev_code_executor.py", line 132, in add_exec
return more, exception_occurred
UnboundLocalError: local variable 'exception_occurred' referenced before assignment
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/.pycharm_helpers/pydev/pydevconsole.py", line 511, in <module>
pydevconsole.start_server(port)
File "/opt/.pycharm_helpers/pydev/pydevconsole.py", line 407, in start_server
process_exec_queue(interpreter)
File "/opt/.pycharm_helpers/pydev/pydevconsole.py", line 292, in process_exec_queue
traceback.print_exception(type, value, tb, file=sys.__stderr__)
UnboundLocalError: local variable 'traceback' referenced before assignment
Process finished with exit code 1
```
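The error above boils down to `FileValidatorPlus.__init__` not accepting the parent's keyword arguments. A minimal sketch of the usual fix (my addition, not from the original post) is to accept `**kwargs` in the subclass and forward them to the parent:

```python
class FileValidator:
    def __init__(self, max_size=None):
        self.max_size = max_size

class FileValidatorPlus(FileValidator):
    def __init__(self, min_size=None, **kwargs):
        super().__init__(**kwargs)   # max_size travels through to the parent
        self.min_size = min_size

v = FileValidatorPlus(max_size=10, min_size=20)
print(v.max_size, v.min_size)   # → 10 20
```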
|
2022/01/30
|
[
"https://Stackoverflow.com/questions/70915615",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11971785/"
] |
Consider below approach
```
with example as (
select '670000000000100000000000000000000000000000000000000000000000000' as s
)
select s, (select sum(cast(num as int64)) from unnest(split(s,'')) num) result
from example
```
with output
[](https://i.stack.imgur.com/aiBCO.png)
|
Yet another [fun] option
```
create temp function sum_digits(expression string)
returns int64
language js as """
return eval(expression);
""";
with example as (
select '670000000000100000000000000000000000000000000000000000000000000' as s
)
select s, sum_digits(regexp_replace(replace(s, '0', ''), r'(\d)', r'+\1')) result
from example
```
with output
[](https://i.stack.imgur.com/BUflp.png)
What it does is -
* first it transform initial long string into shorter one - `671`.
* then it transforms it into expression - `+6+7+1`
* and finally pass it to javascript `eval` function *(unfortunately BigQuery does not have [hopefully yet] an `eval` function)*
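For comparison (my addition, not part of the original answer), the same digit sum is a one-liner in Python:

```python
s = '670000000000100000000000000000000000000000000000000000000000000'
print(sum(int(ch) for ch in s))   # → 14
```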
| 11,682
|
8,219,630
|
As a developer that has worked on more than one python project at once, I love the idea of Virtualenv. But, I'm currently trying to get Komodo IDE to play nice with VirtualEnv on a Windows box. I've downloaded virtualenvwrapper-win and got it working (btw, if you are using Virtualenv on Windows you should check it out):
<http://pypi.python.org/pypi/virtualenvwrapper-win>
however, I can't quite figure out what I need to do to get Komodo IDE to respect it all. I found the following for Mac users:
<http://blog.haydon.id.au/2010/11/taming-komodo-dragon-for-virtualenv-and.html>
But, so far, no luck. I'm pretty sure that I need to set a postactivate script to set some environment variables for Komodo to pick up.
Has anyone gotten this working before?
I'm using:
Win7, Python 2.6, Komodo IDE 6.1.3
Thanks in advance!
|
2011/11/21
|
[
"https://Stackoverflow.com/questions/8219630",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/265681/"
] |
I finally ended up posting the same question on the ActiveState forum. The reply was that it doesn't officially support VirtualEnv yet, but that you can get it to work by adjusting the paths, etc. Here is the link to the question/reply.
<http://community.activestate.com/node/7499>
|
You can do this by adding the virtualenv's Python library to the project. Right-click on Project > Properties > Languages > Python > Additional Python Import Directories.
Now if someone could tell me how to add a folder like that in Mac when the virtualenv is under a hidden folder (without turning hidden folders on in Finder).
| 11,683
|
41,846,466
|
I am currently experimenting with Behavioral Driven Development. I am using behave\_django with selenium. I get the following output
```
Creating test database for alias 'default'...
Feature: Open website and print title # features/first_selenium.feature:1
Scenario: Open website # features/first_selenium.feature:2
Given I open seleniumframework website # features/steps/first_selenium.py:2 0.001s
Traceback (most recent call last):
File "/home/vagrant/newproject3/newproject3/venv/local/lib/python2.7/site-packages/behave/model.py", line 1456, in run
match.run(runner.context)
File "/home/vagrant/newproject3/newproject3/venv/local/lib/python2.7/site-packages/behave/model.py", line 1903, in run
self.func(context, *args, **kwargs)
File "features/steps/first_selenium.py", line 4, in step_impl
context.browser.get("http://www.seleniumframework.com")
File "/home/vagrant/newproject3/newproject3/venv/local/lib/python2.7/site-packages/behave/runner.py", line 214, in __getattr__
raise AttributeError(msg)
AttributeError: 'Context' object has no attribute 'browser'
Then I print the title # None
Failing scenarios:
features/first_selenium.feature:2 Open website
0 features passed, 1 failed, 0 skipped
0 scenarios passed, 1 failed, 0 skipped
0 steps passed, 1 failed, 1 skipped, 0 undefined
Took 0m0.001s
Destroying test database for alias 'default'...
```
Here is the code:
first\_selenium.feature
```
Feature: Open website and print title
Scenario: Open website
Given I open seleniumframework website
Then I print the title
```
first\_selenium.py
```
from behave import *
@given('I open seleniumframework website')
def step_impl(context):
context.browser.get("http://www.seleniumframework.com")
@then('I print the title')
def step_impl(context):
title = context.browser.title
assert "Selenium" in title
```
manage.py
```
#!/home/vagrant/newproject3/newproject3/venv/bin/python
import os
import sys
sys.path.append("/home/vagrant/newproject3/newproject3/site/v2/features")
import dotenv
if __name__ == "__main__":
path = os.path.realpath(os.path.dirname(__file__))
dotenv.load_dotenv(os.path.join(path, '.env'))
from configurations.management import execute_from_command_line
#from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
```
I'm not sure what this error means
|
2017/01/25
|
[
"https://Stackoverflow.com/questions/41846466",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7402682/"
] |
I know it is a late answer but maybe somebody will profit from it:
you need to declare `context.browser` (in a `before_all`/`before_scenario`/`before_feature` hook definition, or in the step definition itself) before you use it, e.g.:
```
context.browser = webdriver.Chrome()
```
Please note that the hooks must be defined in a separate environment.py module
|
In my case the browser wasn't installed. That can be the case too. Also ensure the path to geckodriver is exposed if you are working with Firefox.
| 11,686
|
18,005,365
|
I need to start a python script with bash using nohup passing an arg that aids in defining a constant in a script I import. There are lots of questions about passing args but I haven't found a successful way using nohup.
a simplified version of my bash script:
```
#!/bin/bash
BUCKET=$1
echo $BUCKET
script='/home/path/to/script/script.py'
echo "starting $script with nohup"
nohup /usr/bin/python $script $BUCKET &
```
the relevant part of my config script i'm importing:
```
FLAG = sys.argv[0]
if FLAG == "b1":
AWS_ACCESS_KEY_ID = "key"
BUCKET = "bucket1"
AWS_SECRET_ACCESS_KEY = "secret"
elif FLAG == "b2":
AWS_ACCESS_KEY_ID = "key"
BUCKET = "bucket2"
AWS_SECRET_ACCESS_KEY = "secret"
else:
AWS_ACCESS_KEY_ID = "key"
BUCKET = "bucket3"
AWS_SECRET_ACCESS_KEY = "secret"
```
the script thats using it:
```
from config import BUCKET, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
#do stuff with the values.
```
Frankly, since I'm passing the args to script.py, I'm not confident that they'll be in scope for the import script. That said, when I take a similar approach without using nohup, it works.
|
2013/08/01
|
[
"https://Stackoverflow.com/questions/18005365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1901847/"
] |
In general, the argument vector for any program starts with the program itself, and then all of its arguments and options. Depending on the language, the program may be `sys.argv[0]`, `argv[0]`, `$0`, or something else, but it's basically always argument #0.
Each program whose job is to run another program—like `nohup`, and like the Python interpreter itself—generally drops itself and all of its own options, and gives the target program the rest of the command line.
So, [`nohup`](http://linux.die.net/man/1/nohup) takes a `COMMAND` and zero or more `ARGS`. Inside that `COMMAND`, `argv[0]` will be `COMMAND` itself (in this case, `'/usr/bin/python'`), and `argv[1]` and later will be the additional arguments (`'/home/path/to/script/script.py'` and whatever `$BUCKET` resolves to).
Next, Python takes zero or more options, a script, and zero or more args to that script, and exposes the script and its args as [`sys.argv`](http://docs.python.org/3/library/sys.html#sys.argv). So, in your script, `sys.argv[0]` will be `'/home/path/to/script/script.py'`, and `sys.argv[1]` will be whatever `$BUCKET` resolves to.
And `bash` works similarly to Python; `$1` will be the first argument to the bash wrapper script (`$0` will be the script itself), and so on. So, `sys.argv[1]` in the inner Python script will end up getting the first argument passed to the bash wrapper script.
Importing doesn't affect `sys.argv` at all. So, in both your `config` module and your top-level script, if you `import sys`, `sys.argv[1]` will hold the `$1` passed to the bash wrapper script.
(On some platforms, in some circumstances `argv[0]` may not have the complete path, or may even be empty. But that isn't relevant here. What you care about is the eventual `sys.argv[1]`, and `bash`, `nohup`, and `python` are all guaranteed to pass that through untouched.)
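A minimal sketch of the indexing described above (the `bucket_for` function and the flag-to-bucket mapping here are illustrative stand-ins, not the original config module):

```python
import sys

# Illustrative flag-to-bucket mapping (a stand-in for the original config logic).
BUCKETS = {"b1": "bucket1", "b2": "bucket2"}

def bucket_for(argv):
    # argv[0] is the script path itself; the first real argument is argv[1].
    flag = argv[1] if len(argv) > 1 else None
    return BUCKETS.get(flag, "bucket3")

print(bucket_for(["script.py", "b1"]))  # bucket1
print(bucket_for(["script.py"]))        # bucket3 (no flag given)
```

In the original config module, replacing `sys.argv[0]` with `sys.argv[1]` is the key fix, since `argv[0]` is the script path, never the bucket flag.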
|
```
nohup python3 -u ./train.py --dataset dataset_directory/ --model model_output_directory > output.log &
```
Here I'm executing the train.py file with python3. The `-u` flag disables output buffering so the logs appear on the go without being held back, `--dataset dataset_directory/` and `--model model_output_directory` are the script's own arguments, the greater-than symbol (**>**) redirects the logs into **output.log**, and the trailing ampersand (**&**) runs the process in the background.
To terminate this process
```
ps ax | grep train
```
then note the process\_ID
```
sudo kill -9 Process_ID
```
| 11,687
|
18,968,607
|
I'm trying to select timestamps columns from Cassandra 2.0 using cqlengine or cql(python), and i'm getting wrong results.
This is what i get from cqlsh ( or thrift ):
"2013-09-23 00:00:00-0700"
This is what i get from cqlengine and cql itself:
"\x00\x00\x01AG\x0b\xd5\xe0"
If you wanna reproduce the error, try this:
* open cqlsh
* create table test (name varchar primary key, dt timestamp)
* insert into table test ('Test', '2013-09-23 12:00') <<< Yes, i have tried to add by another ways....
* select \* from test ( Here it's everything fine )
* Now go on cqlengine or cql itself and select that table and you will get a broken hexadecimal.
Thanks !
|
2013/09/23
|
[
"https://Stackoverflow.com/questions/18968607",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2808750/"
] |
Unfortunately, cqlengine is not currently compatible with cassandra 2.0
There were some new types introduced with Cassandra 2.0, and we haven't had a chance to make cqlengine compatible with them. I'm also aware of a problem with blob columns.
This particular issue is caused by the cql driver returning the timestamp as a raw string of bytes, as opposed to an integer.
Since cqlengine does not support Cassandra 2.0 yet, your best bet is to use Cassandra 1.2.x until we can get it updated, cqlengine doesn't support any of the new 2.0 features anyway. If you really need to use 2.0, you can work around this problem by subclassing the DateTime column like so:
```
class NewDateTime(DateTime):
def to_python(self, val):
if isinstance(val, basestring):
val = struct.unpack('!Q', val)[0] / 1000.0
return super(NewDateTime, self).to_python(val)
```
|
The `timestamp` datatype stores values as the number of milliseconds since the epoch, in a long. It seems that however you are printing it is interpreting it as a string. This works for me using cql-dbapi2 after creating and inserting as in the question:
```
>>> import cql
>>> con = cql.connect('localhost', keyspace='ks', cql_version='3.0.0')
>>> cursor = con.cursor()
>>> cursor.execute('select * from test;')
True
>>> cursor.fetchone()
[u'Test', 1379934000.0]
```
| 11,688
|
29,320,466
|
I have tried to use [emcee](http://dan.iel.fm/emcee/current/user/advanced/) library to implement Monte Carlo Markov Chain inside a class and also make multiprocessing module works but after running such a test code:
```
import numpy as np
import emcee
import scipy.optimize as op
# Choose the "true" parameters.
m_true = -0.9594
b_true = 4.294
f_true = 0.534
# Generate some synthetic data from the model.
N = 50
x = np.sort(10*np.random.rand(N))
yerr = 0.1+0.5*np.random.rand(N)
y = m_true*x+b_true
y += np.abs(f_true*y) * np.random.randn(N)
y += yerr * np.random.randn(N)
class modelfit():
def __init__(self):
self.x=x
self.y=y
self.yerr=yerr
self.m=-0.6
self.b=2.0
self.f=0.9
def get_results(self):
def func(a):
model=a[0]*self.x+a[1]
inv_sigma2 = 1.0/(self.yerr**2 + model**2*np.exp(2*a[2]))
return 0.5*(np.sum((self.y-model)**2*inv_sigma2 + np.log(inv_sigma2)))
result = op.minimize(func, [self.m, self.b, np.log(self.f)],options={'gtol': 1e-6, 'disp': True})
m_ml, b_ml, lnf_ml = result["x"]
return result["x"]
def lnprior(self,theta):
m, b, lnf = theta
if -5.0 < m < 0.5 and 0.0 < b < 10.0 and -10.0 < lnf < 1.0:
return 0.0
return -np.inf
def lnprob(self,theta):
lp = self.lnprior(theta)
likelihood=self.lnlike(theta)
if not np.isfinite(lp):
return -np.inf
return lp + likelihood
def lnlike(self,theta):
m, b, lnf = theta
model = m * self.x + b
inv_sigma2 = 1.0/(self.yerr**2 + model**2*np.exp(2*lnf))
return -0.5*(np.sum((self.y-model)**2*inv_sigma2 - np.log(inv_sigma2)))
def run_mcmc(self,nstep):
ndim, nwalkers = 3, 100
pos = [self.get_results() + 1e-4*np.random.randn(ndim) for i in range(nwalkers)]
self.sampler = emcee.EnsembleSampler(nwalkers, ndim, self.lnprob,threads=10)
self.sampler.run_mcmc(pos, nstep)
test=modelfit()
test.x=x
test.y=y
test.yerr=yerr
test.get_results()
test.run_mcmc(5000)
```
I got this error message :
```
File "MCMC_model.py", line 157, in run_mcmc
self.sampler.run_mcmc(theta0, nstep)
File "build/bdist.linux-x86_64/egg/emcee/sampler.py", line 157, in run_mcmc
File "build/bdist.linux-x86_64/egg/emcee/ensemble.py", line 198, in sample
File "build/bdist.linux-x86_64/egg/emcee/ensemble.py", line 382, in _get_lnprob
File "build/bdist.linux-x86_64/egg/emcee/interruptible_pool.py", line 94, in map
File "/vol/aibn84/data2/zahra/anaconda/lib/python2.7/multiprocessing/pool.py", line 558, in get
raise self._value
cPickle.PicklingError: Can't pickle <type 'instancemethod'>: attribute lookup __builtin__.instancemethod failed
```
I reckon it has something to do with how I have used **multiprocessing** in the **class** but I could not figure out how I could keep the structure of my class the way it is and meanwhile use multiprocessing as well??!!
I will appreciate for any tips.
P.S. I have to mention the code works perfectly if I remove `threads=10` from the last function.
|
2015/03/28
|
[
"https://Stackoverflow.com/questions/29320466",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2811074/"
] |
There are a number of SO questions that discuss what's going on:
1. <https://stackoverflow.com/a/21345273/2379433>
2. <https://stackoverflow.com/a/28887474/2379433>
3. <https://stackoverflow.com/a/21345308/2379433>
4. <https://stackoverflow.com/a/29129084/2379433>
…including this one, which seems to be your response… to nearly the same question:
5. <https://stackoverflow.com/a/25388586/2379433>
However, the difference here is that you are not using `multiprocessing` directly -- but `emcee` is. Therefore, the `pathos.multiprocessing` solution (from the links above) is not available for you. Since `emcee` uses `cPickle`, you'll have to stick to things that `pickle` knows how to serialize. You are out of luck for class instances. Typical workarounds are to either use `copy_reg` to register the type of object you want to serialize, or to add a `__reduce__` method to tell python how to serialize it. Several of the answers in the above links suggest similar things… but none enable you to keep the class the way you have written it.
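A minimal sketch of the `__reduce__` idea (the class here is a toy stand-in for the model class, and note that modern Python 3 can already pickle bound methods of picklable instances, so this mattered most on Python 2):

```python
import pickle

class ModelFit:
    def __init__(self, x):
        self.x = x

    def lnprob(self, theta):
        # Toy log-probability, just to have a method to call after unpickling.
        return -0.5 * (theta - self.x) ** 2

    def __reduce__(self):
        # Tell pickle how to rebuild this object: call ModelFit(self.x).
        return (ModelFit, (self.x,))

original = ModelFit(2.0)
clone = pickle.loads(pickle.dumps(original))
print(clone.lnprob(3.0))  # -0.5
```

Once the instance itself round-trips through `pickle`, worker processes can reconstruct it and call its methods.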
|
For the record, you can now create a `pathos.multiprocessing` pool, and pass it to emcee using the `pool` argument. However, be aware that the overhead of multiprocessing can actually slow things down, unless your likelihood is particularly time-consuming to compute.
| 11,689
|
70,747,394
|
I am trying to check if a user input as a string exists in a list called categoriesList which appends categories from a text file named categories.txt. If the user inputs a category that then exists in categoriesList my code should be able to print out "Category exists", otherwise "Category doesn't exist".
Here is the code:
```
categoriesList = []
with open("categories.txt", "r") as OpenCategories:
for category in (OpenCategories):
categoriesList.append(category)
while True:
inputCategories = input("Please enter a category:")
if inputCategories in categoriesList:
print("Category exists")
break
else:
print("Category doesn't exist")
break
```
When I run this code it always outputs Category doesn't exist even if the category I enter actually exists in categoriesList. How would I solve this problem in the code? Furthermore, I want to be able to get one input from the user for entering a category so I don't want "Please enter a category" to come up several times, I just want the code to make it come up just once.
Also, it would be much appreciated if I could know the code on how I would then do all of the above in tkinter as I need to do the above in GUI. I think you need to have labels and allow the user to enter a category in a box on the screen.
I have tried to make code which tries to check a user input exists in a list after getting the input on a tk screen as its not enough for me to just have the check happening in a python console and its not doing it properly, so here is the code:
```
import tkinter as tk
from tkinter import ttk
window=tk.Tk()
canvas1 = tk.Canvas(window, width = 400, height = 300)
canvas1.pack()
label1 = Label(window, text="Please enter a category:")
label1.pack()
entry = Entry(window, width=50)
entry.pack()
def for_button():
checkUserInput = entry.get()
button = Button(window, text="Check", command=for_button)
button.pack()
for i in categoriesList:
if button in categoriesList:
categoryExist = Label(window, text="Category exists")
categoryExist.pack()
else:
categoryNotExist= Label(window, text="Category doesn't exist")
categoryNotExist.pack()
window.mainloop()
```
It uses the list categoriesList from the code eariler in the post that was given and I am trying to get the user to enter a category into the text box on the tk screen and click "check" button afterwards but before the user can give an input "category doesn't exist" comes up numerous times which is what I don't want the code to be doing.
|
2022/01/17
|
[
"https://Stackoverflow.com/questions/70747394",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17928821/"
] |
Your `STATICFILES_FINDERS` setting tells Django that it should look for static files in the following places:
* `FileSystemFinder` tells it to look in whichever locations are listed in STATICFILES\_DIRS;
* `AppDirectoriesFinder` tells it to look in the `static` folder of each registered app in INSTALLED\_APPS.
In normal circumstances, STATICFILES\_DIRS should not make a difference to Wagtail's own static files. This is because Wagtail's static files are stored within the apps that make up the Wagtail package, and will be pulled in by AppDirectoriesFinder - FileSystemFinder (and STATICFILES\_DIRS) do not come into play.
The fact that you're seeing a difference suggests to me that you've previously customised Wagtail's JS / CSS by placing static files within your project's 'static' folder, in a location such as `myproject/static/wagtailadmin/css/`, to override the built-in files. These customisations would presumably have been made against Wagtail 2.8 and will not behave correctly against Wagtail 2.15. The solution is to remove these custom files from your project.
|
Try changing:
`STATICFILES_DIRS = [os.path.join(PROJECT_DIR, 'static'),]`
to
`STATICFILES_DIRS = [os.path.join(BASE_DIR, 'static'),]`
| 11,690
|
15,128,404
|
I am making a GUI in wxpython.
I want to place images next to radio buttons.
How should i do that in wxpython?
|
2013/02/28
|
[
"https://Stackoverflow.com/questions/15128404",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2118322/"
] |
I suggest using wx.ToggleButton with bitmap labels if you are using 2.9, or one of the bitmap toggle button classes in wx.lib.buttons if you are still on 2.8. You can then implement the "radio button" functionality yourself by untoggling all other buttons in the group when one of them is toggled. Using the bitmap itself as the radio button will look nicer and will save space.
|
I'm not sure what you mean. Are you wanting images instead of the actual radio button itself? That is not supported. If you want an image in addition to the radio button, then just use a group of horizontal box sizers or one of the grid sizers. Add the image and then the radio button. And you're done!
| 11,691
|
60,976,753
|
well i have this DF in python
```
folio id_incidente nombre app apm \
0 1 1 SIN DATOS SIN DATOS SIN DATOS
1 131 100085 JUAN DOMINGO GONZALEZ DELGADO
2 132 100085 FRANCISCO JAVIER VELA RAMIREZ
3 133 100087 JUAN CARLOS PEREZ MEDINA
4 134 100088 ARMANDO SALINAS SALINAS
... ... ... ... ... ...
1169697 1223258 866846 IVAN RIVERA SILVA
1169698 1223259 866847 EDUARDO PLASCENCIA MARTINEZ
1169699 1223260 866848 FRANCISCO JAVIER PLASCENCIA MARTINEZ
1169700 1223261 866849 JUAN ALBERTO MARTINEZ ARELLANO
1169701 1223262 866850 JOSE DE JESUS SERRANO GONZALEZ
foto_barandilla fecha_hora_registro
0 1.jpg 0/0/0000 00:00:00
1 131.jpg 2008-08-07 15:42:25
2 132.jpg 2008-08-07 15:50:42
3 133.jpg 2008-08-07 16:37:24
4 134.jpg 2008-08-07 17:18:12
... ... ...
1169697 20200330103123_239288573.jpg 2020-03-30 10:32:10
1169698 20200330103726_1160992585.jpg 2020-03-30 10:38:25
1169699 20200330103837_999151106.jpg 2020-03-30 10:39:44
1169700 20200330104038_29275767.jpg 2020-03-30 10:41:52
1169701 20200330104145_640780023.jpg 2020-03-30 10:45:35
```
here the app and apm are the mother and father surnames, then i tried these in order to get another column with the whole name
```
names = {}
for i in range(1,df.shape[0]+1):
try:
names[i] = df["nombre"].iloc[i]+' '+df["app"].iloc[i]+' '+df["apm"].iloc[i]
except:
print(df["folio"].iloc[i], df["nombre"].iloc[i],df["app"].iloc[i],df["apm"].iloc[i])
```
but i get these
```
400085 nan nan nan
400631 nan nan nan
401267 nan nan nan
401933 nan nan nan
401942 nan nan nan
402030 nan nan nan
403008 nan nan nan
403010 nan nan nan
403011 nan nan nan
403027 nan nan nan
403384 nan nan nan
403399 nan nan nan
403415 nan nan nan
403430 nan nan nan
404764 nan nan nan
501483 CARLOS ESPINOZA nan
504723 RICARDO JARED LOPEZ ACOSTA nan
506989 JUAN JOSE FLORES OCHOA nan
507376 JOSE DE JESUS VENEGAS nan
.....
```
i tried to use the fillna.('') like this
```
df["app"].fillna('')
df["apm"].fillna('')
df["nombre"].fillna('')
```
but the result is the same, i hope you can help me in order to make the column with the whole name, like name+surname1+surname2
edit: here is my minimal version; the reporte files are (each one) a part of the whole database as shown here,
```
for i in range(1,31):
exec('reporte_%d = pd.read_excel("/home/workstation/Desktop/fotos/Fotos/Detenidos/Reporte Detenidos CER %d.xlsx", encoding="latin1" )'%(i,i))
reportes = [reporte_1,reporte_2,reporte_3,reporte_4,reporte_5,reporte_6,reporte_7,reporte_8,reporte_9,reporte_10,reporte_11,reporte_12,reporte_13,reporte_14,reporte_15,reporte_16,reporte_17,reporte_18,reporte_19,reporte_20,reporte_21,reporte_22,reporte_23,reporte_24,reporte_25,reporte_26,reporte_27,reporte_28,reporte_29,reporte_30]
df = pd.concat(reportes)
```
now when i run
```
df['Full_name'] = [' '.join([y for y in x if pd.notna(y)]) for x in zip(df['nombre'], df['app'], df['apm'])]
```
i get this error TypeError: sequence item 1: expected str instance, int found
|
2020/04/01
|
[
"https://Stackoverflow.com/questions/60976753",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11579387/"
] |
Use [`Object.values`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/values) with [`Array.prototype.some`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/some):
```js
const obj = {
id: '123abc',
carrier_name: 'a',
group_id: 'a',
member_id: 'a',
plan_name: 'a',
}
console.log(!Object.values(obj).some(val => val === ""))
const obj2 = {
id: '123abc',
carrier_name: '',
group_id: 'a',
member_id: 'a',
plan_name: 'a',
}
console.log(!Object.values(obj2).some(val => val === ""))
```
|
You could check with `every` and `Boolean` as callback, **if you have only strings**.
```js
const check = object => Object.values(object).every(Boolean);
console.log(check({ foo: 'bar' })); // true
console.log(check({ foo: '' })); // false
console.log(check({ foo: '', bar: 'baz' })); // false
console.log(check({ foo: '', bar: '' })); // false
```
| 11,693
|
25,449,779
|
I use Google Cloud SDK under Window 7 64bit.
Google Cloud SDK and python install success. and run gcloud.
The error occurs as shown below.
```
C:\Program Files\Google\Cloud SDK>gcloud
Traceback (most recent call last):
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\bin\..\./lib\googlecloudsdk\gcloud\gcloud.py", line 137, in <
module>
_cli = CreateCLI()
File "C:\Program Files\Google\Cloud SDK\google-cloud-sdk\bin\..\./lib\googlecloudsdk\gcloud\gcloud.py", line 98, in Cr
eateCLI
sdk_root = config.Paths().sdk_root
AttributeError: 'Paths' object has no attribute 'sdk_root'
```
Can ask for help? Thanks
|
2014/08/22
|
[
"https://Stackoverflow.com/questions/25449779",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/485569/"
] |
I had the same problem, and this helped me solve it:
Manually remove the directory: C:\Program Files\Google\Cloud SDK
Then rerun: GoogleCloudSDKInstaller.exe
Also make sure that you have a connection to the needed download servers (I was behind a company firewall at first, so the installer didn't download all the files - with no complaint from the installer either).
Then I was OK again.
Source: <https://code.google.com/p/google-cloud-sdk/issues/detail?id=62&thanks=62&ts=1407851956>
|
```
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt update
sudo apt-get install google-cloud-sdk
gcloud init
```
| 11,696
|
18,805,720
|
All I know how to do is type "python foo.py" in dos; the program runs but then exits python back to dos. Is there a way to run foo.py from within python? Or to stay in python after running? I want to do this to help debug, so that I may look at variables used in foo.py
(Thanks from a newbie)
|
2013/09/14
|
[
"https://Stackoverflow.com/questions/18805720",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2779936/"
] |
You can enter the python interpreter by just typing `python`. Then if you run:
```
execfile('foo.py')
```
This will run the program and keep the interpreter open. More details [here](http://docs.python.org/2/library/functions.html#execfile).
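Note that `execfile` is Python 2 only. On Python 3 you can get the same effect by exec'ing the file's contents (the tiny `foo.py` written below is just to make the example self-contained):

```python
# Create a throwaway script just so the example is self-contained.
with open('foo.py', 'w') as f:
    f.write('answer = 42\n')

# Python 3 has no execfile(); read and exec the source instead.
with open('foo.py') as f:
    exec(f.read())

print(answer)  # 42 -- names defined by foo.py are now in scope
```

Alternatively, `python -i foo.py` from the shell runs the script and then stays in the interactive interpreter with all of its variables available, which is exactly the debugging workflow the question asks about.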
|
To stay in Python afterwards you could just type 'python' on the command prompt, then run your code from inside python. That way you'll be able to manipulate the objects (lists, dictionaries, etc) as you wish.
| 11,697
|
27,621,018
|
how to perform
```
echo xyz | ssh [host]
```
(send xyz to host)
with python?
I have tried pexpect
```
pexpect.spawn('echo xyz | ssh [host]')
```
but it's performing
```
echo 'xyz | ssh [host]'
```
maybe other package will be better?
|
2014/12/23
|
[
"https://Stackoverflow.com/questions/27621018",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3419895/"
] |
<http://pexpect.sourceforge.net/pexpect.html#spawn>
Gives an example of running a command with a pipe :
```
shell_cmd = 'ls -l | grep LOG > log_list.txt'
child = pexpect.spawn('/bin/bash', ['-c', shell_cmd])
child.expect(pexpect.EOF)
```
Previous incorrect attempt deleted to make sure no-one is confused by it.
|
You don't need `pexpect` to simulate a simple shell pipeline. The simplest way to emulate the pipeline is the `os.system` function:
```
os.system("echo xyz | ssh [host]")
```
A more Pythonic approach is to use [the `subprocess` module](https://docs.python.org/2/library/subprocess.html):
```
p = subprocess.Popen(["ssh", "host"],
stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write("xyz\n")
output = p.communicate()[0]
```
| 11,699
|
21,946,883
|
I'm writing a function to shift text by 13 spaces. The converted chars need to preserve case, and if the characters aren't letters then they should pass through unshifted. I wrote the following function:
```
def rot13(str):
result = ""
for c in str:
if 65 <= ord(c) <= 96:
result += chr((ord(c) - ord('A') + 13)%26 + ord('A'))
if 97 <= ord(c) <= 122:
result += chr((ord(c) - ord('a') + 13)%26 + ord('a'))
else:
result += c
print result
```
What I have found is that lowercase letters and non-letter characters work fine. However, when the function is applied to uppercase chars the function returns the shifted char FOLLOWED BY the original char. I know there are plenty of solutions to this problem on SO, but this specific error has me wondering what's wrong with my logic or understanding of chars and loops in python. Any help appreciated.
|
2014/02/21
|
[
"https://Stackoverflow.com/questions/21946883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1036605/"
] |
You are missing the "else" statement, so if the first if "fires" (`c` is an uppercase letter) then the "else" from the second if also "fires" (and concatenates the uppercase letter, as `ord(c)` is not between `97` and `122`)
```
def rot13(str):
result = ""
for c in str:
if 65 <= ord(c) <= 96:
result += chr((ord(c) - ord('A') + 13)%26 + ord('A'))
elif 97 <= ord(c) <= 122:
result += chr((ord(c) - ord('a') + 13)%26 + ord('a'))
else:
result += c
print result
```
Also, uppercase characters **end** with `ord('Z')==90`; ASCII characters between `91` and `96` are **not** letters. The function should also **return** the value, not print it (unless it is called `print_rot13`). Your function is also inconsistent - you use `ord('A')` in calculations, but a hard-coded value (`65`) in the if; you should decide on **one** of these.
```
def rot13(str):
a = ord('a')
z = ord('z')
A = ord('A')
Z = ord('Z')
result = ""
for c in str:
symbol = ord(c)
if A <= symbol <= Z:
result += chr((symbol - A + 13)%26 + A)
elif a <= symbol <= z:
result += chr((symbol - a + 13)%26 + a)
else:
            result += c
return result
```
This way it only assumes that lower- and uppercase letters are arranged in consistent blocks, but nothing about their actual `ord` values.
|
This is an (rather contrived) one-liner implementation for the caesar cipher in python:
```
cipher = lambda w, s: ''.join(chr(a + (i - a + s) % 26) if (a := 65) <= (i := ord(c)) <= 90 or (a := 97) <= i <= 122 else c for c in w)
word = 'Hello, beautiful World!'
print(cipher(word, 13)) # positive shift
# Uryyb, ornhgvshy Jbeyq!
word = 'Uryyb, ornhgvshy Jbeyq!'
print(cipher(word, -13)) # -13 shift means decipher a +13 shift
# Hello, beautiful World!
```
It rotates only ascii alpha letters, respects upper/lower case, and handles positive and negative shifts (even shifts bigger that the alphabet).
More details in [this answer](https://stackoverflow.com/a/71553427/2938526).
| 11,700
|
15,054,598
|
So I created my project with a virtual environment and installed pip + distribute. Everything is fine so far, But when I click on the install button to install new packages it displays a beautiful : "Nothing to show".
Here is an image of it :

I have the [default repository](http://pypi.python.org/pypi)
So what did I do wrong? Is this not the right way to install python modules like PIL or third party django apps like django south ?
Edit : I forgot to mention , it's the trial version... Can this be because of it ?
|
2013/02/24
|
[
"https://Stackoverflow.com/questions/15054598",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/970581/"
] |
The problem is caused by recent server-side changes in PyPI, and will be addressed in the PyCharm 2.7.1 update. Please see <http://youtrack.jetbrains.com/issue/PY-8962> to track the status of the issue.
|
If you are behind a proxy (e.g. in a corporate environment), then you may need to configure your proxy settings for PyCharm to show the packages.
These are in Pycharm under:
>
> File -> Settings -> Appearance & Behaviour -> System Settings -> HTTP Proxy
>
>
>
Enter your proxy settings there, e.g. a host name and port number.
If you don't know your proxy settings then this question may be useful: [How to see the proxy settings on windows?](https://stackoverflow.com/questions/22368515/how-to-see-the-proxy-settings-on-windows/30751938#30751938)
| 11,701
|
28,398,240
|
I want to run my python function in console like this:
```
my_function_name
```
at any directory, I tried to follow arajek's answer in this question:
[run a python script in terminal without the python command](https://stackoverflow.com/questions/15587877/run-a-python-script-in-terminal-without-the-python-command)
but I still need to call `my_function_name.py` to make it work. If I call only `my_function_name`, the console will inform me `command not found` . I also tried to add a symbolic link with this answer: [Running a Python Script Like a Built-in Shell Command](https://stackoverflow.com/questions/16752935/running-a-python-script-like-a-built-in-shell-command) but it failed
`sudo ln -s my_function_name.py /home/thovo/test/my_function_name`
`ln: failed to create symbolic link ‘/home/thovo/test/my_function_name/my_function_name.py’: File exists`
|
2015/02/08
|
[
"https://Stackoverflow.com/questions/28398240",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1154698/"
] |
Change the name of the script to no longer have the `.py` extension.
|
Add this shebang to the top of your file: `#!/usr/bin/env python` and remove the file extension.
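Putting both answers together, a sketch (the filename and shebang are illustrative; use whichever interpreter your system provides):

```shell
# Create the script with a shebang and no .py extension.
cat > my_function_name <<'EOF'
#!/usr/bin/env python3
print("hello")
EOF

# Mark it executable, then run it directly.
chmod +x my_function_name
./my_function_name
```

To call it from any directory, move the file into a directory that is already on your `PATH` (e.g. `~/bin` or `/usr/local/bin`); the `File exists` error in the question likely came from symlinking into a path that was already taken.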
| 11,703
|
1,131,582
|
I am using Boost with Visual Studio 2008 and I have put the path to boost directory in configuration for the project in C++/General/"Additional Include Directories" and in Linker/General/"Additional Library Directories". (as it says here: <http://www.boost.org/doc/libs/1_36_0/more/getting_started/windows.html#build-from-the-visual-studio-ide>)
When I build my program, I get an error:
fatal error C1083: Cannot open include file: 'boost/python.hpp': No such file or directory
I have checked if the file exists, and it is on the path.
I would be grateful if anyone can solve this problem.
The boost include path is `C:\Program Files\boost\boost_1_36_0\boost`.
Linker path is `C:\Program Files\boost\boost_1_36_0\lib`.
The file `python.hpp` exists on the include path.
|
2009/07/15
|
[
"https://Stackoverflow.com/questions/1131582",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Where is the file located, and which include path did you specify? (And how is the file `#include`'d)
There's a mismatch between some of these, but it's impossible to say what's wrong when you haven't shown what you actually did.
**Edit**:
Given the paths you mentioned in comments, the problem is that they don't add up.
If the include path is `C:\Program Files\boost\boost_1_36_0\boost`, and you then try to include 'boost/python.hpp", the compiler searches for this file in the include path, which means it looks for `C:\Program Files\boost\boost_1_36_0\boost\boost\python.hpp`, which doesn't exist.
The include path should be set to `C:\Program Files\boost\boost_1_36_0` instead.
|
How do you include it? You should write something like this:
```
#include <boost/python.hpp>
```
Note that the `Additional Include Directories` settings differ between the `Release` and `Debug` configurations. You should make them the same.
If boost is placed in `C:\Program Files\boost\boost_1_36_0\`, you should set the path to `C:\Program Files\boost\boost_1_36_0\`, without `boost` at the end.
| 11,704
|
61,713,057
|
I'm working on [google colaboratory](https://colab.research.google.com/) and I have to do some processing on files based on their extensions, like:
```
!find ./ -type f -name "*.djvu" -exec file '{}' ';'
```
and I expect an output like:
```
./file.djvu: DjVu multiple page document
```
but when I try to mix bash and Python to use a list of extensions:
```
for f in file_types:
  !echo "*.{f}"
  !find ./ -type f -name "*.{f}" -exec file '{}' ';'
  !echo "*.$f"
  !find ./ -type f -name "*.$f" -exec file '{}' ';'
```
I get only the output of the `echo` commands but nothing for the files.
```
*.djvu
*.djvu
*.jpg
*.jpg
*.pdf
*.pdf
```
If I remove the `exec` part it actually finds the files, so I can't figure out why the `find` command combined with `exec` fails.
If needed I can provide more info/examples.
|
2020/05/10
|
[
"https://Stackoverflow.com/questions/61713057",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8069500/"
] |
I found an ugly workaround that passes through a file: first I write the array to a file in Python:
```
with open('/content/file_types.txt', 'w') as f:
    for ft in file_types:
        f.write(ft + '\n')
```
and then I read and use it from bash in another cell:
```
%%bash
filename=/content/file_types.txt
while IFS= read -r f;
do
    find ./ -name "*.$f" -exec file {} ';' ;
done < $filename
```
This way I don't mix bash and Python in the same cell, as suggested in the comment of another [answer](https://stackoverflow.com/a/61713349/8069500).
I hope to find a better solution that maybe uses some trick I'm not aware of.
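As an aside, the extension filtering itself can be done without mixing bash and Python at all — a sketch using only the standard library (the `file_types` list is assumed from the question; inspecting file contents like the `file` command does would still need an external tool):

```python
from pathlib import Path

file_types = ["djvu", "jpg", "pdf"]  # assumed from the question

def find_by_ext(root, exts):
    """Return all files under root whose extension is in exts."""
    return sorted(p for p in Path(root).rglob("*")
                  if p.is_file() and p.suffix.lstrip(".") in exts)
```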
|
This works for me
```
declare -a my_array=("pdf" "py")
for i in ${my_array[*]}; do find ./ -type f -name "*.$i" -exec file '{}' ';'; done;
```
| 11,705
|
65,362,747
|
I'd like to slice a tensor (multi-dimensional array) using Rust's [ndarray](https://docs.rs/ndarray) library, but the catch is that the tensor is dynamically shaped and the slice is stored in a user provided variable.
If I knew the dimensionality up front, I expect I could simply do the following, where `idx` is the user provided index and `x` is a 4 dimensional tensor:
```rust
// should give a 1D tensor as a view on the last axis at index `idx`
x.slice(s![idx[0], idx[1], idx[2], ..])
```
BUT because I don't know the dimensionality up front, I can't manually unpack `idx` that way and feed it to the slice macro `s!`.
In python I might do it [this way](https://numpy.org/doc/stable/user/basics.indexing.html#dealing-with-variable-numbers-of-indices-within-programs), where `idx` was a user provided tuple:
```py
# if `len(idx)` was 2 but `x.ndim` was 3, we could get a 1d tensor, of length `x.shape[-1]`
x[idx]
```
What's the proper way to do this in Rust? The [ndarray for numpy users](https://docs.rs/ndarray/0.14.0/ndarray/doc/ndarray_for_numpy_users/index.html#indexing-and-slicing) guide only shows how to do it with scalar values given to `s!`.
|
2020/12/18
|
[
"https://Stackoverflow.com/questions/65362747",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7681811/"
] |
You want to get the **"Bin On Hand"** fields:
You can get Location, Bin Number and On Hand from this join in your Saved Search. When done, your results fields should be:
* Bin On Hand:Location
* Bin On Hand:Bin Number
* Bin On Hand:On Hand
Todd Stafford
|
From Admin role: List > Search > Saved Searches > New > Inventory Balance
In the results menu:
1. Item
2. Inventory Location
3. Bin
4. Quantity On Hand.
| 11,706
|
67,157,938
|
Trying to count the number of times a word appears in a file using Python file handling. For example, I was searching for 'believer' in the lyrics of the song Believer to see how many times 'believer' occurs. It appears 18 times but my program is giving 12. What conditions am I missing?
```
def no_words_function():
    f = open("believer.txt", "r")
    data = f.read()
    cnt = 0
    ws = input("Enter word to find: ")
    word = data.split()
    for w in word:
        if w in ws:
            cnt += 1
    f.close()
    print(ws, "found", cnt, "times in the file.")

no_words_function()
```
|
2021/04/19
|
[
"https://Stackoverflow.com/questions/67157938",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7762964/"
] |
I would suggest adding object properties conditionally with ES6 syntax like this.
```js
const normalizeData = ({id, name = ""}) => {
  const condition = !!name; // Equivalent to: name !== undefined && name !== "";
  return {id, ...(condition && { name })};
}

console.log(normalizeData({id: 123, name: ""}));
console.log(normalizeData({id: 123, name: undefined}));
console.log(normalizeData({id: 123, name: "Phong_Nguyen"}));
```
**Explanation:**
* Using [`Default function parameters`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Default_parameters) in `name = ""` to avoid `undefined` when the `name` property is not included in the input object.
* Using [`Destructuring assignment`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment) at line code `...(condition && { name })`
|
You can create the object with the id first and then add a name afterwards:
```
const obj = {id: 123};
if(name) obj.name = name;
axios.post('saveUser', obj);
```
| 11,707
|
61,431,924
|
Here is my problem: I wrote a Python bot that does plenty of stuff, including printing colorful text for better understanding. I'm using the colorama package because it prints color even in the Windows command prompt.
Here is how I use colorama, which works on both Unix and Windows using Python 3.8:
```
from colorama import Fore, init
init()
print(Fore.RED + 'some red text')
```
Now my goal is to turn my script into a .exe so it can run on Windows without any install. Problem is, using `pyinstaller.exe --onefile script.py`, `pyinstaller.exe --onedir script.py`, or whatever, I can't make it work. PyInstaller builds an EXE successfully with zero error messages, but whenever I launch the exe I get:
`ModuleNotFoundError: No module named 'colorama'`
and it's the only module missing. I've searched through the entire internet and didn't manage to fix it by myself. You guys are my last hope! Please help me.
|
2020/04/25
|
[
"https://Stackoverflow.com/questions/61431924",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13407332/"
] |
Try:
```
pyinstaller.exe --onefile --hidden-import colorama script.py
```
This (`--hidden-import colorama`) should ensure that PyInstaller builds the application with colorama included.
|
You should run `pip install colorama` (or `pipenv install colorama`) first.
| 11,708
|
14,260,714
|
PyDev is reporting import errors which don't exist. The initial symptom was a fake "unresolved import" error, which was fixed by some combination of:
* Cleaning the Project
* Re-indexing the project (remove interpreter, add again)
* Restarting Eclipse
* Burning incense to the Python deities
Now the error is "Undefined variable from import"--it can't seem to find pymssql.connect.
This IS NOT a PYTHONPATH problem. I can access the module just fine, and the code in the file with the (alleged) error runs fine---it has unit tests and production code calling it.
The error is somewhere in PyDev: I added a new module to my PyDev project, and the error only occurs in the new module. I've tried all of the above.
---
So, I was planning on posting this code somewhere else to solicit some comments about the design, and I was asked in the comments to post code. (Inspired by: [Database connection wrapper](https://stackoverflow.com/questions/9367857/database-connection-wrapper) and Clint Miller's answer to this question: [How do I correctly clean up a Python object?](https://stackoverflow.com/questions/865115/how-do-i-correctly-clean-up-a-python-object)). The import error happens at line 69 (self.connection = pymssql.connect...). Not sure what good this does in answering the question, but...
```
import pymssql
from util.require_type import require_type
class Connections(object):
    @require_type('host', str)
    @require_type('user', str)
    @require_type('password', str)
    @require_type('database', str)
    @require_type('as_dict', bool)
    def __init__(self, host, user, password, database, as_dict=True):
        self.host = host
        self.user = user
        self.password = password
        self.db = database
        self.as_dict = as_dict

    @staticmethod
    def server1(db):
        return Connections('', '', '', '')

    @staticmethod
    def server2(db):
        pass

    @staticmethod
    def server3(db):
        pass


class DBConnectionSource(object):
    # Usage:
    # with DBConnectionSource(Connections.server1(db='MyDB')) as dbConn:
    #     results = dbConn.execute(sqlStatement)

    @require_type('connection_parameters', Connections)
    def __init__(self, connection_parameters=Connections.server1('MyDB')):
        self.host = connection_parameters.host
        self.user = connection_parameters.user
        self.password = connection_parameters.password
        self.db = connection_parameters.db
        self.as_dict = connection_parameters.as_dict
        self.connection = None

    def __enter__(self):
        parent = self

        class DBConnection(object):
            def connect(self):
                self.connection = pymssql.connect(host=parent.host,
                                                  user=parent.user,
                                                  password=parent.password,
                                                  database=parent.db,
                                                  as_dict=parent.as_dict)

            def execute(self, sqlString, arguments={}):
                if self.connection is None:
                    raise Exception('DB Connection not defined')
                crsr = self.connection.cursor()
                crsr.execute(sqlString, arguments)
                return list(crsr)

            def cleanup(self):
                if self.connection:
                    self.connection.close()

        self.connection = DBConnection()
        self.connection.connect()
        return self.connection

    def __exit__(self, typ, value, traceback):
        self.connection.cleanup()
```
|
2013/01/10
|
[
"https://Stackoverflow.com/questions/14260714",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/963250/"
] |
Try `Ctrl+1` at the line with the error and add a comment saying that you are expecting that import. This should resolve the PyDev error, since PyDev does static code analysis, not runtime analysis.
|
I added # @UndefinedVariable to the end of the line throwing the error.
This isn't much of a permanent fix, but at least it got rid of the red from my screen for the time being. If there is a better long term fix, I would be happy to hear it.
| 11,709
|
38,542,675
|
I'd appreciate some help with the diagnosis.
The error messages either point to the possibility that this package cannot be installed on a 64 bit machine or the wrong compiler is being selected.
**Edit:**
The [requirements](http://vmprof.readthedocs.io/en/latest/vmprof.html#requirements) for `vmprof` state that it will only run on x86 (32 bit). It's clear that the C compiler ought to be instructed to compile the source for 32 bit. Does that point to a shortfall in the vmprof packaging which should be raised as a vmprof issue?
**End edit.**
Either way, I don't know how to solve that problem. I am running `pip install vmprof` from the command line.
warning C4311: [This warning detects 64-bit pointer truncation issues.](http://msdn.microsoft.com/en-us/library/4t91x2k5.aspx)
warning C4312: [This warning detects an attempt to assign a 32-bit value to a 64-bit pointer type](http://msdn.microsoft.com/en-us/library/h97f4b9y.aspx)
These two warnings lead me to wonder if pip cannot install vmprof into my 64-bit environment. However, if the presented error output is ordered by time, then it appears that Visual Studio was loaded after these warnings were generated. Could this point to the wrong compiler being used? I do have a large set of Microsoft Visual C++ YYYY Redistributables from 2005 onwards for 32 and 64 bits. (I'm reluctant to test the wrong-compiler theory by uninstalling older versions in case that breaks something.)
pip says it tried to load Microsoft Visual Studio v14.0, which I believe to be the correct version for Python 3.5.
There are [other SO questions](https://stackoverflow.com/questions/21650763/pip-installation-error-command-python-setup-py-egg-info-failed-with-error-code) relating to the warning, "manifest\_maker: standard file '-c' not found"
My setuptools is fully up to date. (v 25.0.0). vmprof is not available as a prebuilt binary from the link suggested. In any case, all of the binaries there are unsupported.
Other SO questions on this warning pertain to Unix.
warning LNK4197: [export 'PyInit\_\_vmprof' specified multiple times; using first specification.](http://msdn.microsoft.com/en-us/library/dt1zk962.aspx)
This is the point at which the build finally seemed to go off the rails. I'm guessing the multiple specification of "export 'PyInit\_\_vmprof'" is inside a command file supplied as part of vmprof.
error LNK2001: [unresolved external symbol \_PyThreadState\_Current build\lib.win-amd64-3.5\_vmprof.cp35-win\_amd64.pyd : fatal error LNK1120: 1 unresolved externals.](http://msdn.microsoft.com/en-us/library/f6xx1b1z.aspx)
And here it crashed with a link error. The full pip install output follows.
```
Installing collected packages: requests, vmprof
Running setup.py install for vmprof error
Complete output from command d:\python35\python.exe -u -c "import setuptools, tokenize;__file__='D:\\Users\\Stephen\
\AppData\\Local\\Temp\\pip-build-dpjo8j82\\vmprof\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read
().replace('\r\n', '\n'), __file__, 'exec'))" install --record D:\Users\Stephen\AppData\Local\Temp\pip-kfygn2le-record\i
nstall-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\tests
copying tests\cpuburn.py -> build\lib.win-amd64-3.5\tests
copying tests\test_config.py -> build\lib.win-amd64-3.5\tests
copying tests\test_reader.py -> build\lib.win-amd64-3.5\tests
copying tests\test_run.py -> build\lib.win-amd64-3.5\tests
copying tests\test_stats.py -> build\lib.win-amd64-3.5\tests
copying tests\__init__.py -> build\lib.win-amd64-3.5\tests
creating build\lib.win-amd64-3.5\vmprof
copying vmprof\binary.py -> build\lib.win-amd64-3.5\vmprof
copying vmprof\cli.py -> build\lib.win-amd64-3.5\vmprof
copying vmprof\profiler.py -> build\lib.win-amd64-3.5\vmprof
copying vmprof\reader.py -> build\lib.win-amd64-3.5\vmprof
copying vmprof\show.py -> build\lib.win-amd64-3.5\vmprof
copying vmprof\stats.py -> build\lib.win-amd64-3.5\vmprof
copying vmprof\upload.py -> build\lib.win-amd64-3.5\vmprof
copying vmprof\vmprofdemo.py -> build\lib.win-amd64-3.5\vmprof
copying vmprof\__init__.py -> build\lib.win-amd64-3.5\vmprof
copying vmprof\__main__.py -> build\lib.win-amd64-3.5\vmprof
creating build\lib.win-amd64-3.5\vmprof\log
copying vmprof\log\constants.py -> build\lib.win-amd64-3.5\vmprof\log
copying vmprof\log\marks.py -> build\lib.win-amd64-3.5\vmprof\log
copying vmprof\log\merge_point.py -> build\lib.win-amd64-3.5\vmprof\log
copying vmprof\log\objects.py -> build\lib.win-amd64-3.5\vmprof\log
copying vmprof\log\parser.py -> build\lib.win-amd64-3.5\vmprof\log
copying vmprof\log\__init__.py -> build\lib.win-amd64-3.5\vmprof\log
running egg_info
writing entry points to vmprof.egg-info\entry_points.txt
writing requirements to vmprof.egg-info\requires.txt
writing dependency_links to vmprof.egg-info\dependency_links.txt
writing top-level names to vmprof.egg-info\top_level.txt
writing vmprof.egg-info\PKG-INFO
warning: manifest_maker: standard file '-c' not found
reading manifest file 'vmprof.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'vmprof.egg-info\SOURCES.txt'
running build_ext
building '_vmprof' extension
creating build\temp.win-amd64-3.5
creating build\temp.win-amd64-3.5\Release
creating build\temp.win-amd64-3.5\Release\src
D:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Id:
\python35\include -Id:\python35\include "-ID:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\INCLUDE" "-ID:\Program
Files (x86)\Windows Kits\10\include\10.0.10240.0\ucrt" "-ID:\Program Files (x86)\Windows Kits\8.1\include\shared" "-ID:
\Program Files (x86)\Windows Kits\8.1\include\um" "-ID:\Program Files (x86)\Windows Kits\8.1\include\winrt" /Tcsrc/_vmpr
of.c /Fobuild\temp.win-amd64-3.5\Release\src/_vmprof.obj
_vmprof.c
d:\users\stephen\appdata\local\temp\pip-build-dpjo8j82\vmprof\src\vmprof_common.h(67): warning C4311: 'type cast': p
ointer truncation from 'PyCodeObject *' to 'unsigned long'
d:\users\stephen\appdata\local\temp\pip-build-dpjo8j82\vmprof\src\vmprof_common.h(67): warning C4312: 'type cast': c
onversion from 'unsigned long' to 'void *' of greater size
d:\users\stephen\appdata\local\temp\pip-build-dpjo8j82\vmprof\src\vmprof_common.h(96): warning C4267: '=': conversio
n from 'size_t' to 'char', possible loss of data
d:\users\stephen\appdata\local\temp\pip-build-dpjo8j82\vmprof\src\vmprof_main_win32.h(31): warning C4267: 'function'
: conversion from 'size_t' to 'unsigned int', possible loss of data
d:\users\stephen\appdata\local\temp\pip-build-dpjo8j82\vmprof\src\vmprof_main_win32.h(48): warning C4267: 'initializ
ing': conversion from 'size_t' to 'int', possible loss of data
d:\users\stephen\appdata\local\temp\pip-build-dpjo8j82\vmprof\src\vmprof_main_win32.h(72): warning C4312: 'type cast
': conversion from 'DWORD' to 'void *' of greater size
src/_vmprof.c(42): warning C4311: 'type cast': pointer truncation from 'PyCodeObject *' to 'unsigned long'
src/_vmprof.c(42): warning C4312: 'type cast': conversion from 'unsigned long' to 'void *' of greater size
src/_vmprof.c(69): warning C4311: 'type cast': pointer truncation from 'PyCodeObject *' to 'unsigned long'
D:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\BIN\x86_amd64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MA
NIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:d:\python35\libs /LIBPATH:d:\python35\PCbuild\amd64 "/LIBPATH:D:\Program File
s (x86)\Microsoft Visual Studio 14.0\VC\LIB\amd64" "/LIBPATH:D:\Program Files (x86)\Windows Kits\10\lib\10.0.10240.0\ucr
t\x64" "/LIBPATH:D:\Program Files (x86)\Windows Kits\8.1\lib\winv6.3\um\x64" /EXPORT:PyInit__vmprof build\temp.win-amd64
-3.5\Release\src/_vmprof.obj /OUT:build\lib.win-amd64-3.5\_vmprof.cp35-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.5\Re
lease\src\_vmprof.cp35-win_amd64.lib
    _vmprof.obj : warning LNK4197: export 'PyInit__vmprof' specified multiple times; using first specification
Creating library build\temp.win-amd64-3.5\Release\src\_vmprof.cp35-win_amd64.lib and object build\temp.win-amd64-
3.5\Release\src\_vmprof.cp35-win_amd64.exp
_vmprof.obj : error LNK2001: unresolved external symbol _PyThreadState_Current
build\lib.win-amd64-3.5\_vmprof.cp35-win_amd64.pyd : fatal error LNK1120: 1 unresolved externals
error: command 'D:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\link.exe' failed with exi
t status 1120
```
|
2016/07/23
|
[
"https://Stackoverflow.com/questions/38542675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2409868/"
] |
This is an issue on [vmprof's github repository](http://github.com/vmprof/vmprof-python/issues/92), still unfixed as of 2016-08-23.
|
Finally, `vmprof` from v0.4 officially supports 64-bit Windows.
See the closed [GitHub issue](https://github.com/vmprof/vmprof-python/issues/85).
| 11,711
|
27,032,218
|
I am using emacs-for-python provided by gabrielelanaro at this [link](https://github.com/gabrielelanaro/emacs-for-python).
Indentation doesn't seem to be working for me at all.
It doesn't happen automatically when I create a class, function, or any other block of code that requires automatic indentation (if, for, etc.) and press Enter or `Ctrl + j`. Instead Emacs says "Arithmetic Error".
It doesn't happen when I press `Tab` anywhere in a .py file. Again, every `Tab` press causes "Arithmetic Error".
Also, when I manually indent code using spaces, I can't erase those spaces! Backspacing these indents also causes "Arithmetic Error".
This problem surfaces when I use the regular `Python AC` mode as well.
emacs : GNU Emacs 24.3.1 (x86\_64-pc-linux-gnu, GTK+ Version 3.10.7)
of 2014-03-07 on lamiak, modified by Debian
|
2014/11/20
|
[
"https://Stackoverflow.com/questions/27032218",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2696086/"
] |
Check the value of `python-indent-offset`. If it is 0, change it `M-x set-variable RET python-indent-offset RET 4 RET`.
Emacs tries to guess the offset used in a Python file when opening it. It might get confused and set that variable to 0 for some badly-formatted Python file. If this is indeed the problem, please do file a bug report using `M-x report-emacs-bug` and the text of the Python file so that the auto-detection can be fixed.
|
Can you comment out the lines related to auto-complete in your init.el?
```
; (add-to-list 'load-path "~/.emacs.d/auto-complete-1.3.1")
; (require 'auto-complete)
; (add-to-list 'ac-dictionary-directories "~/.emacs.d/ac-dict")
; (require 'auto-complete-config)
; (ac-config-default)
; (global-auto-complete-mode t)
```
| 11,712
|
69,895,751
|
I try to start my code, but the process stops at the cv2.namedWindow call (without any errors).
Do you have any suggestions as to why that could be?
```
import cv2
image_cv2 = cv2.imread('/home/spartak/PycharmProjects/python_base/lesson_016/python_snippets/external_data/girl.jpg')
def viewImage(image, name_of_window):
    print('step_1')
    cv2.namedWindow(name_of_window, cv2.WINDOW_NORMAL)
    print('step_2')
    cv2.imshow(name_of_window, image)
    print('step_3')
    cv2.waitKey(0)
    print('step_4')
    cv2.destroyAllWindows()

cropped = image_cv2
viewImage(cropped, 'Cropped version')
```
P.S.:
I also erased Ubuntu and installed Fedora, and tried the program in VS Code instead of PyCharm, but nothing changed.
I moved the picture (girl.jpg) to the same directory as the Python file.
The program still stops after step_1 and waits for something.

|
2021/11/09
|
[
"https://Stackoverflow.com/questions/69895751",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14779400/"
] |
I found out the problem:
I started this code in a virtual environment.
Apparently, in a virtual environment on Ubuntu/Fedora, opencv has restrictions.

|
The code completes all 4 steps for me, so I think there is a problem with the image path you used.
[](https://i.stack.imgur.com/zrqCV.png)
The function cv2.namedWindow creates a window that can be used as a placeholder for images and trackbars. Created windows are referred to by their names; if a window with the same name already exists, the function does nothing.
| 11,715
|
41,487,605
|
I am using DBSCAN from sklearn in python to cluster some data points. I am using a precomputed distance matrix to cluster the points.
```
import sklearn.cluster as cl
C = cl.DBSCAN(eps = 2, metric = 'precomputed', min_samples =2)
db = C.fit(Dist_Matrix)
```
Dist\_Matrix is the precomputed distance matrix I am using. Each time I run my code, I get different cluster labels for the data points. The number of clusters also varies.
For example, in the first run, the labels are
```
[ 2 3 3 0 3 0 2 2 2 4 2 -1 0 0 0 1 4 0 1 0 1 3 0 3 0
0 1 -1 0 3 1 3 0 0 2 0 2 0 -1 0 0 3 0 0 0 1 0 1 0 0]
```
in another run, it is like
```
[ 0 2 2 1 2 1 0 0 0 3 0 -1 1 1 1 0 3 1 0 1 0 2 1 2 1
1 0 -1 1 2 0 2 1 1 0 1 0 1 -1 1 1 2 1 1 1 0 1 0 1 1]
```
How can I resolve this? Please help
|
2017/01/05
|
[
"https://Stackoverflow.com/questions/41487605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7342743/"
] |
Clustering will *usually* not assign the same labels, because the label itself is *meaningless*. The only valuable information is which objects go *together*.
As for sklearn: if you use an old version, it will (unnecessarily) randomly shuffle the data, so it's not surprising you get a random permutation of the labels.
Usually, if you require stable labels, you are doing something wrong!
But if you really know you need that, implement a simple logic: sort clusters by their smallest object, and relabel them accordingly. I.e. the first object's cluster is cluster 0. The second object's cluster (unless it is the same) is cluster 1, and so forth.
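A minimal sketch of that relabeling logic (assuming integer labels with `-1` for noise, as sklearn produces — relabeling clusters by first appearance is the same as sorting them by their smallest member index):

```python
import numpy as np

def stable_labels(labels):
    """Relabel clusters in order of first appearance; keep -1 as noise."""
    labels = np.asarray(labels)
    out = np.full_like(labels, -1)
    mapping = {}
    for i, lab in enumerate(labels):
        if lab != -1:
            # setdefault assigns the next free id the first time lab is seen
            out[i] = mapping.setdefault(lab, len(mapping))
    return out
```

Applied to both label arrays in the question, this yields identical output, which makes the clusterings comparable across runs.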
|
You can use a custom function to normalize the cluster labels.
```
def normalize_cluster_labels(labels):
    min_value = min(labels)
    if min_value < 0:
        # shift all labels up so the smallest (e.g. the -1 noise label)
        # becomes 0; assumes labels supports elementwise addition (numpy array)
        labels = labels + abs(min_value)
    return labels
```
| 11,717
|
37,241,819
|
I have read a lot of other posts here on stackoverflow and google but I could not find a solution.
It all started when I changed the model from a CharField to a ForeignKey.
The error I recieve is:
```
Operations to perform:
Synchronize unmigrated apps: gis, staticfiles, crispy_forms, geoposition, messages
Apply all migrations: venues, images, amenities, cities_light, registration, auth, admin, sites, sessions, contenttypes, easy_thumbnails, newsletter
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying venues.0016_auto_20160514_2141...Traceback (most recent call last):
File "/Users/iam-tony/.envs/venuepark/lib/python3.4/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.IntegrityError: column "venue_city" contains null values
```
My model is as follows:
```
class Venue(models.Model):
    venue_city = models.ForeignKey(City, null=True)
    venue_country = models.ForeignKey(Country, null=True)
```
venue\_country did not exist before so that migration happened successfully. But venue\_city was a CharField.
I made some changes to my migration file so that it would execute the SQL as follows:
```
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('venues', '0011_venue_map_activation'),
    ]

    migrations.RunSQL(''' ALTER TABLE venues_venue ALTER venue_city TYPE integer USING venue_city::integer '''),
    migrations.RunSQL(''' ALTER TABLE venues_venue ALTER venue_city RENAME COLUMN venue_city TO venue_city_id '''),
    migrations.RunSQL(''' ALTER TABLE venues_venue ADD CONSTRAINT venues_venus_somefk FOREIGN KEY (venue_city_id) REFERENCES cities_light (id) DEFERRABLE INITIALLY DEFERRED'''),
```
Thanks in advance!
UPDATE: my new migration file:
```
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):

    dependencies = [
        ('cities_light', '0006_compensate_for_0003_bytestring_bug'),
        ('venues', '0024_remove_venue_venue_city'),
    ]

    operations = [
        migrations.AddField(
            model_name='venue',
            name='venue_city',
            field=models.ForeignKey(null=True, to='cities_light.City'),
        ),
    ]
```
|
2016/05/15
|
[
"https://Stackoverflow.com/questions/37241819",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1067213/"
] |
I had a similar problem and resolved it by removing the previous migration files. I have no technical explanation for it.
|
I solved it as follows:
1. First delete the last migration that causes the problem
2. Then change the field to something like `venue_city = models.CharField(max_length=250, blank=True, null=True)`
3. Finally run the `makemigrations` and `migrate` commands
| 11,718
|
21,758,560
|
I've been struggling with an algorithm tied to comparisons with 3d triangle vectors. Unfortunately it's very slow in places, and I've gone back and forth on different methods to try and improve it. One thing I'm struggling with is speeding up a distance calculation.
I have two groups of triangles, each broken down to three points, each of which has a 3d float vector (xyz). The calculations I'm using are:
```
diffverts = numpy.zeros( ( ntris*3, ntesttris*3, 3 ), dtype = 'float32')
diffverts += triverts.reshape(ntris*3, 1, 3 )
diffverts -= ttriverts.reshape(1, ntesttris*3, 3 )
vertdist = ( diffverts[:,:,0]**2 + diffverts[:,:,1]**2 + diffverts[:,:,2]**2 ) ** 0.5
```
this calculation is faster than :
```
diffverts = triverts.reshape(ntris*3, 1, 3 ) - ttriverts.reshape(1, ntesttris*3, 3 )
vertdist = ( diffverts[:,:,0]**2 + diffverts[:,:,1]**2 + diffverts[:,:,2]**2 ) ** 0.5
```
Is there a faster method to populate the diff vert part (which takes longest) and/or the distance part, which is also quite time consuming? This code is called a lot of times due to the number of groups to test. Also, trying to do it just on indexes to the verts causes me other issues with further calculations when trying to get back to some boolean tests (i.e. this is only one of a set of calculations, so keeping it at the tri point level works best).
I'm using numpy and python
|
2014/02/13
|
[
"https://Stackoverflow.com/questions/21758560",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1942439/"
] |
The problem is that brute-force testing of all triangles against each other takes quadratic time. It is better to use a data structure which is specialized for such computations. Luckily, scipy contains one.
Take a look at scipy.spatial.cKDTree. The help should be self-explanatory.
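A short sketch of what that might look like here (array shapes and the radius are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
triverts = rng.random((30, 3))    # ntris*3 vertices, one row per vertex
ttriverts = rng.random((45, 3))   # ntesttris*3 vertices

tree = cKDTree(ttriverts)
# distance from every triangle vertex to its nearest test vertex,
# without building the full all-pairs distance matrix
dists, nearest = tree.query(triverts, k=1)
# or: indices of all test vertices within a radius of each vertex
close = tree.query_ball_point(triverts, r=0.1)
```

Each `query` costs roughly O(log n) instead of scanning every pair, which is where the speedup over the all-pairs approach comes from.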
|
I think diffverts is taking up enough memory to cause cache misses. Unfortunately while this solution is very elegant, you're probably better off computing the whole distance in one go, to avoid having to save an n\*m\*3 array of intermediate values. As ugly as it is, I would just do nested for loops.
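If the full all-pairs distance matrix really is needed, "one go" can also be done vectorized, without materializing the `(n, m, 3)` intermediate — a sketch using the identity ||a-b||² = ||a||² + ||b||² - 2a·b (shapes are illustrative):

```python
import numpy as np

a = np.random.rand(30, 3)   # triverts reshaped to (ntris*3, 3)
b = np.random.rand(45, 3)   # ttriverts reshaped to (ntesttris*3, 3)

# squared norms per row, combined via broadcasting; clip tiny negatives
# caused by floating-point cancellation before the sqrt
sq = (np.einsum('ij,ij->i', a, a)[:, None]
      + np.einsum('ij,ij->i', b, b)[None, :]
      - 2.0 * (a @ b.T))
vertdist = np.sqrt(np.maximum(sq, 0.0))
```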
| 11,728
|
15,415,093
|
For a system I am developing I need to programmatically go to a specific page. Fill out one field in the form (I know the id and name of the input element), submit it and store the results.
I have seen a few different Perl, Python, and Java classes that do this. However, I would like to do this using PHP and haven't found anything as of yet.
I do have permission to do this from the site I am getting the information from as well.
Any help is appreciated
|
2013/03/14
|
[
"https://Stackoverflow.com/questions/15415093",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1364791/"
] |
Take a look at David Walsh's simple explanation.
<http://davidwalsh.name/curl-post>
You can easily store the response (in this example, $result) in your database or logfile.
|
Usually PHP crawlers/scrapers use CURL - <http://php.net/manual/en/book.curl.php>.
It allows you to make a query from the server where PHP runs and get response from the website that you need to crawl. It returns response data in plain format and parsing it is up to you. You can manually check what does the form submit when you do it manually, and do the same thing via curl.
| 11,729
|
50,991,402
|
I'm trying to create new tables in a new app added to my project. `makemigrations` worked great but `migrate` is not working. Here are my models:
**blog/models.py**
```
from django.db import models
# Create your models here.
from fostania_web_app.models import UserModel


class Tag(models.Model):
    tag_name = models.CharField(max_length=250)

    def __str__(self):
        return self.tag_name


class BlogPost(models.Model):
    post_title = models.CharField(max_length=250)
    post_message = models.CharField(max_length=2000)
    post_author = models.ForeignKey(UserModel, on_delete=models.PROTECT)
    post_image = models.ImageField(upload_to='documents/%Y/%m/%d', null=False, blank=False)
    post_tag = models.ForeignKey(Tag, on_delete=models.PROTECT)
    post_created_at = models.DateTimeField(auto_now=True)
```
when i try to do `python manage.py migrate` i get this error
`invalid literal for int() with base 10: '??????? ???????'`
**UserModel** is created in another app in the same project that is why i used the statement `from fostania_web_app.models import UserModel`
**fostania\_web\_app/models.py**
```
class UserModelManager(BaseUserManager):
    def create_user(self, email, password, pseudo):
        user = self.model()
        user.name = name
        user.email = self.normalize_email(email=email)
        user.set_password(password)
        user.save()
        return user

    def create_superuser(self, email, password):
        '''
        Used for: python manage.py createsuperuser
        '''
        user = self.model()
        user.name = 'admin-yeah'
        user.email = self.normalize_email(email=email)
        user.set_password(password)
        user.is_staff = True
        user.is_superuser = True
        user.save()
        return user


class UserModel(AbstractBaseUser, PermissionsMixin):
    ## Personnal fields.
    email = models.EmailField(max_length=254, unique=True)
    name = models.CharField(max_length=16)
    ## [...]
    ## Django manage fields.
    date_joined = models.DateTimeField(auto_now_add=True)
    is_active = models.BooleanField(default=True)
    is_staff = models.BooleanField(default=False)

    USERNAME_FIELD = 'email'
    REQUIRED_FIELD = ['email', 'name']

    objects = UserModelManager()

    def __str__(self):
        return self.email

    def get_short_name(self):
        return self.name[:2].upper()

    def get_full_name(self):
        return self.name
```
and on my **setting.py** files i have this :
```
#custom_user
AUTH_USER_MODEL='fostania_web_app.UserModel'
```
and here is the full traceback:
```
Operations to perform:
Apply all migrations: admin, auth, blog, contenttypes, fostania_web
ons, sites, social_django
Running migrations:
Applying blog.0001_initial...Traceback (most recent call last):
File "manage.py", line 15, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\core\management\__init__.py", line 371, in execute_from_comm
utility.execute()
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\core\management\__init__.py", line 365, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\core\management\base.py", line 288, in run_from_argv
self.execute(*args, **cmd_options)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\core\management\base.py", line 335, in execute
output = self.handle(*args, **options)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\core\management\commands\migrate.py", line 200, in handle
fake_initial=fake_initial,
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\migrations\executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=f
nitial=fake_initial)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\migrations\executor.py", line 147, in _migrate_all_forwar
state = self.apply_migration(state, migration, fake=fake, fake_in
initial)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\migrations\executor.py", line 244, in apply_migration
state = migration.apply(state, schema_editor)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\migrations\migration.py", line 122, in apply
operation.database_forwards(self.app_label, schema_editor, old_st
t_state)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\migrations\operations\fields.py", line 84, in database_fo
field,
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\backends\sqlite3\schema.py", line 306, in add_field
self._remake_table(model, create_field=field)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\backends\sqlite3\schema.py", line 178, in _remake_table
self.effective_default(create_field)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\backends\base\schema.py", line 240, in effective_default
default = field.get_db_prep_save(default, self.connection)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\models\fields\related.py", line 936, in get_db_prep_save
return self.target_field.get_db_prep_save(value, connection=conne
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\models\fields\__init__.py", line 767, in get_db_prep_save
return self.get_db_prep_value(value, connection=connection, prepa
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\models\fields\__init__.py", line 939, in get_db_prep_valu
value = self.get_prep_value(value)
File "C:\Users\LiTo\AppData\Local\Programs\Python\Python36-32\lib\s
s\django\db\models\fields\__init__.py", line 947, in get_prep_value
return int(value)
ValueError: invalid literal for int() with base 10: '??? ??? ?????'
```
|
2018/06/22
|
[
"https://Stackoverflow.com/questions/50991402",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9494140/"
] |
Error message: `ValueError: invalid literal for int() with base 10: '??? ??? ?????'`
As the exception says, `'??? ??? ?????'` does not qualify as an int.
Check the `blog.0001_initial` migration for `'??? ??? ?????'` and replace that value with a valid int.
You might have accidentally provided a garbage default value, which isn't an int, while rerunning the makemigrations command.
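For illustration, the offending default inside `blog/migrations/0001_initial.py` might look like this hypothetical excerpt (the field and the pk value `1` are assumptions, not taken from the question):

```python
# hypothetical excerpt of blog/migrations/0001_initial.py
migrations.AddField(
    model_name='blogpost',
    name='post_author',
    # the default must be the integer pk of an existing UserModel row,
    # not a string such as '??? ??? ?????'
    field=models.ForeignKey(default=1, on_delete=models.PROTECT,
                            to='fostania_web_app.UserModel'),
),
```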
|
I had the same error.
My problem:
I had:
```
models.y
expiration_date = models.DateField(null=True, max_length=20)
```
And a validation in my:
```
forms.py
def clean_expiration_date(self):
data = self.cleaned_data['expiration_date']
if data < datetime.date.today():
return data
```
And it gave me the error mentioned here, but only on a certain page.
My solution:
I changed the **DateField** to **DateTimeField**,
and that fixed the problem in my case.
| 11,732
|
16,247,828
|
Our code has 3,000,000 ids stored in a set, in string format. I tried converting them to int using a list comprehension, but it costs 5 seconds. How can I reduce the cost of converting these 3,000,000 ids from string to int?
Here is my testing code:
```
import random
import time

a = set()
for i in xrange(3088767):
    a.add(str(random.randint(10005907, 100000000)))

start = time.time()
ld = [int(i) for i in a]
end = time.time()
print end-start
```
The result is:
```
$python -V
Python 2.6.5
$python ld.py
5.53777289391
```
|
2013/04/27
|
[
"https://Stackoverflow.com/questions/16247828",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/472880/"
] |
```
ld = map(int, a)
```
should be quite a bit faster, and it is also about your only option short of switching to another Python implementation
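A quick self-check of the `map` approach on a smaller sample (the set is generated the same way as in the question, just smaller; on Python 3 the result needs to be wrapped in `list()`):

```python
import random

# a smaller sample than the original ~3 million ids, just to show the pattern
a = {str(random.randint(10005907, 100000000)) for _ in range(1000)}

# map() pushes the conversion loop into C; on Python 2 it returns a list
# directly, on Python 3 wrap it in list()
ld = list(map(int, a))
```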
|
Try using [PyPy](http://pypy.org) or [Cython](http://cython.org).
These tools can make your code fly! (PyPy especially: on my PC, PyPy sped up the code 4 times.)
| 11,733
|
46,164,419
|
Getting a '`labels` out of bound' error while running the testing script. The error is thrown in the confusion\_matrix function, where the annotation values are compared with the number of classes. In my case the annotation value is an image (560x560) and number\_of\_classes = 2.
```
[check_ops.assert_less(labels, num_classes_int64, message='`labels` out of bound')], labels
```
The above condition is always going to fail, as the annotation data is larger than the number of classes.
First, there is a good chance that I am misunderstanding the code, but I cannot make any sense of it.
Second, if this is a valid check, then how can I modify my code or data to avoid this error?
```py
def confusion_matrix(labels, predictions, num_classes=None, dtype=dtypes.int32,
                     name=None, weights=None):
  with ops.name_scope(name, 'confusion_matrix',
                      (predictions, labels, num_classes, weights)) as name:
    labels, predictions = remove_squeezable_dimensions(
        ops.convert_to_tensor(labels, name='labels'),
        ops.convert_to_tensor(
            predictions, name='predictions'))
    predictions = math_ops.cast(predictions, dtypes.int64)
    labels = math_ops.cast(labels, dtypes.int64)

    # Sanity checks - underflow or overflow can cause memory corruption.
    labels = control_flow_ops.with_dependencies(
        [check_ops.assert_non_negative(
            labels, message='`labels` contains negative values')],
        labels)
    predictions = control_flow_ops.with_dependencies(
        [check_ops.assert_non_negative(
            predictions, message='`predictions` contains negative values')],
        predictions)

    print(num_classes)
    if num_classes is None:
      num_classes = math_ops.maximum(math_ops.reduce_max(predictions),
                                     math_ops.reduce_max(labels)) + 1
      #$
    else:
      num_classes_int64 = math_ops.cast(num_classes, dtypes.int64)
      ---->>labels = control_flow_ops.with_dependencies(
          [check_ops.assert_less(
              labels, num_classes_int64, message='`labels` out of bound')],
          labels)<<----
      predictions = control_flow_ops.with_dependencies(
          [check_ops.assert_less(
              predictions, num_classes_int64,
              message='`predictions` out of bound')],
          predictions)

    if weights is not None:
      predictions.get_shape().assert_is_compatible_with(weights.get_shape())
      weights = math_ops.cast(weights, dtype)

    shape = array_ops.stack([num_classes, num_classes])
    indices = array_ops.transpose(array_ops.stack([labels, predictions]))
    values = (array_ops.ones_like(predictions, dtype)
              if weights is None else weights)
    cm_sparse = sparse_tensor.SparseTensor(
        indices=indices, values=values, dense_shape=math_ops.to_int64(shape))
    zero_matrix = array_ops.zeros(math_ops.to_int32(shape), dtype)

    return sparse_ops.sparse_add(zero_matrix, cm_sparse)
```
```none
Traceback (most recent call last):
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1327, in _do_call
return fn(*args)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1306, in _run_fn
status, run_metadata)
File "C:\Program Files\Python35\lib\contextlib.py", line 66, in __exit__
next(self.gen)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [`labels` out of bound] [Condition x < y did not hold element-wise:x (mean_iou/confusion_matrix/control_dependency:0) = ] [0 0 0...] [y (mean_iou/ToInt64_2:0) = ] [21]
[[Node: mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert = Assert[T=[DT_STRING, DT_STRING, DT_INT64, DT_STRING, DT_INT64], summarize=3, _device="/job:localhost/replica:0/task:0/cpu:0"](mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/Switch, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/data_0, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/data_1, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/Switch_1, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/data_3, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/Switch_2)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/supriya.godge/PycharmProjects/tf-image-segmentation/tf_image_segmentation/recipes/pascal_voc/DeepLab/output/resnet_v1_101_8s_test_airplan.py", line 81, in <module>
image_np, annotation_np, pred_np, tmp = sess.run([image, annotation, pred, update_op])
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\client\session.py", line 895, in run
run_metadata_ptr)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1124, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1321, in _do_run
options, run_metadata)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: assertion failed: [`labels` out of bound] [Condition x < y did not hold element-wise:x (mean_iou/confusion_matrix/control_dependency:0) = ] [0 0 0...] [y (mean_iou/ToInt64_2:0) = ] [21]
[[Node: mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert = Assert[T=[DT_STRING, DT_STRING, DT_INT64, DT_STRING, DT_INT64], summarize=3, _device="/job:localhost/replica:0/task:0/cpu:0"](mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/Switch, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/data_0, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/data_1, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/Switch_1, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/data_3, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/Switch_2)]]
Caused by op 'mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert', defined at:
File "C:/Users/supriya.godge/PycharmProjects/tf-image-segmentation/tf_image_segmentation/recipes/pascal_voc/DeepLab/output/resnet_v1_101_8s_test_airplan.py", line 64, in <module>
weights=weights)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\contrib\metrics\python\ops\metric_ops.py", line 2245, in streaming_mean_iou
updates_collections=updates_collections, name=name)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\ops\metrics_impl.py", line 917, in mean_iou
num_classes, weights)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\ops\metrics_impl.py", line 285, in _streaming_confusion_matrix
labels, predictions, num_classes, weights=weights, dtype=cm_dtype)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\ops\confusion_matrix.py", line 178, in confusion_matrix
labels, num_classes_int64, message='`labels` out of bound')],
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\ops\check_ops.py", line 401, in assert_less
return control_flow_ops.Assert(condition, data, summarize=summarize)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\util\tf_should_use.py", line 175, in wrapped
return _add_should_use_warning(fn(*args, **kwargs))
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 131, in Assert
condition, no_op, true_assert, name="AssertGuard")
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\util\deprecation.py", line 296, in new_func
return func(*args, **kwargs)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 1828, in cond
orig_res_f, res_f = context_f.BuildCondBranch(false_fn)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 1694, in BuildCondBranch
original_result = fn()
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 129, in true_assert
condition, data, summarize, name="Assert")
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\ops\gen_logging_ops.py", line 35, in _assert
summarize=summarize, name=name)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
op_def=op_def)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 2630, in create_op
original_op=self._default_original_op, op_def=op_def)
File "C:\Program Files\Python35\lib\site-packages\tensorflow\python\framework\ops.py", line 1204, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): assertion failed: [`labels` out of bound] [Condition x < y did not hold element-wise:x (mean_iou/confusion_matrix/control_dependency:0) = ] [0 0 0...] [y (mean_iou/ToInt64_2:0) = ] [21]
[[Node: mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert = Assert[T=[DT_STRING, DT_STRING, DT_INT64, DT_STRING, DT_INT64], summarize=3, _device="/job:localhost/replica:0/task:0/cpu:0"](mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/Switch, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/data_0, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/data_1, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/Switch_1, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/data_3, mean_iou/confusion_matrix/assert_less/Assert/AssertGuard/Assert/Switch_2)]]
```
I am really lost here so any help or suggestion is appreciated!
|
2017/09/11
|
[
"https://Stackoverflow.com/questions/46164419",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8485184/"
] |
In the annotation file, the labels were 0, 1, 2 and 255, but the number of classes was given as 3. So whenever 255 was encountered in the annotation file, the above error was thrown. After I removed all 255 values, the code ran without any error.
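For anyone hitting the same thing, one way to do that removal in NumPy is to remap the out-of-range label before feeding the annotation in (the small array below is a made-up example, not data from the question):

```python
import numpy as np

# hypothetical annotation patch containing the out-of-range "ignore" label 255
annotation = np.array([[0, 1, 255],
                       [2, 255, 1]])

# remap 255 to a valid class id (here 0, the background) so that
# every label satisfies 0 <= label < num_classes
annotation = np.where(annotation == 255, 0, annotation)
```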
|
Under TensorFlow 1.15.2, tensorflow/models/research/deeplab basically works fine.
An error message like:
Invalid argument: assertion failed: [`labels` out of bound] [Condition x < y did not hold element-wise:] [x (mean\_iou/confusion\_matrix/control\_dependency:0) = ]
is likely due to not counting the background as a class; see e.g. deeplab/datasets/data\_generator.py:
```
# Number of semantic classes, including the
# background class (if exists). For example, there
# are 20 foreground classes + 1 background class in
# the PASCAL VOC 2012 dataset. Thus, we set
# num_classes=21.
```
| 11,734
|
45,318,255
|
**Code:**
```
import numpy as np
z = np.array([[1,3,5],[2,4,6]])
print(z[0:, :2])
```
**Answer:**
```
[[1, 3] [2, 4]]
```
I am a Python beginner; I was solving an interactive exercise when the above-mentioned question appeared.
I am not able to understand how z[0:, :2] works in this case. If possible, please help me understand this scenario.
|
2017/07/26
|
[
"https://Stackoverflow.com/questions/45318255",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4282170/"
] |
You can read about Numpy slicing and indexing here:
<https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html>
In this case, `0:` means "all the rows, starting from (and including) row 0 and going until the end" (you could also just use the equivalent `:`, which means "all the rows, starting from the beginning and going until the end").
`:2` means "all the columns, starting from the beginning and going until (but not including) column 2".
Together, `z[0:, :2]` means "the part of `z` that includes all the rows and the first two columns". The first dimension listed is rows, and the second is columns. If your array was 3D, you could include yet another dimension with another comma, and so on.
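A few more slices of the same array may make the row/column pattern clearer:

```python
import numpy as np

z = np.array([[1, 3, 5],
              [2, 4, 6]])

print(z[0:, :2])  # all rows, columns 0 and 1 -> [[1 3] [2 4]]
print(z[1, :])    # just row 1               -> [2 4 6]
print(z[:, 2])    # column 2 of every row    -> [5 6]
```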
|
First you ask to get all rows (`0:` is the same as `:`):
```
[[1,3,5],
[2,4,6]]
```
Then you ask for columns 0 and 1 (`:2` is the same as `0:2` which means from 0 until 2 exclusive):
```
[[1,3],
[2,4]]
```
| 11,740
|
68,434,126
|
In a Tkinter window of the Test.py file, I would like to display in a textbox what is printed to the Python console.
By clicking a button, you start a function in the Test.py file that calls the X.py and Y.py scripts (more precisely, their functions). The results of the scripts are printed correctly to the Python console: first the result of the X file is printed, and then immediately after, the result of the Y file.
I would like to see these X.py and Y.py results printed in a textbox. Of course, if possible, I would also like to hide (not open at all) the Python console. I have read a few questions here on the site, but have not been able to accomplish this. Above, for information purposes, I have explained the purpose of what I am creating, so it is useless to paste the entire working code of the various functions. Can you help me and show me the code to view the script results in the textbox, please?
```
#IMPORT OF FILE X AND Y, AND RELATED FUNCTIONS
from File.X import Example_Name_Function_1
from File.Y import Example_Name_Function_2
#TEXTOBOX
text = tk.Text(test,width=80,height=50, background="black", foreground="white")
text.pack()
text.place(x=450, y=20)
text.insert(INSERT, "aaaaaaaa\n")
text.insert(END, " bbbbbbbb \n")
#BUTTON
button = Button(test, text="Go", foreground='black', command= Go)
button.place(x=7, y=512)
```
**CODE UPDATE**
I have reported the initial code. Now the window opens immediately, and the scraping starts only after clicking the button. As a result, the scraping results are printed in the Python terminal console, not in the textbox. Since the code is long, I preferred to report it like this. I have included @Matiiss's code inside it, but it doesn't work.
```
#I open tkinter window for scraping
editmenu.add_command(label='Scraping', command=filename.draw_graph)
```
\_
```
#WINDOW SCRAPING
from tkinter import *
from tkinter import ttk
import tkinter as tk
import tkinter.font as tkFont
from PIL import ImageTk, Image
from File import Scraping_Nome_Campionati
from File import Scraping_Nome_Squadre_MIO
import subprocess
def draw_graph():
    test_scraping = tk.Toplevel()
    test_scraping.title("Scraping")
    test_scraping.geometry("1100x900")
    test_scraping.configure(bg='#282828')

    # I call up and open the two scripts for scraping (no tkinter)
    def do_scraping():
        msg1 = Scraping_Nome_Campionati.scraping_nome_campionati_e_tor()
        if msg1:
            message1.configure(text=msg1)
            message1.configure(foreground="red")
            vuoto_elenco_campionati.config(image=render7)
        else:
            vuoto_elenco_campionati.config(image=render8)
            message1.configure(foreground="green")
        msg2 = Scraping_Nome_Squadre_MIO.scraping_nome_squadre_e_tor()
        if msg2:
            message2.configure(text=msg2)
            message2.configure(foreground="red")
            vuoto_elenco_squadre.config(image=render7)
        else:
            vuoto_elenco_squadre.config(image=render8)
            message2.configure(foreground="green")

    # YOUR CODE
    def call_obj_from(obj, module):
        if module:
            proc = subprocess.Popen(['python3', '-c', f'from {module} import {obj}; {obj}()'],
                                    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            return proc.communicate()[0].decode()

    text.insert('end', call_obj_from('scraping_nome_campionati_e_tor', 'File.Scraping_Nome_Campionati'))
    text.insert('end', call_obj_from('scraping_nome_squadre_e_tor', 'file.Scraping_Nome_Squadre_MIO'))

    text = tk.Text(test_scraping, width=80, height=50, background="black", foreground="white")
    text.pack()
    text.place(x=450, y=20)
    text.insert(INSERT, "aaaaaa\n")
    text.insert(END, "bbbbbbbbb\n")

    button = Button(test_scraping, text="Avvia", bg='#e95420', foreground='white', command=do_scraping)
    button.place(x=116, y=512)

    test_scraping.mainloop()
```
**CODE UPLOAD 2**
```
# Text widget with file-like object feature
class TextOut(tk.Text):
    def __init__(self, master, **kw):
        super().__init__(master, **kw)

    # required output function for a file-like object
    def write(self, message):
        self.insert("insert", message)


def do_scraping():
    # temporarily redirect sys.stdout
    with redirect_stdout(text) as f:
        scraping_nome_campionati_e_tor()
        scraping_nome_squadre_e_tor()
        print("completed")  # this shows in the text box as well
    print("done")  # this will show in console instead of text box

    msg1 = Scraping_Nome_Campionati.scraping_nome_campionati_e_tor()
    if msg1:
        message1.configure(text=msg1)
        message1.configure(foreground="red")
        vuoto_elenco_campionati.config(image=render7)
    else:
        vuoto_elenco_campionati.config(image=render8)
        message1.configure(foreground="green")
    msg2 = Scraping_Nome_Squadre_MIO.scraping_nome_squadre_e_tor()
    if msg2:
        message2.configure(text=msg2)
        message2.configure(foreground="red")
        vuoto_elenco_squadre.config(image=render7)
    else:
        vuoto_elenco_squadre.config(image=render8)
        message2.configure(foreground="green")


text = TextOut(test_scraping, width=80, height=50, background="black", foreground="white")
text.pack()
text.place(x=450, y=20)

button = Button(test_scraping, text="Avvia", bg='#e95420', foreground='white', command=do_scraping)
button.place(x=116, y=512)
```
|
2021/07/19
|
[
"https://Stackoverflow.com/questions/68434126",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16472472/"
] |
For Python 3.4+, you can use `contextlib.redirect_stdout` (see [official document](https://docs.python.org/3/library/contextlib.html#contextlib.redirect_stdout)) to redirect `sys.stdout` to another file or file-like object temporarily.
```py
import tkinter as tk
from contextlib import redirect_stdout

from File.X import Example_Name_Function_1
from File.Y import Example_Name_Function_2


# Text widget with file-like object feature
class TextOut(tk.Text):
    def __init__(self, master, **kw):
        super().__init__(master, **kw)

    # required output function for a file-like object
    def write(self, message):
        self.insert("insert", message)


def Go():
    # temporarily redirect sys.stdout
    with redirect_stdout(text) as f:
        Example_Name_Function_1()
        Example_Name_Function_2()
        print("completed")  # this shows in the text box as well
    print("done")  # this will show in console instead of text box


test = tk.Tk()
# use TextOut instead of normal Text widget
text = TextOut(test, width=80, height=50, bg="black", fg="white")
text.pack()
button = tk.Button(test, text="Go", command=Go)
button.pack()
test.mainloop()
```
|
Ok, so after a lot of trial and error I finally found a way to accomplish what You want (to be fair though, You were the one that was supposed to go through this phase AND show Your attempts, which You failed to do; either way, I don't mind much, since I learned stuff too):
```py
from tkinter import Tk, Text
import subprocess


def call_obj_from(obj, module):
    if module:
        proc = subprocess.Popen(['python', '-c', f'from {module} import {obj}; {obj}()'],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        return proc.communicate()[0].decode()


root = Tk()

text = Text(root)
text.pack()

text.insert('end', call_obj_from('example_func_1', 'file.x'))
text.insert('end', call_obj_from('example_func_2', 'file.y'))

root.mainloop()
```
In simple terms, the two arguments are `obj` (the thing You want to execute that also prints something to the console) and `module` (the file where that object is located). So, to match the example above, there would have to be a package `file` in the same directory containing modules `x.py` and `y.py` that define functions named `example_func_1()` and `example_func_2()`.
| 11,745
|
48,908,156
|
I am using **Django-Cookiecutter** which uses **Django-Allauth** to handle all the authentication and there is one App installed called **user** which handles everything related to users like sign-up, sign-out etc.
I am sharing the models.py file for users
```
@python_2_unicode_compatible
class User(AbstractUser):

    # First Name and Last Name do not cover name patterns
    # around the globe.
    name = models.CharField(_('Name of User'), blank=True, max_length=255)

    def __str__(self):
        return self.username

    def get_absolute_url(self):
        return reverse('users:detail', kwargs={'username': self.username})
```
Now that I have added my new App say Course and my models.py is
```
class Course(models.Model):
    name = models.CharField(max_length=30)
    description = models.CharField(max_length=100)

    def __str__(self):
        return self.name

    def get_absolute_url(self):
        return reverse("details:assess", kwargs={"id": self.id})
```
I have also defined my views.py as
```
@login_required
def course_list(request):
    queryset = Course.objects.all()
    print("query set value is", queryset)
    context = {
        "object_list": queryset,
    }
    return render(request, "course_details/Course_details.html", context)
```
Is there any way I can reference(Foreign key) **User App** to my **Course App** so that each user has their own Course assigned to it.
one of the possible ways is to filter objects on the basis of the user and pass it to the template
I can't figure it out that how to map users and course in order to get the list of course which is assigned to the particular user.
|
2018/02/21
|
[
"https://Stackoverflow.com/questions/48908156",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5881652/"
] |
Unless you are offering a course to just one user, it does not make sense to have a one-to-many relationship between User and Course models. So, you need a many-to-many relationship. It is also wise to extend the user model by creating a UserProfile model and relate UserProfile to your course. You can look [here](https://stackoverflow.com/questions/44109/extending-the-user-model-with-custom-fields-in-django) for how to extend the user model.
So, what you should really do is this:
```
class Course(models.Model):
    name = models.CharField(max_length=30)
    description = models.CharField(max_length=100)
    # Make sure you first import UserProfile model before referring to it
    students = models.ManyToManyField(UserProfile, related_name='courses')

    def __str__(self):
        return self.name

    def get_absolute_url(self):
        return reverse("details:assess", kwargs={"id": self.id})
```
Also, note that adding and querying many-to-many relationships are little different. Please see [the docs](https://docs.djangoproject.com/en/2.0/topics/db/examples/many_to_many/) for more detail.
|
You can directly add a foreign key as:
```
user = models.ForeignKey(User, on_delete=models.CASCADE, related_name="zyz")
```
| 11,746
|
72,184,482
|
I have a project trying to image-scrape from a website. I use a CSV file with all the URLs. Some URLs I don't have permission to open (or they don't exist), and I get an HTTP error 403 in Python for those. I just want to try the next URL in the CSV file and ignore the error.
```python
import urllib.request
import csv

with open('urls_01.csv') as images:
    images = csv.reader(images)
    img_count = 1
    for image in images:
        urllib.request.urlretrieve(image[0],
            'images/image_{0}.jpg'.format(img_count))
        img_count += 1
```
This is the error
```python
Traceback (most recent call last):
File "c:\Users\Heigre\Documents\Phyton\img_test.py", line 8, in <module>
urllib.request.urlretrieve(image[0],
File "C:\Program Files\Python310\lib\urllib\request.py", line 241, in urlretrieve
with contextlib.closing(urlopen(url, data)) as fp:
File "C:\Program Files\Python310\lib\urllib\request.py", line 216, in urlopen
return opener.open(url, data, timeout)
File "C:\Program Files\Python310\lib\urllib\request.py", line 525, in open
response = meth(req, response)
File "C:\Program Files\Python310\lib\urllib\request.py", line 634, in http_response
response = self.parent.error(
File "C:\Program Files\Python310\lib\urllib\request.py", line 563, in error
return self._call_chain(*args)
File "C:\Program Files\Python310\lib\urllib\request.py", line 496, in _call_chain
result = func(*args)
File "C:\Program Files\Python310\lib\urllib\request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
```
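The retrieve call can be wrapped in try/except so a 403 (or any `HTTPError`) simply skips to the next URL. A minimal offline sketch of that pattern (`fetch` below is a made-up stand-in for `urllib.request.urlretrieve`, so the example runs without network access or a CSV file):

```python
from urllib.error import HTTPError

def fetch(url):
    # stand-in for urllib.request.urlretrieve; pretends one URL is forbidden
    if 'forbidden' in url:
        raise HTTPError(url, 403, 'Forbidden', None, None)
    return 'ok'

urls = ['http://example.com/a.jpg',
        'http://example.com/forbidden.jpg',
        'http://example.com/b.jpg']

img_count = 1
for url in urls:
    try:
        fetch(url)
    except HTTPError as err:
        print('skipping', url, '-', err.code)
        continue  # move on to the next URL
    img_count += 1

print(img_count - 1)  # number of images actually saved
```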
|
2022/05/10
|
[
"https://Stackoverflow.com/questions/72184482",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19084463/"
] |
You may have an issue with the SNS access policy. There is a video of troubleshooting this issue at <https://youtu.be/RjSW75YsBMM>.
|
Make sure that your SNS topic has the proper Access Policy set. In my case, the default Access Policy included an `AWS:SourceOwner` condition which I needed to remove in order to allow the S3 event configuration to perform the `SNS:Publish` action against the topic.
| 11,747
|
26,207,097
|
I am wondering how one would create a GUI application, and interact with it from the console that started it.
As an example, I would like to create a GUI in PyQt and work with it from the console. This could be for testing settings without restarting the app, but in larger projects also for calling functions etc.
Here is a simple example using PyQt:
```
import sys
from PyQt4 import QtGui
def main():
app = QtGui.QApplication(sys.argv)
w = QtGui.QWidget()
w.show()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
```
when this is run with `python -i example.py` the console is blocked as long as the main-loop is executed.
How can I call `w.resize(100,100)` while the GUI is running?
|
2014/10/05
|
[
"https://Stackoverflow.com/questions/26207097",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1356998/"
] |
Oops, posted the wrong answer before.
There is a post on Stack Overflow about that:
[Execute Python code from within PyQt event loop](https://stackoverflow.com/questions/4893748/execute-python-code-from-within-pyqt-event-loop)
|
The easiest way is to use IPython:
```
ipython --gui=qt4
```
See `ipython --help` or the [online documentation](http://ipython.org/ipython-doc/dev/config/options/terminal.html) for more options (e.g. gtk, tk, etc).
| 11,748
|
2,157,208
|
In python, I have a global variable defined that gets read/incremented by different threads. Because of the GIL, will this ever cause problems without using any kind of locking mechanism?
|
2010/01/28
|
[
"https://Stackoverflow.com/questions/2157208",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/261273/"
] |
The GIL only requires that the interpreter completely executes a single bytecode instruction before another thread can take over. However, there is no reason to assume that an increment operation is a single instruction. For example:
```
>>> import dis
>>> dis.dis(compile("x=753","","exec"))
1 0 LOAD_CONST 0 (753)
3 STORE_NAME 0 (x)
6 LOAD_CONST 1 (None)
9 RETURN_VALUE
>>> dis.dis(compile("x+=1","","exec"))
1 0 LOAD_NAME 0 (x)
3 LOAD_CONST 0 (1)
6 INPLACE_ADD
7 STORE_NAME 0 (x)
10 LOAD_CONST 1 (None)
13 RETURN_VALUE
```
As you can see, even these simple operations are more than a single bytecode instruction. Therefore, whenever sharing data between threads, you *must* use a separate locking mechanism (e.g. `threading.Lock`) in order to maintain data consistency.
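A minimal sketch of the `threading.Lock` usage this answer recommends (the counter name and thread/iteration counts are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        # Hold the lock around the whole read-modify-write sequence so
        # another thread cannot interleave between LOAD_NAME and STORE_NAME.
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 -- consistent on every run thanks to the lock
```

Without the lock, the final value can come up short whenever two threads both LOAD the same value of `counter` before either STOREs its increment.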
|
Yes, multithreading without locking almost always causes problems, with or without a GIL.
| 11,750
|
56,120,774
|
Python is slower in cases where it executes more operations.
The following is a very simple comparison of two separate nested loops (for finding a Pythagorean triple `(a,b,c)` which sum to 1000):
```
#Takes 9 Seconds
for a in range(1, 1000):
for b in range(a, 1000):
ab = 1000 - (a + b)
for c in range(ab, 1000):
if(((a + b + c) == 1000) and ((a**2) + (b**2) == (c**2))):
print(a,b,c)
exit()
#Solution B
#Takes 7 Seconds
for a in range(1, 1000):
for b in range(a, 1000):
for c in range(b, 1000):
if(((a + b + c) == 1000) and ((a**2) + (b**2) == (c**2))):
print(a,b,c)
exit()
```
I expected solution A to shave a second or two off of solution B, but instead it increased the time to complete by two seconds.
```
instead of iterating
1, 1, 1
1, 1, 2
...
1, 1, 999
1, 2, 2
It would iterate
1, 1, 998
1, 1, 999
1, 2, 997
1, 2, 998
1, 2, 999
1, 3, 996
```
It seems to me that solution A should vastly improve speed by cutting out thousands to millions of operations, but it in fact does not.
I am aware that there is a simple way to vastly improve this algorithm but I am trying to understand why python would run slower in the case that would seem to be faster.
|
2019/05/13
|
[
"https://Stackoverflow.com/questions/56120774",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11495271/"
] |
You can just count the total number of iterations in each solution and see that A takes more iterations to find the result:
```
#Takes 9 Seconds
def A():
count = 0
for a in range(1, 1000):
for b in range(a, 1000):
ab = 1000 - (a + b)
for c in range(ab, 1000):
count += 1
if(((a + b + c) == 1000) and ((a**2) + (b**2) == (c**2))):
print(a,b,c)
print('A:', count)
return
#Solution B
#Takes 7 Seconds
def B():
count = 0
for a in range(1, 1000):
for b in range(a, 1000):
for c in range(b, 1000):
count += 1
if(((a + b + c) == 1000) and ((a**2) + (b**2) == (c**2))):
print(a,b,c)
print('B:', count)
return
A()
B()
```
Output:
```
A: 115425626
B: 81137726
```
That's why A is slower. Also `ab = 1000 - (a + b)` takes time.
|
Your confusion rests on two false premises:
* The methods find all triples. They do not; each one finds a single triple and then aborts.
* The upper method (aka "solution A") does fewer comparisons.
I added some basic instrumentation to test your premises:
```
import time
#Takes 9 Seconds
count = 0
start = time.time()
for a in range(1, 1000):
for b in range(a, 1000):
ab = 1000 - (a + b)
for c in range(ab, 1000):
count += 1
if(((a + b + c) == 1000) and ((a**2) + (b**2) == (c**2))):
print(a,b,c)
print(count, time.time() - start)
break
#Solution B
#Takes 7 Seconds
count = 0
start = time.time()
for a in range(1, 1000):
for b in range(a, 1000):
for c in range(b, 1000):
count += 1
if(((a + b + c) == 1000) and ((a**2) + (b**2) == (c**2))):
print(a,b,c)
print(count, time.time() - start)
break
```
Output:
```
200 375 425
115425626 37.674554109573364
200 375 425
81137726 25.986871480941772
```
`Solution B` considers fewer triples. Do the math ... which is the lower value, `b` or `1000-a-b` for this exercise?
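For reference, the "simple way" the question alludes to can eliminate the two inner loops entirely. This is a hedged sketch, not either answerer's code: substituting c = 1000 - a - b into a² + b² = c² and solving for b gives b = 1000·(1000 - 2a) / (2·(1000 - a)), so only `a` needs to be iterated:

```python
def triple(total=1000):
    # From a + b + c = total and a^2 + b^2 = c^2, substituting
    # c = total - a - b gives b = total*(total - 2a) / (2*(total - a)).
    for a in range(1, total // 3):
        num = total * (total - 2 * a)
        den = 2 * (total - a)
        if num % den == 0:
            b = num // den
            c = total - a - b
            if a < b < c:
                return a, b, c

print(triple())  # (200, 375, 425)
```

This does roughly 330 iterations instead of tens of millions, and finds the same triple.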
| 11,751