| qid | question | date | metadata | response_j | response_k | __index_level_0__ |
|---|---|---|---|---|---|---|
47,064,278
|
I've been working on this script today and have made some really good progress with looping through the data and importing it to an external database. I'm trying to troubleshoot a field that I'm having an issue with and it doesn't make much sense. Whenever I attempt to run it, I get the following error `KeyError: 'manufacturer'`. If I comment out the line `product_details['manufacturer'] = item['manufacturer']`, the script runs as it should.
1. I've checked the case sensitivity.
2. I've checked my spelling.
3. I've confirmed that the JSON document I'm pulling from has that field
filled out.
4. I've confirmed the data type is supported (it's just a string).
Not sure what else to check or where to go from here (new to Python).
I'm using the following **[test data](https://raw.githubusercontent.com/algolia/datasets/master/ecommerce/bestbuy_seo.json)**
```
import json
input_file = open ('data/bestbuy_seo.json')
json_array = json.load(input_file)
product_list = []
for item in json_array:
product_details = {"name": None, "shortDescription": None, "bestSellingRank": None,
"thumbnailImage": None, "salePrice": None, "manufacturer": None, "url": None,
"type": None, "image": None, "customerReviewCount": None, "shipping": None,
"salePrice_range": None, "objectID": None, "categories": [None] }
product_details['name'] = item['name']
product_details['shortDescription'] = item['shortDescription']
product_details['bestSellingRank'] = item['bestSellingRank']
product_details['thumbnailImage'] = item['thumbnailImage']
product_details['salePrice'] = item['salePrice']
product_details['manufacturer'] = item['manufacturer']
product_details['url'] = item['url']
product_details['type'] = item['type']
product_details['image'] = item['image']
product_details['customerReviewCount'] = item['customerReviewCount']
product_details['shipping'] = item['shipping']
product_details['salePrice_range'] = item['salePrice_range']
product_details['objectID'] = item['objectID']
product_details['categories'] = item['categories']
product_list.append(product_details)
# Let's dump it to the screen to see if it works
print json.dumps(product_list, indent=4)
```
|
2017/11/01
|
[
"https://Stackoverflow.com/questions/47064278",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1257896/"
] |
My guess is that one of the items in your data does not have a 'manufacturer' key set.
Replace
`item['manufacturer']`
with
`item.get('manufacturer', None)`
or replace `None` with a default manufacturer...
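For example, a one-line sketch (the `'Unknown'` default is just a hypothetical placeholder):
```
product_details['manufacturer'] = item.get('manufacturer', 'Unknown')
```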
|
Here are two ways of getting around a dictionary not having a key. Both work, but the first one is probably easier to use and will work as a drop-in for your current code.
This first way uses Python's `dictionary.get()` method. [Here is a page with more examples of how it works](https://www.tutorialspoint.com/python/dictionary_get.htm). It was inspired by [this](https://stackoverflow.com/a/47064338/2990052) answer by `Ian A. Mason` to the current question; I changed your code based on his answer.
```
import json
input_file = open('data/bestbuy_seo.json')
json_array = json.load(input_file)
product_list = []
for item in json_array:
product_details = {
'name': item.get('name', None),
'shortDescription': item.get('shortDescription', None),
'bestSellingRank': item.get('bestSellingRank', None),
'thumbnailImage': item.get('thumbnailImage', None),
'salePrice': item.get('salePrice', None),
'manufacturer': item.get('manufacturer', None),
'url': item.get('url', None),
'type': item.get('type', None),
'image': item.get('image', None),
'customerReviewCount': item.get('customerReviewCount', None),
'shipping': item.get('shipping', None),
'salePrice_range': item.get('salePrice_range', None),
'objectID': item.get('objectID', None),
'categories': item.get('categories', None)
}
product_list.append(product_details)
# Let's dump it to the screen to see if it works
print json.dumps(product_list, indent=4)
```
This is a second way of doing it, using the 'ask for forgiveness, not permission' concept in Python. It is easy to just let the one object that is missing a key fail and keep going. A single try/except is a lot faster than a bunch of ifs.
[Here is a post about this concept.](https://www.tutorialspoint.com/python/dictionary_get.htm)
```
import json
from copy import deepcopy
input_file = open('data/bestbuy_seo.json')
json_array = json.load(input_file)
product_list = []
product_details_master = {"name": None, "shortDescription": None, "bestSellingRank": None,
"thumbnailImage": None, "salePrice": None, "manufacturer": None, "url": None,
"type": None, "image": None, "customerReviewCount": None, "shipping": None,
"salePrice_range": None, "objectID": None, "categories": [None]}
for item in json_array:
product_details_temp = deepcopy(product_details_master)
try:
product_details_temp['name'] = item['name']
product_details_temp['shortDescription'] = item['shortDescription']
product_details_temp['bestSellingRank'] = item['bestSellingRank']
product_details_temp['thumbnailImage'] = item['thumbnailImage']
product_details_temp['salePrice'] = item['salePrice']
product_details_temp['manufacturer'] = item['manufacturer']
product_details_temp['url'] = item['url']
product_details_temp['type'] = item['type']
product_details_temp['image'] = item['image']
product_details_temp['customerReviewCount'] = item['customerReviewCount']
product_details_temp['shipping'] = item['shipping']
product_details_temp['salePrice_range'] = item['salePrice_range']
product_details_temp['objectID'] = item['objectID']
product_details_temp['categories'] = item['categories']
product_list.append(product_details_temp)
except KeyError:
# Add error handling here! Right now if a product does not have all the keys NONE of the current object
# will be added to the product_list!
print 'There was a missing key in the json'
# Let's dump it to the screen to see if it works
print json.dumps(product_list, indent=4)
```
| 5,369
|
65,364,425
|
I'm trying to pass a primary key as a URL argument from the CreatePost to the UploadImage view, but I'm constantly getting an error even though I can see the primary key in the URL. I'm new to Django, so please help me :)
**views.py**
```
class CreatePost(CreateView):
model=shopModels.UserPost
template_name='shop/create_post.html'
fields='__all__'
def get_form(self, form_class=None):
form = super().get_form(form_class)
form.fields['category'].widget.attrs['class'] = 'form-control'
form.fields['category'].widget.attrs['oninvalid']="this.setCustomValidity('Ovo polje je obavezno!')"
return form
def dispatch(self, request, *args, **kwargs):
if not request.user.is_authenticated:
return redirect('login')
return super(CreatePost, self).dispatch(request, *args, **kwargs)
def get_success_url(self):
return reverse('image_upload',kwargs={'user_post':self.object.id})
class UploadImage(CreateView):
model=shopModels.Image
template_name='shop/images_upload.html'
fields='__all__'
def dispatch(self, request, *args, **kwargs):
if not request.user.is_authenticated:
return reverse('login')
return super(UploadImage, self).dispatch(request, *args, **kwargs)
```
**urls.py**
...
```
path('create_post/',views.CreatePost.as_view(),name="create_post"),
path('image_upload/<int:user_post>',views.UploadImage.as_view(),name="image_upload"),
```
...
**models.py**
```
class UserPost(models.Model):
id = models.AutoField(primary_key=True)
user=models.ForeignKey(Account,on_delete=models.CASCADE)
title=models.CharField(max_length=255)
text=models.TextField(null=True)
category=models.ForeignKey(Category,null=True,on_delete=models.SET_NULL)
is_used=models.BooleanField(default=False)
price=models.IntegerField(default=0)
created_at = models.DateField(auto_now_add=True)
updated_at = models.DateField(auto_now=True)
is_active = models.BooleanField(default=True)
def __str__(self):
return self.title + ' | ' + str(self.user)
def get_absolute_url(self,*args,**kwargs):
return reverse('image_upload', kwargs={'user_post':self.id})
class Image(models.Model):
id = models.AutoField(primary_key=True)
user_post=models.ForeignKey(UserPost,default=None, on_delete=models.CASCADE)
image=models.ImageField(null=True,blank=True,upload_to='images/')
def __str__(self):
return self.user_post.title
def get_absolute_url(self):
return reverse('index')
```
**EDIT!!!**
```
NoReverseMatch at /image_upload/23
```
Reverse for 'image_upload' with no arguments not found. 1 pattern(s) tried: ['image_upload/(?P<user_post>[0-9]+)$']
Request Method: GET
Request URL: <http://127.0.0.1:8000/image_upload/23>
Django Version: 3.1.4
Exception Type: NoReverseMatch
Exception Value:
Reverse for 'image_upload' with no arguments not found. 1 pattern(s) tried: ['image_upload/(?P<user_post>[0-9]+)$']
Exception Location: C:\Users\marij\AppData\Local\Programs\Python\Python38-32\lib\site-packages\django\urls\resolvers.py, line 685, in _reverse_with_prefix
Python Executable: C:\Users\marij\AppData\Local\Programs\Python\Python38-32\python.exe
Python Version: 3.8.3
|
2020/12/18
|
[
"https://Stackoverflow.com/questions/65364425",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14852796/"
] |
I think you are not including the `user_post` key in the HTML template; you should include it Jinja style:
`<a class="btn btn-success" href="{% url 'image_upload' user_post=user_post_key %}">`
And if you want to do some operations based on that `user_post`, you should override the `form_valid(self, form)` method and access the `int:user_post` by using `self.kwargs['user_post']`, as sketched below.
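A minimal sketch of such an override (assuming the URL kwarg should populate the `Image.user_post` foreign key; the restricted `fields` list is my assumption, not the question's code):
```
class UploadImage(CreateView):
    model = shopModels.Image
    template_name = 'shop/images_upload.html'
    fields = ['image']  # assumption: let the URL supply user_post instead of the form

    def form_valid(self, form):
        # attach the UserPost referenced by the URL argument before saving
        form.instance.user_post_id = self.kwargs['user_post']
        return super().form_valid(form)
```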
|
Your path explicitly expects an integer for `user_post`:
```
path('image_upload/<int:user_post>',views.UploadImage.as_view(),name="image_upload"),
```
If you call `reverse(...)` and hand over 123, you need to make sure that 123 is of type integer and not e.g. a string.
| 5,372
|
37,103,682
|
I am only starting out programming and am currently making a text game. I know there is no goto command in Python, and after doing some research I understood that I have to use loops to replace that command, but it just isn't doing what I was hoping it would do. Here's my code:
```
print('Welcome to my bad game!')
print('Please, press Enter to continue.')
variable = input()
if variable == '':
    print('You are in a dark forest. You can only choose one item. Press 1 for a flashlight and press 2 for an axe.')
while True:
    item = input()
    if item == '2':
        print('Bad choice, you lost the game.')
        quit()
    if item == '1':
        print('Good choice, now you can actually see something.')
```
So my problem with this is that if the player chooses the 'wrong' item the program just kills itself but I would want it to jump back to the beginning. I actually don't know if this is even possible but better ask than just wonder.
|
2016/05/08
|
[
"https://Stackoverflow.com/questions/37103682",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6307467/"
] |
If you mean the beginning of the loop, just leave out the call to `quit`. If you mean the beginning of the *program*, then you'll need a loop around that as well; see the sketch below.
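A minimal sketch of that outer loop (reusing the prompts from the question):
```
while True:  # restart the game from the top on a wrong choice
    print('You are in a dark forest. You can only choose one item. Press 1 for a flashlight and press 2 for an axe.')
    item = input()
    if item == '2':
        print('Bad choice, you lost the game.')
        continue  # jump back to the beginning instead of quitting
    if item == '1':
        print('Good choice, now you can actually see something.')
        break
```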
|
Instead of `quit` you could use `continue` to loop back to your `while`. But it's not clear from this example what you want the `while` to do.
| 5,373
|
4,619,580
|
With Python, I can use the [logging](http://docs.python.org/library/logging.html) library.
What do you use for logging in C++?
|
2011/01/06
|
[
"https://Stackoverflow.com/questions/4619580",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/260127/"
] |
I personally like: <http://code.google.com/p/google-glog/>
You have many options though. This one is pretty similar to what you are used to.
|
Maybe you will want to take a look at <https://github.com/gabime/spdlog>; it uses a Python-style syntax to compose log messages, and is pretty fast and safe.
| 5,374
|
45,386,035
|
I run several Python subprocesses to migrate data to S3. I noticed that my Python subprocesses often drop to 0% CPU usage, and *this condition lasts more than one minute*. This significantly decreases the performance of the migration process.
Here is a screenshot of the subprocess:
[subprocess CPU usage screenshot](https://i.stack.imgur.com/aRpPe.png)
The subprocess does these things:
1. Query all tables from a database.
2. Spawn sub processes for each table.
```
for table in tables:
print "Spawn process to process {0} table".format(table)
process = multiprocessing.Process(name="Process " + table,
target=target_def,
args=(args, table))
process.daemon = True
process.start()
processes.append(process)
for process in processes:
process.join()
```
3. Query data from the database using `LIMIT` and `OFFSET`. I used the PyMySQL library to query the data (see the sketch after this list).
4. Transform the returned data into another structure. `construct_structure_def()` is a function that transforms a row into another format.
```
buffer_string = []
for i, row_file in enumerate(row_files):
if i == num_of_rows:
buffer_string.append( json.dumps(construct_structure_def(row_file)) )
else:
buffer_string.append( json.dumps(construct_structure_def(row_file)) + "\n" )
content = ''.join(buffer_string)
```
5. Write the transformed data into a file and compress it using gzip.
```
with gzip.open(file_path, 'wb') as outfile:
outfile.write(content)
return file_name
```
6. Upload the file to S3.
7. Repeat steps 3-6 until there are no more rows to be fetched.
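For reference, a minimal PyMySQL sketch of step 3 (the connection handling and table name are placeholders, not the original code):
```
import pymysql

def fetch_batch(connection, table, limit, offset):
    # Fetch one batch of rows using LIMIT/OFFSET pagination (step 3).
    # `table` comes from our own table list, not from user input.
    with connection.cursor() as cursor:
        cursor.execute(
            "SELECT * FROM {0} LIMIT %s OFFSET %s".format(table),
            (limit, offset))
        return cursor.fetchall()
```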
To speed things up, I create subprocesses for each table using the built-in `multiprocessing.Process` class.
I ran my script in a virtual machine. Following are the specs:
* processor: Intel(R) Xeon(R) CPU E5-2690 @ 2.90 GHz (2 processors)
* Virtual processors: 4
* Installed RAM: 32 GB
* OS: Windows Enterprise Edition.
I saw a post [here](https://stackoverflow.com/questions/41942960/python-randomly-drops-to-0-cpu-usage-causing-the-code-to-hang-up-when-handl?noredirect=1&lq=1) that said one of the main possibilities is a memory I/O limitation. So I tried running just one subprocess to test that theory, but to no avail.
Any idea why this is happening? Let me know if you guys need more information.
Thank you in advance!
|
2017/07/29
|
[
"https://Stackoverflow.com/questions/45386035",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5610684/"
] |
Turns out the culprit was the query I ran. The query took a long time to return the result. This made the Python script idle, hence the zero percent CPU usage.
|
Your VM is a Windows machine; I'm more of a Linux person, so I'd love it if someone would back me up here.
I think the `daemon` is the problem here.
I've read about [daemon processes](https://en.wikipedia.org/wiki/Daemon_(computing)) and especially about [TSR](https://en.wikipedia.org/wiki/Terminate_and_stay_resident_program).
The first line in TSR says:
>
> Regarding computers, a terminate and stay resident program (commonly referred to by the initialism TSR) is a computer program that uses a system call in DOS operating systems to return control of the computer to the operating system, as though the program has quit, but stays resident in computer memory so it can be reactivated by a hardware or software interrupt.
>
>
>
As I understand it, making the process a `daemon` (or `TSR` in your case) makes it dormant until a syscall wakes it up, which I don't think is the case here (correct me if I'm wrong).
| 5,378
|
54,509,350
|
I want to write an AWS Lambda function to fetch data from an on-premises Oracle DB and migrate it to an Aurora DB.
I tried :
```
var oracledb = require('oracledb-for-lambda');
var os = require('os');
var fs = require('fs');
'use strict';
var str_host = os.hostname() + ' localhost\n';
fs.appendFile(process.env.HOSTALIASES,str_host , function(err){
if(err) throw err;
});
```
But I am again stuck as it does not seem to work.
Can someone show me how? I have a table with the same columns present in the Oracle DB as well as the Aurora DB, and I want to map from Oracle to Aurora. How do I write this in Java or Python using AWS Lambda?
|
2019/02/04
|
[
"https://Stackoverflow.com/questions/54509350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9951102/"
] |
***KeyNotFoundException ... Why?***
The distilled, core reason is that the `Equals` and `GetHashCode` methods are inconsistent. This situation is fixed by doing 2 things:
* Override `Equals` in `TestClass`
* Never modify a dictionary during iteration
+ Here, it is the key object's value that is being modified
---
**`GetHashCode` - `Equals` disconnect**
* [... GetHashCode is not expected to change over the lifecycle of the object.](https://stackoverflow.com/questions/54509316/keynotfoundexception-in-c-sharp-dictionary-after-changing-property-value-based-o#comment95822315_54509376) AND [do not use values that are subject to change while calculating `GetHashCode()`](https://stackoverflow.com/questions/54509316/keynotfoundexception-in-c-sharp-dictionary-after-changing-property-value-based-o/54509376?noredirect=1#comment95822349_54509376)
+ Not exactly. The problem is a changing hash code while equality does not change.
+ MSDN says: *The GetHashCode() method for an object must consistently return the same hash code as long as there is no modification to the object state that determines the return value of the object's System.Object.Equals method*
---
**`TestClass.Equals`**
I say `TestClass` because that is the dictionary key. But this applies to `ValueClass` too.
A class' `Equals` and `GetHashCode` ***must*** be consistent. When overriding either but not both, they are not consistent. We all know "if you override `Equals`, also override `GetHashCode`". We never override `GetHashCode` yet seem to get away with it. Hear me now and believe me: the first time you implement `IEqualityComparer` and `IEquatable`, always override both.
---
**Iterating Dictionary**
Do not add or delete an element, modify a key, nor modify a value (sometimes) *during iteration*.
* [Editing Dictionary Values ... in a loop](https://stackoverflow.com/q/1070766/463206) - It hits on some technical reasons a dictionary entry cannot safely be modified during iteration
* [MSDN Dictionary doc](https://learn.microsoft.com/en-us/dotnet/api/system.collections.generic.dictionary-2?view=netframework-4.7.2)
* [Can change value during iteration](https://social.msdn.microsoft.com/Forums/vstudio/en-US/c98e9d32-7f06-4b3a-8917-8b58eee31e58/change-dictionary-values-while-iterating?forum=netfxbcl) - The dictionary "value", not the key's value
* [Well, not in every case](https://www.c-sharpcorner.com/blogs/updating-dictionary-elements-while-iterating1)
---
**GetHashCode**
* [MSDN GetHashCode](https://learn.microsoft.com/en-us/dotnet/api/system.object.gethashcode?view=netframework-4.7.2)
+ *Do not use the hash code as the key to retrieve an object from a keyed collection.*
+ *Do not test for equality of hash codes to determine whether two objects are equal*
The OP's code may not be doing this literally, but it certainly is virtually, because there is no `Equals` override.
[Here is a neat hash algorithm from C# demigod Eric Lippert](https://stackoverflow.com/a/263416/463206)
|
That is the expected behavior with your code. So what is wrong with your code?
Look at your key class. You are overriding `GetHashCode()`, and on top of that you are using a mutable value to calculate the hash code (very, very bad :( ).
```
public class TestClass
{
public int MyProperty { get; set; }
public override int GetHashCode() {
return MyProperty;
}
}
```
The lookup in the implementation of the Dictionary uses the `GetHashCode()` of the inserted object. At the time of insertion your `GetHashCode()` returned some value, and the object got inserted into some `bucket`. However, after you changed `MyProperty`, `GetHashCode()` no longer returns the same value, therefore the object cannot be looked up any more.
This is where the lookup occurs
```
Console.WriteLine($"{dict[k].MyProperty} - {k.MyProperty}");
```
`dict[k]` already had its `MyProperty` changed, therefore `GetHashCode()` does not return the value it did when the object was first added to the dictionary.
And another really important thing to keep in mind: when you override `GetHashCode()`, override `Equals()` as well. The inverse is true too!
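For what it's worth, the same pitfall can be reproduced in Python, where dict keys rely on `__hash__`/`__eq__` in the same way (a standalone sketch, not the OP's code):
```
class TestClass:
    def __init__(self, my_property):
        self.my_property = my_property

    def __hash__(self):
        # hash derived from mutable state: the same mistake as the C# code
        return self.my_property

    def __eq__(self, other):
        return self.my_property == other.my_property

k = TestClass(1)
d = {k: "value"}
k.my_property = 2  # mutate the key after insertion
print(d[k])        # raises KeyError: k now hashes into a different bucket
```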
| 5,379
|
33,630,400
|
How do I extract two `i32`s from a single line of input in Rust? In Python I can read two `int`s like:
```python
a, b = map(int, input().split()) # "2 3" => a=2 and b=3
```
As of Rust 1.3.0, I can run the following to read one i32:
```rust
let mut s = String::new();
std::io::stdin().read_line(&mut s).ok().expect("read_line panic");
let n: i32 = s.trim().parse().ok().expect("parse panic");
```
|
2015/11/10
|
[
"https://Stackoverflow.com/questions/33630400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2370748/"
] |
`&str` has a [`split_whitespace()`](http://doc.rust-lang.org/std/primitive.str.html#method.split_whitespace) method which returns an iterator yielding parts of the target string separated by an arbitrary amount of whitespace, much like `split()` in Python. You can use the [`map()`](http://doc.rust-lang.org/std/iter/trait.Iterator.html#method.map) iterator adapter to convert each part to a number:
```
let mut s = String::new();
std::io::stdin().read_line(&mut s).expect("read_line error");
let mut parts = s.split_whitespace().map(|s| s.parse::<i32>());
match (parts.next(), parts.next()) {
(Some(Ok(a)), Some(Ok(b))) => {
// a and b are i32
}
// handle other problems: not enough numbers, numbers are invalid, etc
_ => {} // ignore invalid input
}
```
This looks a lot more verbose than the Python version, but that's mostly because in Python all errors will be thrown as exceptions, while in Rust you have to handle them explicitly. If you don't care about them, you can throw `match` away:
```
let a = parts.next().unwrap().unwrap();
let b = parts.next().unwrap().unwrap();
```
Yet another approach would be to use the handy `collect()` implementation for `Result<Vec<_>, _>`:
```
let items: Result<Vec<i32>, _> = parts.collect();
```
This way, if any of the numbers in the input string fail to parse, `items` will contain the corresponding `Err` value, but if they all parse successfully, then `items` will contain the `Ok` variant with the vector of parsed numbers. With this approach you also do not need to specify `::<i32>()` in the `parse()` invocation, as it will be inferred automatically (no need for `mut` either):
```
let parts = s.split_whitespace().map(|s| s.parse());
```
Also there is no one-liner function to read a string from stdin in the standard library. It is somewhat unfortunate but rarely a problem in practice. There are libraries which provide such functionality; see other answers to find some examples.
|
The Rust code is always going to be more verbose than the Python one. But since version 1.26, Rust also supports slice patterns as shown below. The code looks more readable in my opinion.
```
fn main() {
let a = "2 3";
if let [Ok(aa), Ok(aaa)] = &a.split(" ")
.map(|a| a.parse::<i32>())
.collect::<Vec<_>>()[..] {
println!("{:?} {:?}", aa, aaa);
}
}
```
| 5,381
|
14,259,660
|
I am calling a python script from within a shell script. The python script returns error codes in case of failures.
How do I handle these error codes in shell script and exit it when necessary?
|
2013/01/10
|
[
"https://Stackoverflow.com/questions/14259660",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/202325/"
] |
The exit code of the last command is contained in `$?`.
Use the pseudo code below:
```
python myPythonScript.py
ret=$?
if [ $ret -ne 0 ]; then
#Handle failure
#exit if required
fi
```
|
You mean [the `$?` variable](http://tldp.org/LDP/abs/html/exit-status.html)?
```
$ python -c 'import foobar' > /dev/null
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named foobar
$ echo $?
1
$ python -c 'import this' > /dev/null
$ echo $?
0
```
| 5,384
|
21,517,740
|
I am new to Python, having come mainly from Java programming.
I am currently pondering how classes in Python are instantiated.
I understand that `__init__()` is like the constructor in Java. However, sometimes Python classes do not have an `__init__()` method, in which case I assume there is a default constructor, just like in Java?
Another thing that makes the transition from Java to Python slightly difficult is that in Java you have to define all the instance fields of the class with their type and sometimes an initial value. In Python all of this just seems to disappear, and developers can just define new fields on the fly.
For example I have come across a program like so:
```
class A(Command.UICommand):
FIELDS = [
Field( 'runTimeStepSummary', BOOL_TYPE)
]
def __init__(self, runTimeStepSummary=False):
self.runTimeStepSummary = runTimeStepSummary
"""Other methods"""
def execute(self, cont, result):
self.timeStepSummaries = {}
""" other code"""
```
The thing that confuses (and slightly irritates) me is that this A class does not have a field called timeStepSummaries, yet a developer in the middle of a method can just define a new field? Or is my understanding incorrect?
So to be clear, my question is in Python can we dynamically define new fields to a class during runtime like in this example or is this timeStepSummaries variable an instance of a java like private variable?
EDIT: I am using python 2.7
|
2014/02/02
|
[
"https://Stackoverflow.com/questions/21517740",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2020869/"
] |
>
> I understand that `__init__()` is like the constructor in Java.
>
>
>
To be more precise, in Python `__new__` is the constructor method, `__init__` is the initializer. When you do `SomeClass('foo', bar='baz')`, the `type.__call__` method basically does:
```
def __call__(cls, *args, **kwargs):
    instance = cls.__new__(cls, *args, **kwargs)
instance.__init__(*args, **kwargs)
return instance
```
Generally, most classes will define an `__init__` if necessary, while `__new__` is more commonly used for immutable objects.
>
> However, sometimes Python classes do not have an `__init__()` method, in which case I assume there is a default constructor just like in Java?
>
>
>
I'm not sure about old-style classes, but this is the case for new-style ones:
```
>>> object.__init__
<slot wrapper '__init__' of 'object' objects>
```
If no explicit `__init__` is defined, the default will be called.
>
> So to be clear, my question is in Python can we dynamically define new fields to a class during runtime like in this example
>
>
>
Yes.
```
>>> class A(object):
... def __init__(self):
... self.one_attribute = 'one'
... def add_attr(self):
... self.new_attribute = 'new'
...
>>> a = A()
>>> a.one_attribute
'one'
>>> a.new_attribute
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'A' object has no attribute 'new_attribute'
>>> a.add_attr()
>>> a.new_attribute
'new'
```
Attributes can be added to an instance at any time:
```
>>> a.third_attribute = 'three'
>>> a.third_attribute
'three'
```
However, it's possible to restrict the instance attributes that can be added through the class attribute `__slots__`:
```
>>> class B(object):
... __slots__ = ['only_one_attribute']
... def __init__(self):
... self.only_one_attribute = 'one'
... def add_attr(self):
... self.another_attribute = 'two'
...
>>> b = B()
>>> b.add_attr()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 6, in add_attr
AttributeError: 'B' object has no attribute 'another_attribute'
```
(It's probably important to note that `__slots__` is primarily intended as a *memory optimization* - by not requiring an object have a dictionary for storing attributes - rather than as a form of run-time modification prevention.)
|
This answer pertains to new-style Python classes, which subclass `object`. New-style classes were added in 2.2, and they're the only kind of class available in PY3.
```
>>> print object.__doc__
The most base type
```
The class itself is an instance of a metaclass, which is usually `type`:
```
>>> print type.__doc__
type(object) -> the object's type
type(name, bases, dict) -> a new type
```
Per the above docstring, you can instantiate the metaclass directly to create a class:
```
>>> Test = type('Test', (object,), {'__doc__': 'Test class'})
>>> isinstance(Test, type)
True
>>> issubclass(Test, object)
True
>>> print Test.__doc__
Test class
```
Calling a class is handled by the metaclass `__call__` method, e.g. `type.__call__`. This in turn calls the class `__new__` constructor (typically inherited) with the call arguments in order to create an instance. Then it calls `__init__`, which may set instance attributes.
Most objects have a `__dict__` that allows setting and deleting attributes dynamically, such as `self.value = 10` or `del self.value`. It's generally bad form to modify an object's `__dict__` directly, and actually disallowed for a class (i.e. a class dict is wrapped to disable direct modification). If you need to access an attribute dynamically, use the [built-in functions](http://docs.python.org/2/library/functions.html#built-in-functions) `getattr`, `setattr`, and `delattr`.
The data model defines the following special methods for [customizing attribute access](http://docs.python.org/2/reference/datamodel.html#customizing-attribute-access): `__getattribute__`, `__getattr__`, `__setattr__`, and `__delattr__`. A class can also define the descriptor protocol methods `__get__`, `__set__`, and `__delete__` to determine how its instances behave as attributes. Refer to the [descriptor guide](http://docs.python.org/2/howto/descriptor.html).
When looking up an attribute, `object.__getattribute__` first searches the object's class and base classes using the [C3 method resolution order](http://www.python.org/download/releases/2.3/mro/) of the class:
```
>>> Test.__mro__
(<class '__main__.Test'>, <type 'object'>)
```
Note that a data descriptor defined in the class (e.g. a `property` or a `member` for a slot) takes precedence over the instance dict. On the other hand, a non-data descriptor (e.g. a function) or a non-descriptor class attribute can be shadowed by an instance attribute. For example:
```
>>> Test.x = property(lambda self: 10)
>>> inspect.isdatadescriptor(Test.x)
True
>>> t = Test()
>>> t.x
10
>>> t.__dict__['x'] = 0
>>> t.__dict__
{'x': 0}
>>> t.x
10
>>> Test.y = 'class string'
>>> inspect.isdatadescriptor(Test.y)
False
>>> t.y = 'instance string'
>>> t.y
'instance string'
```
Use [`super`](http://docs.python.org/2/library/functions.html#super) to proxy attribute access for the next class in the method resolution order. For example:
```
>>> class Test2(Test):
... x = property(lambda self: 20)
...
>>> t2 = Test2()
>>> t2.x
20
>>> super(Test2, t2).x
10
```
| 5,386
|
56,760,023
|
I am trying to use a PyTorch library, SparseConvNet (<https://github.com/facebookresearch/SparseConvNet>), in Google Colaboratory. In order to install it properly, you need to first install Conda, and then use Conda to install the SparseConvNet package. Here is the code I am using (following the instructions from the scn readme file):
```
!wget -c https://repo.continuum.io/archive/Anaconda3-5.1.0-Linux-x86_64.sh
!chmod +x Anaconda3-5.1.0-Linux-x86_64.sh
!bash ./Anaconda3-5.1.0-Linux-x86_64.sh -b -f -p /usr/local
import sys
sys.path.append('/usr/local/lib/python3.6/site-packages/')
!conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
!conda install google-sparsehash -c bioconda
!conda install -c anaconda pillow
!git clone https://github.com/facebookresearch/SparseConvNet.git
!cd SparseConvNet/
!bash develop.sh
```
When I run this it works and I can successfully import the sparseconvnet package, but I need to do it every time I enter the notebook or restart the runtime, and it takes a lot of time. Is it possible to install these packages permanently?
There is one similar question, and the answer suggests that I should install it
on my drive, but I don't know how to do that using conda.
Thanks!
|
2019/06/25
|
[
"https://Stackoverflow.com/questions/56760023",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11627002/"
] |
You can specify the directory for conda to install to using
```
conda install -p path_to_your_dir
```
So, you can mount your Google Drive and conda install there to make it permanent.
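A hedged sketch of what that might look like in a Colab cell (the Drive paths and the chosen package are assumptions, not from the answer):
```
from google.colab import drive
drive.mount('/content/drive')

# install into a directory on Drive so packages survive runtime restarts
!conda install -y -p /content/drive/MyDrive/conda-pkgs google-sparsehash -c bioconda

# in later sessions, point Python at the persisted site-packages
import sys
sys.path.append('/content/drive/MyDrive/conda-pkgs/lib/python3.6/site-packages')
```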
|
The whole environment in which Google Colaboratory runs your notebooks is not permanent; that is one of its premises. If you need a persistent environment, consider running Jupyter directly on a Google Cloud Compute Engine VM; they have pre-built images with everything configured [here](https://cloud.google.com/deep-learning-vm/docs/pytorch_start_instance), or [Google Cloud Datalab](https://cloud.google.com/datalab/) (which runs on a GCE VM, but is managed).
| 5,389
|
58,131,697
|
While reading a file in Python, I was wondering how to get the next `n` lines when we encounter a line that meets my condition.
Say there is a file like this
```
mangoes:
1 2 3 4
5 6 7 8
8 9 0 7
7 6 8 0
apples:
1 2 3 4
8 9 0 9
```
Now whenever we find a line starting with mangoes, I want to be able to read all the next 4 lines.
I was able to find out how to read the next immediate line, but not the next `n` lines:
```
if (line.startswith("mangoes:")):
print(next(ifile)) #where ifile is the input file being iterated over
```
|
2019/09/27
|
[
"https://Stackoverflow.com/questions/58131697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/886357/"
] |
Just repeat what you did:
```
if (line.startswith("mangoes:")):
for i in range(n):
print(next(ifile))
```
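An equivalent sketch using `itertools.islice`, which grabs the next `n` lines from the same file iterator (the filename is a placeholder):
```
from itertools import islice

n = 4
with open('fruits.txt') as ifile:
    for line in ifile:
        if line.startswith("mangoes:"):
            for next_line in islice(ifile, n):
                print(next_line, end='')
```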
|
Unless it's a huge file and you don't want to read all lines into memory at once, you could do something like this:
```py
n = 4
with open(fn) as f:
lines = f.readlines()
for idx, ln in enumerate(lines):
if ln.startswith("mangoes"):
break
mangoes = lines[idx:idx+n]
```
This gives you a list of `n` lines, including the `mangoes` title line. If you sliced from `idx + 1` instead, you'd skip the title.
| 5,391
|
27,528,566
|
I am trying to return a Python dictionary to the view with AJAX, reading from a JSON file, but so far I am only getting `[object Object],[object Object]`...
and if I inspect the network traffic, I can indeed see the correct data.
So here is what my code looks like. I have a class and a method which, based on the selected ID (request argument), will print specific data. It's getting the data from a Python dictionary. The problem is not here, I have already tested it, but just in case I will include it.
# method to create the dictionary - just in case #
```
def getCourselist_byClass(self, classid):
"""
Getting the courselist by the class id, joining the two tables.
Will only get data if both of them exist in their main tables.
Returning as a list.
"""
connection = db.session.connection()
querylist = []
raw_sql = text("""
SELECT
course.course_id,
course.course_name
FROM
course
WHERE
EXISTS(
SELECT 1
FROM
class_course_identifier
WHERE
course.course_id = class_course_identifier.course_id
AND EXISTS(
SELECT 1
FROM
class
WHERE
class_course_identifier.class_id = class.class_id
AND class.class_id = :classid
)
)""")
query = connection.engine.execute(raw_sql, {'classid': classid})
for column in query:
dict = {
'course_id' : column['course_id'],
'course_name' : column['course_name']
}
querylist.append(dict)
return querylist
```
my jsonify route method
=======================
```
@main.route('/task/create_test')
def get_courselist():
#objects
course = CourseClass()
class_id = request.args.get('a', type=int)
#methods
results = course.getCourselist_byClass(class_id)
return jsonify(result=results)
```
HTML
====
and here is how the input field and where it should link the data looks like.
```
<input type="text" size="5" name="a">
<span id="result">?</span>
<p><a href="javascript:void();" id="link">click me</a>
```
and then I am calling it like this
```
<script type=text/javascript>
$(function() {
$('a#link').bind('click', function() {
$.getJSON("{{ url_for('main.get_courselist') }}", {
a: $('input[name="a"]').val()
}, function(data) {
$("#result").text(data.result);
});
return false;
});
});
</script>
```
But every time I enter an id number in the field, I am getting the correct data; it is just not formatted correctly. It is instead printed like `[object Object]`.
*source, followed this guide as inspiration: [flask ajax example](http://runnable.com/UiPhLHanceFYAAAP/how-to-perform-ajax-in-flask-for-python)*
|
2014/12/17
|
[
"https://Stackoverflow.com/questions/27528566",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1731280/"
] |
I think you're getting confused because you actually have two tables of
data linked by a common ID:
```
library(dplyr)
df <- tbl_df(df)
years <- df %>%
filter(attributes == "YR") %>%
select(id = ID, year = values)
years
#> Source: local data frame [6 x 2]
#>
#> id year
#> 1 1 2014
#> 2 2 2013
#> 3 3 2014
#> 4 4 2014
#> 5 5 2013
#> .. .. ...
authors <- df %>%
filter(attributes == "AU") %>%
select(id = ID, author = values)
authors
#> Source: local data frame [16 x 2]
#>
#> id author
#> 1 1 AAA
#> 2 1 BBB
#> 3 2 CCC
#> 4 2 DDD
#> 5 2 EEE
#> .. .. ...
```
Once you have the data in this form, it's easy to answer the questions
you're interested in:
1. Authors per paper:
```
n_authors <- authors %>%
group_by(id) %>%
summarise(n = n())
```
Or
```
n_authors <- authors %>% count(id)
```
2. Median authors per year:
```
n_authors %>%
left_join(years) %>%
group_by(year) %>%
summarise(median(n))
#> Joining by: "id"
#> Source: local data frame [2 x 2]
#>
#> year median(n)
#> 1 2013 5.0
#> 2 2014 1.5
```
|
I misunderstood the structure of your dataset initially. Thanks to the comments below I realize your data needs to be restructured.
```
# split the data out
df1 <- df[df$attributes == "AU",]
df2 <- df[df$attributes == "YR",]
# just keeping the columns with data as opposed to the label
df3 <- merge(df1, df2, by="ID")[,c(1,3,5)]
# set column names for clarification
colnames(df3) <- c("ID","author","year")
# get author counts
num.authors <- count(df3, vars=c("ID","year"))
ID year freq
1 1 2014 2
2 2 2013 5
3 3 2014 1
4 4 2014 2
5 5 2013 5
6 6 2014 1
summaryBy(freq ~ year, data = num.authors, FUN = list(median))
year freq.median
1 2013 5.0
2 2014 1.5
```
The nice thing about `summaryBy` is that you can add whichever functions you like to the list and you will get another column containing each metric (e.g. mean, sd, etc.).
| 5,393
|
56,924,174
|
I'm importing files from the following folder inside a Python script:
```
Mask_RCNN
-mrcnn
-config.py
-model.py
-__init__.py
-utils.py
-visualize.py
```
I'm using the following imports:
These work OK:
`from Mask_RCNN.mrcnn.config import Config`
`from Mask_RCNN.mrcnn import utils`
These give me an error:
```
from Mask_RCNN.mrcnn import visualize
import mrcnn.model as modellib
```
Error:
```
ImportError: No module named 'mrcnn'
```
How do I import these properly?
|
2019/07/07
|
[
"https://Stackoverflow.com/questions/56924174",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10028453/"
] |
You get an error for the 2nd import, where you omit `Mask_RCNN` from the package name.
Try changing the lines to:
```
from Mask_RCNN.mrcnn import visualize
import Mask_RCNN.mrcnn.model as modellib
```
|
Use these lines before importing the libraries:
```
import sys
sys.path.append("Mask_RCNN/")
```
| 5,396
|
25,012,210
|
Steps I followed to build WebRTC for Android in an Ubuntu 13.10 environment.
Check out the code:
```
gclient config https://webrtc.googlecode.com/svn/trunk
echo "target_os = ['android', 'unix']" >> .gclient
gclient sync --nohooks
cd trunk
source ./build/android/envsetup.sh
export GYP_DEFINES="build_with_libjingle=1 build_with_chromium=0 libjingle_java=1 OS=android $GYP_DEFINES"
gclient runhooks
```
I'm getting this error:
```
gyp: /home/joss/Desarrollo/Glass/GDK/librerias/webrtc/trunk/third_party/boringssl/boringssl.gyp not found (cwd: /home/joss/Desarrollo/Glass/GDK/librerias/webrtc)
Error: Command /usr/bin/python trunk/webrtc/build/gyp_webrtc -Dextra_gyp_flag=0 returned non-zero exit status 1 in /home/joss/Desarrollo/Glass/GDK/librerias/webrtc
```
If I remove `"OS=android"` from `GYP_DEFINES`, the command `gclient runhooks` works, but if I try to use the generated library `libjingle_peerconnection_so.so` after the ninja build, I get the following error on Android:
```
dlopen("/data/app-lib/com.mundoglass.glassrtc-1/libjingle_peerconnection_so.so") failed: dlopen failed: "/data/app-lib/com.mundoglass.glassrtc-1/libjingle_peerconnection_so.so" not 32-bit: 2
```
Please let me know if I'm doing any step wrong. I'm not sure if I have to use `"OS=android"` to generate the Android libraries.
|
2014/07/29
|
[
"https://Stackoverflow.com/questions/25012210",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3580291/"
] |
I don't think you're doing anything wrong.
Your error is mentioned [here](https://code.google.com/p/webrtc/issues/detail?id=3622) and I guess it will be fixed:
```
"Yes, chrome has moved to BoringSSL from OpenSSL, which causes some problems in WebRTC Android. We are looking into it."
```
You can try an older revision; I tried revision r6783 as suggested [here](https://code.google.com/p/webrtc/issues/detail?id=3641) and it works fine.
|
Follow this [example](http://simonguest.com/2013/08/06/building-a-webrtc-client-for-android/); I have tried it and it works successfully.
The only change needed is that the link provided in this example for the gclient config command is the older one; follow your link instead: gclient config <http://webrtc.googlecode.com/svn/trunk>
Also make sure that you have Oracle JDK 6; other versions create issues while following the steps to get the native code.
Good luck.
| 5,397
|
20,712,314
|
I have a simple python code
```
path1 = "//path1/"
path2 = "//path2/"
write_html = """
<form name="input" action="copy_file.php" method="get">
"""
Outfile.write(write_html)
```
Now copy_file.php copies files from one folder to another. I want the Python path1 and path2 variable values to be passed to the PHP script. **How can I do that?** Also, instead of calling a PHP script, how can I place the PHP code in the action attribute?
PHP code
```
<?php
$file = $argv[1];
$newfile = $argv[2];
if (!copy($file, $newfile)) {
    echo "failed to copy $file...\n";
}
?>
```
|
2013/12/20
|
[
"https://Stackoverflow.com/questions/20712314",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1538688/"
] |
```
write_html = """
<form name="input" action="copy_file.php" method="get">
  <input type="hidden" name="path1" value="{0}" />
  <input type="hidden" name="path2" value="{1}" />
  <input type="button" name="button" value="Copy" onclick="copyFile('{0}', '{1}')" />
  <script> function copyFile(path1, path2) {{ /* ... */ }} </script>
</form>
""".format(path1, path2)
```
Then in `copy_file.php` add
```
$path1 = $_GET["path1"];
$path2 = $_GET["path2"];
```
|
>
> I want the python path1 and path2 variable values to be passed to php script.
>
>
>
Doable:
```
write_html = """
<form name="input" action="copy_file.php" method="get">
<input type="hidden" name="path1" value="%s" />
<input type="hidden" name="path2" value="%s" />
""" % (path1, path2)
```
My python is a bit rusty, but that should work.
>
> instead of calling a php script, how can I place the php code in action attribute.
>
>
>
What? No. Are you insane?
**Edit:** wait, are you just trying to make an HTTP request against a PHP script from within Python?
| 5,398
|
55,494,430
|
Please suggest how to create a dictionary from the following file contents:
```
2,20190327.1.csv.gz
3,20190327.23.csv.gz
4,20190327.21302.csv.gz
2,20190327.24562.csv.gz
```
My required output is:
```
{2:20190327.1.csv.gz, 3:20190327.23.csv.gz, 4:20190327.21302.csv.gz, 2:20190327.24562.csv.gz}
```
I am new to Python and I tried the code below, but it is not working. Please suggest.
```
from __future__ import print_function
import csv
file = '/tmp/.fileA'
with open(file) as fh:
rd = csv.DictReader(fh, delimiter=',')
for row in rd:
print(row)
```
|
2019/04/03
|
[
"https://Stackoverflow.com/questions/55494430",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3619226/"
] |
You could use
`$"color".isin("GREEN","RED","YELLOW")`
Code example:
```
val df2 = df.withColumn("Ind",
when($"color".isin("GREEN","RED","YELLOW"), 1).otherwise(0))
df2.show(false)
```
Outputs:
```
+------+---+
| color|Ind|
+------+---+
| RED| 1|
| GREEN| 1|
|YELLOW| 1|
| PINK| 0|
+------+---+
```
A quick search revealed a similar question already answered on Stack Overflow: [Spark SQL - IN clause](https://stackoverflow.com/a/40218776/2613150)
|
You should be able to check the column against a list with:
```
val result = df.withColumn("Ind",
when($"color".in("GREEN", "RED", "YELLOW"), 1).otherwise(0))
```
| 5,400
|
3,902,608
|
I'm pretty new to Python and am trying to learn the ropes, and I decided a fun way to learn would be to make a cheesy MUD-type game. My goal for the piece of code I'm going to show is to have three randomly selected enemies (from a list) presented for the "hero" to fight. The issue I am running into is that Python is copying from list to list by reference, not value (I think), as the code below shows...
```
import random
#ene = [HP,MAXHP,loDMG,hiDMG]
enemies = [[8,8,1,5,"Ene1"],[9,9,3,6,"Ene2"],[15,15,2,8,"Ene3"]]
genENE = []
#skews # of ene's to be gen, favoring 1,2, and 3
eneAppears = 3
for i in range(0,eneAppears):
    num = random.randint(5,5)
    if num <= 5:
        genENE.insert(i,enemies[0])
    elif num >= 6 and num <=8:
        genENE.insert(i,enemies[1])
    else:
        genENE.insert(i,enemies[2])
#genENE = [[8,8,1,5,"Ene1"],[9,9,3,6,"Ene2"],[15,15,2,8,"Ene3"]]
for i in range(0,eneAppears):
    if eneAppears == 1:
        print "A " + genENE[0][4] + " appears!"
    else:
        while i < eneAppears:
            print "A " + genENE[i][4] + " appears!"
            i = eneAppears
genENE[1][0] = genENE[1][0] - 1
print genENE
```
Basically I have a "master" list of enemies that I use to copy whatever one I want over into an index of another list during my first "for" loop. Normally the randomly generated numbers are 1 through 10, but the problem I'm having is more easily shown by forcing the same enemy to be inserted into my "copy" list several times. Basically when I try to subtract a value of an enemy with the same name in my "copy" list, they all subtract that value (see last two lines of code). I've done a lot of searching and cannot find a way to *copy* just a single index from one list to another. Any suggestions? Thanks!
|
2010/10/10
|
[
"https://Stackoverflow.com/questions/3902608",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/471561/"
] |
Change
```
genENE.insert(i,enemies[0])
```
to
```
genENE.insert(i,enemies[0][:])
```
This will force the list to be copied rather than referenced. Also, I would use append rather than insert in this instance.
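To illustrate the difference, a standalone sketch (Python 2, matching the question):
```
enemies = [[8, 8, 1, 5, "Ene1"]]

by_ref = enemies[0]     # both names point at the same inner list
by_val = enemies[0][:]  # slice copy: a new, independent list

by_ref[0] -= 1
print enemies[0][0]     # 7 -- mutating the reference changed the master list
by_val[0] -= 1
print enemies[0][0]     # still 7 -- mutating the copy did not
```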
|
`they all subtract that value` What do you mean by "they all"? If you mean both lists, your problem is that you're only referencing the list, NOT creating a second one.
| 5,401
|
58,931,845
|
My Airflow DAGs mainly consist of PythonOperators, and I would like to use my Python IDE's debug tools to develop Python "inside" Airflow. I rely on Airflow's database connectors, which I think would be ugly to move "out" of Airflow for development.
I have been using Airflow for a bit, and have so far only achieved development and debugging via the CLI. Which is starting to get tiresome.
Does anyone know of a nice way to set up PyCharm, or another IDE, that enables me to use the IDE's debug toolset when running `airflow test ..`?
|
2019/11/19
|
[
"https://Stackoverflow.com/questions/58931845",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5152989/"
] |
It might be somewhat of a hack, but I found one way to set up PyCharm:
* Use `which airflow` to locate the airflow script in the local environment - which in my case is just a pipenv
* Add a new run configuration in PyCharm
* Set the python "Script path" to said airflow script
* Set Parameters to test a task: `test dag_x task_y 2019-11-19`
This has only been validated with the **SequentialExecutor**, which might be important.
It sucks that I have to change test parameters in the run configuration for every new debug/development task, but so far this is pretty useful for setting breakpoints and stepping through code while "inside" the local airflow environment.
|
I debug `airflow test dag_id task_id`, run on a vagrant machine, using PyCharm. You should be able to use the same method, even if you're running airflow directly on localhost.
[Pycharm's documentation on this subject](https://www.jetbrains.com/help/pycharm/remote-debugging-with-product.html#remote-debug-config) should show you how to create an appropriate "Python Remote Debug" configuration. When you run this config, it waits to be contacted by the bit of code that you've added someplace (for example in one of your operators). And then you can debug as normal, with breakpoints set in Pycharm.
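The "bit of code" is typically a `settrace` call from the `pydevd-pycharm` package; a sketch (the host and port are assumptions that must match your debug configuration):
```
import pydevd_pycharm

# connect back to the PyCharm debug server waiting on the host machine
pydevd_pycharm.settrace('localhost', port=9999,
                        stdoutToServer=True, stderrToServer=True)
```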
| 5,402
|
55,515,401
|
I have a Python script that dynamically creates tasks (Airflow operators) and a DAG based on a JSON file that maps every desired option.
The script also has a dedicated function to create each operator needed.
Sometimes I want to activate some conditional options based on the mapping... for example, in a BigQueryOperator I sometimes need a time_partitioning and a destination_table, but I don't want to set them on every mapped task.
I've tried to read the documentation about BaseOperator, but I can't see any Java-like set method.
Here is the function that returns the operator, for example the BigQuery one:
```py
def bqOperator(mappedTask):
try:
return BigQueryOperator(
task_id=mappedTask.get('task_id'),
sql=mappedTask.get('sql'),
##destination_dataset_table=project+'.'+dataset+'.'+mappedTask.get('target'),
write_disposition=mappedTask.get('write_disposition'),
allow_large_results=mappedTask.get('allow_large_results'),
##time_partitioning=mappedTask.get('time_partitioning'),
use_legacy_sql=mappedTask.get('use_legacy_sql'),
dag=dag,
)
except Exception as e:
error = 'Error creating BigQueryOperator for task : ' + mappedTask.get('task_id')
logger.error(error)
raise Exception(error)
```
mappedTask inside json file without partitioning
```
{
"task_id": "TEST_TASK_ID",
"sql": "some fancy query",
"type": "bqOperator",
"dependencies": [],
"write_disposition": "WRITE_APPEND",
"allow_large_results": true,
"createDisposition": "CREATE_IF_NEEDED",
"use_legacy_sql": false
},
```
mappedTask inside json file with partitioning
```
{
"task_id": "TEST_TASK_ID_PARTITION",
"sql": "some fancy query",
"type": "bqOperator",
"dependencies": [],
"write_disposition": "WRITE_APPEND",
"allow_large_results": true,
"createDisposition": "CREATE_IF_NEEDED",
"use_legacy_sql": false,
"targetTable": "TARGET_TABLE",
"time_partitioning": {
"field": "DATE_TO_PART",
"type": "DAY"
}
},
```
|
2019/04/04
|
[
"https://Stackoverflow.com/questions/55515401",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4626682/"
] |
Change `bqOperator` as below to handle that case; basically it will pass None when it doesn't find that field in your JSON:
```
def bqOperator(mappedTask):
try:
return BigQueryOperator(
task_id=mappedTask.get('task_id'),
sql=mappedTask.get('sql'),
destination_dataset_table="{}.{}.{}".format(project, dataset, mappedTask.get('target')) if mappedTask.get('target', None) else None,
write_disposition=mappedTask.get('write_disposition'),
allow_large_results=mappedTask.get('allow_large_results'),
time_partitioning=mappedTask.get('time_partitioning', None),
use_legacy_sql=mappedTask.get('use_legacy_sql'),
dag=dag,
)
except Exception as e:
error = 'Error creating BigQueryOperator for task : ' + mappedTask.get('task_id')
logger.error(error)
raise Exception(error)
```
|
There are no private methods or fields in Python, so you can directly set and get fields, like:
```py
op.use_legacy_sql = True
```
That said, I strongly discourage doing this, as it is a real code smell. Instead you could modify your factory function to apply some defaults to your JSON data.
Or even better, apply defaults to the JSON itself, then save and use the updated JSON. This will make things more predictable.
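A minimal sketch of applying such defaults before building operators (the key names are taken from the question's JSON; the helper name is made up):
```
DEFAULTS = {"time_partitioning": None, "targetTable": None}

def with_defaults(mapped_task):
    # give every task dict the same shape by filling in missing optional keys
    merged = dict(DEFAULTS)
    merged.update(mapped_task)
    return merged
```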
| 5,407
|
63,979,298
|
So, I have created an HTML page and it gets content from Python. I am able to get the text from the Python program in view.py, but I am unable to set it as HTML tags. I have set up the CSS and JS in the HTML file, but only this problem is arising. Is there a way out?
[screenshot 1](https://i.stack.imgur.com/951Bj.png)
[screenshot 2](https://i.stack.imgur.com/ye3ru.png)
[screenshot 3](https://i.stack.imgur.com/FFx2n.png)
**IN SHORT**: How do I tell Django that it is HTML and it should simply insert it into the page (not treat it as text)?
*Details*
Python 3.8.2
Django 2.2.7
Edit: I hadn't read the official documentation at that point.
|
2020/09/20
|
[
"https://Stackoverflow.com/questions/63979298",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13935716/"
] |
It looks like all standard types (button, image, text, etc.) are intercepted by ToolbarItem and converted into an appropriate internal representation, but a custom view (e.g. shape-based) is not. See below for a demo of a possible approach.
Demo prepared & tested with Xcode 12 / iOS 14.
[animated demo](https://i.stack.imgur.com/o0ZR8.gif)
```
ToolbarItem(placement: .bottomBar) {
ShapeButton(shape: Rectangle(), color: .red) {
print(">> works")
}
}
```
and simple custom Shape-based button
```
struct ShapeButton<S:Shape>: View {
var shape: S
var color: Color
var action: () -> ()
@GestureState private var tapped = false
var body: some View {
shape
.fill(color).opacity(tapped ? 0.4 : 1)
.animation(.linear(duration: 0.15), value: tapped)
.frame(width: 18, height: 18)
.gesture(DragGesture(minimumDistance: 0)
.updating($tapped) { value, state, _ in
state = true
}
.onEnded { _ in
action()
})
}
}
```
|
If you drop into UIKit, it works for me.
```
struct ButtonRepresentation: UIViewRepresentable {
let sfSymbolName: String
let titleColor: UIColor
let action: () -> ()
func makeUIView(context: Context) -> UIButton {
let b = UIButton()
let largeConfig = UIImage.SymbolConfiguration(scale: .large)
let image = UIImage(systemName: sfSymbolName, withConfiguration: largeConfig)
b.setImage(image, for: .normal)
b.tintColor = titleColor
b.addTarget(context.coordinator, action: #selector(context.coordinator.didTapButton(_:)), for: .touchUpInside)
return b
}
func updateUIView(_ uiView: UIButton, context: Context) {}
func makeCoordinator() -> Coordinator {
return Coordinator(action: action)
}
typealias UIViewType = UIButton
class Coordinator {
let action: () -> ()
@objc
func didTapButton(_ sender: UIButton) {
self.action()
}
init(action: @escaping () -> ()) {
self.action = action
}
}
}
```
Then you can add separate ButtonRepresentations with different colors.
```
.toolbar {
    ToolbarItem(placement: .cancellationAction) {
        ButtonRepresentation(sfSymbolName: closeIcon, titleColor: closeIconColor) {
            presentationMode.wrappedValue.dismiss()
        }
    }
}
```
| 5,408
|
7,451,347
|
I'm programming an iOS application which needs to communicate with a Python app in a very efficient way through UDP sockets.
In the middle I have a Bonjour service which serves as a bridge for my iOS app and the host Python app to communicate.
I'm building my own protocol, which is a simple C structure. The code that I had was already packing strings into NSKeyedArchiver entities, which in turn would be packed into NSData and sent. On the other side there is an NSKeyedUnarchiver.
The problem is that it can't understand the C struct I'm sending. Is there a way of putting a C structure inside an NSKeyedArchiver? How should I modify my middle service to resolve this?
|
2011/09/16
|
[
"https://Stackoverflow.com/questions/7451347",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/933104/"
] |
Yes, it is possible.
Read the [Archives and Serializations Programming Guide](http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/Archiving/Archiving.html), everything is explained here with samples, including this case, especially the part [Encoding and decoding C Data Types](http://developer.apple.com/library/mac/documentation/Cocoa/Conceptual/Archiving/Articles/codingctypes.html#//apple_ref/doc/uid/20001294-BBCBDHBI) :)
|
Rather than using NSData - if the struct contains simple data items rather than object pointers - you can use NSValue:
```
NSValue *valueFromStruct = [[NSValue value:&aStruct withObjCType:@encode(YourStructType)] retain];
```
As NSValue conforms to the NSCoding protocol, you can use the methods that you want to use.
| 5,417
|
68,382,302
|
I am using a MacOS 10.15 and Python version 3.7.7
I wanted to upgrade pip so I ran `pip install --upgrade pip`, but it turns out my pip was gone (it shows `ImportError: No module named pip` when I want to use `pip install ...`)
I tried several methods like `python3 -m ensurepip`, but it returns
```
Looking in links: /var/folders/sc/f0txnv0j71l2mvss7psclh_h0000gn/T/tmpchwk90o3
Requirement already satisfied: setuptools in ./anaconda3/lib/python3.7/site-packages (49.6.0.post20200814)
Requirement already satisfied: pip in ./anaconda3/lib/python3.7/site-packages (20.2.2)
```
and `pip install` still does not work and returns the same error message.
I also tried `easy_install pip` and other methods but pip still does not work.
Can anyone help me with this?
---
Update:
Using the method from @cshelly, it works on my computer!
|
2021/07/14
|
[
"https://Stackoverflow.com/questions/68382302",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14438351/"
] |
Try the following:
```sh
python3 -m pip install --upgrade pip
```
The `-m` flag will run a library module as a script.
|
The pip used by python3 is called pip3. Since you're using python3, you want to do `pip3 install --upgrade pip`.
| 5,418
|
62,451,944
|
```
class TempClass():
def __init__(self,*args):
for i in range(len(args)):
self.number1=args[0]
self.number2=args[1]
print(self.number1,self.number2)
temp1=TempClass(10,20)
```
output: 10 20
```
class TempClass2():
def __init__(self,*args):
for i in range(len(args)):
self.number1=args[0]
self.number2=args[1]
print(self.number1)
print(self.number1[0],self.number2)
temp2=TempClass2([10,20],40)
```
output : [10, 20]
```
class TempClass3():
def __init__(self,*args):
for i in range(len(args)):
self.number1[0]=args[0]
self.number1[1]=args[1]
print(self.number1)
temp3=TempClass3(10,20)
```
output: AttributeError: 'TempClass3' object has no attribute 'number1'
My question is: in TempClass3 I tried to create a list by passing parameters to the constructor. Why is it not possible?
Note: I tried doing this while learning OOP concepts in Python. Please let me know if my question itself is meaningless.
|
2020/06/18
|
[
"https://Stackoverflow.com/questions/62451944",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5985980/"
] |
You first need to initialize `self.number1` with `[None] * 2` (or similar) before using it.
However, I would use `args` directly:
```
class TempClass3():
def __init__(self,*args):
self.number1 = list(args)
print(self.number1)
temp3=TempClass3(10,20)
```
|
When you call `self.number1[0] = args[0]`, you are asking Python to first open the list `self.number1`, which doesn't exist, and then find an element in this non-existent list.
If it doesn't exist but you assign it a value, as in `self.number1 = args[0]`, Python will create `self.number1` and define it as `args[0]`.
You could fix this by adding the line `self.number1 = [0, 0]` at the start of `TempClass3.__init__` so that the list already exists, as in the sketch below.
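For illustration, a minimal sketch of that fix (the `[0, 0]` placeholder length assumes exactly two arguments are passed):
```
class TempClass3():
    def __init__(self, *args):
        self.number1 = [0, 0]      # create the list first, so indexing works
        self.number1[0] = args[0]  # now this assignment has somewhere to go
        self.number1[1] = args[1]
        print(self.number1)

temp3 = TempClass3(10, 20)  # prints [10, 20]
```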
| 5,420
|
53,107,475
|
I am trying to install kdb for the Jupyter notebook. First I downloaded the 64-bit Windows version from <https://ondemand.kx.com/> and the licence attached to the email.
Then I opened the Windows command prompt and set QHOME and PATH with the following commands:
```
setx QHOME "C:\q"
setx PATH "%PATH%;C:\q\w64"
exit
```
I can run q properly in the Windows command prompt.
However, when I open the Anaconda3 prompt and try to run q by typing:
```
activate base
q
```
The error appears
```
python.exe: can't open file 'C:\Users\Cecil': [Errno 2] No such file or directory
```
My directory path in Anaconda is
```
(base) C:\Users\Cecil M>
```
And when I open the Jupyter notebook, the kernel is dead.
Is there any step missing here? I have downloaded the related packages, such as kx kdb, kx embedpy, kx jupyterq.
|
2018/11/01
|
[
"https://Stackoverflow.com/questions/53107475",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10105915/"
] |
You didn't give us the requirements for the individual filenames, but here is an example that uses a sequential number for each given filename.
```
count = 0
for item in all_news:
count += 1
filename = '{}.txt'.format(count)
with open(filename, 'w') as f_out:
        f_out.write('{}\n'.format(item))
```
|
You have to open a new file for each element of the list, and you will need a counter to ensure the filenames are distinct (or a second list).
```
counter=0
for item in all_news:
with open('your_file_'+str(counter)+'.txt', 'w') as f:
f.write("%s\n" % item)
counter = counter + 1
```
This will write each element to files like `your_file_0.txt`, `your_file_1.txt`, etc.
*(I'm just elaborating what ShadowRanger already commented above).*
| 5,422
|
33,340,749
|
I have registered on Google Developers Console, but my project is not a billed project. I followed the "Development Environment" and "Build and Run" steps described at <https://github.com/GoogleCloudPlatform/datalab/wiki/Development-Environment> and <https://github.com/GoogleCloudPlatform/datalab/wiki/Build-and-Run>.
But when I run code in a notebook deployed on my local Linux server, I run into the following error:
```
# Create and run a SQL query
bq.Query('SELECT * FROM [cloud-datalab-samples:httplogs.logs_20140615] LIMIT 3').results()
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input> in <module>()
      1 # Create and run a SQL query
----> 2 bq.Query('SELECT * FROM [cloud-datalab-samples:httplogs.logs_20140615] LIMIT 3').results()

/usr/local/lib/python2.7/dist-packages/gcp/bigquery/_query.pyc in results(self, use_cache)
    130     """
    131     if not use_cache or (self._results is None):
--> 132         self.execute(use_cache=use_cache)
    133     return self._results.results
    134

/usr/local/lib/python2.7/dist-packages/gcp/bigquery/_query.pyc in execute(self, table_name, table_mode, use_cache, priority, allow_large_results)
    343     """
    344     job = self.execute_async(table_name=table_name, table_mode=table_mode, use_cache=use_cache,
--> 345                              priority=priority, allow_large_results=allow_large_results)
    346     self._results = job.wait()
    347     return self._results

/usr/local/lib/python2.7/dist-packages/gcp/bigquery/_query.pyc in execute_async(self, table_name, table_mode, use_cache, priority, allow_large_results)
    307                                  allow_large_results=allow_large_results)
    308     except Exception as e:
--> 309         raise e
    310     if 'jobReference' not in query_result:
    311         raise Exception('Unexpected query response.')

Exception: Failed to send HTTP request.
```
Step by step, I found the place which throws the exception:
```
if headers is None:
    headers = {}
headers['user-agent'] = 'GoogleCloudDataLab/1.0'
# Add querystring to the URL if there are any arguments.
if args is not None:
qs = urllib.urlencode(args)
url = url + '?' + qs
# Setup method to POST if unspecified, and appropriate request headers
# if there is data to be sent within the request.
if data is not None:
if method is None:
method = 'POST'
if data != '':
# If there is a content type specified, use it (and the data) as-is.
# Otherwise, assume JSON, and serialize the data object.
if 'Content-Type' not in headers:
data = json.dumps(data)
headers['Content-Type'] = 'application/json'
headers['Content-Length'] = str(len(data))
else:
if method == 'POST':
headers['Content-Length'] = '0'
# If the method is still unset, i.e. it was unspecified, and there
# was no data to be POSTed, then default to GET request.
if method is None:
method = 'GET'
# Create an Http object to issue requests. Associate the credentials
# with it if specified to perform authorization.
#
# TODO(nikhilko):
# SSL cert validation seemingly fails, and workarounds are not amenable
# to implementing in library code. So configure the Http object to skip
# doing so, in the interim.
http = httplib2.Http()
http.disable_ssl_certificate_validation = True
if credentials is not None:
http = credentials.authorize(http)
try:
response, content = http.request(url,method=method,body=data,headers=headers)
if 200 <= response.status < 300:
if raw_response:
return content
return json.loads(content)
else:
raise RequestException(response.status, content)
except ValueError:
raise Exception('Failed to process HTTP response.')
except httplib2.HttpLib2Error:
raise Exception('Failed to send HTTP request.')
```
I wonder whether it is my configuration error or whether Cloud Datalab does not support deploying and running locally; that is to say, whether we cannot run code in notebooks on a local Datalab server.
Please give me some ideas. This question has bothered me for a week! Thank you!
|
2015/10/26
|
[
"https://Stackoverflow.com/questions/33340749",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5488148/"
] |
If you are looking to run the Datalab container locally instead of in Google Cloud, that is also possible, as described here:
<https://github.com/GoogleCloudPlatform/datalab/wiki/Build-and-Run>
However, that is a developer setup for building/changing Datalab code, and it is not currently geared towards a data scientist or developer looking to use Datalab as a tool; i.e. it is more complex to set up compared to just deploying Datalab to a billing-enabled cloud project. Even with a locally running container, you will likely want a Google Cloud project to run BigQuery queries etc.
|
If your project does not have billing enabled, you cannot run queries against BigQuery, which is what it looks like you are trying to do.
| 5,424
|
32,260,538
|
So i have this script in python.
It uses models from django to get some (to be precise: a lot of) data from database.
A quick 'summary' of what I want to achieve (it might not be that important, so you can as well get it just by looking at the code):
There are objects of A type.
For each A object there are related objects of B type (1 to many).
For each B object there are related objects of C type (1 to many).
For each C object there is one related object that is particularly interesting for me; let's call it D (1 to 1 relation).
For each A object in the database (not many) I need to get all B objects related to it and all the D objects related to it to create a summary of the A object. Each summary is a separate worksheet (I am using **openpyxl**).
The code I wrote is valid (meaning: it does what I want it to do), but there's a problem with garbage collection, so the process gets killed. I have tried not using prefetching, since time is not that much of a concern, but it doesn't really help.
Abstract code:
```
a_objects = A.objects.all()
wb = Workbook()
for a_object in a_objects:
ws = wb.create_sheet()
    ws.title = a_object.name
summary_dictionary = {}
<< 1 >>
b_objects = B.objects.filter(a_object=a_object)
for b_object in b_objects:
c_objects = C.objects.filter(b_object=b_object)
for c_object in c_objects:
            # Here I put a value in the dictionary, or alter it,
            # depending on whether c_object.d_object has unique fields for the current a_object
            # Key is a tuple of 3 floats (taken from d_object)
            # Value is an array of 3 small integers (between 0 and 100)
summary_dictionary = sorted(summary_dictionary.items(), key=operator.itemgetter(0))
for summary_item in summary_dictionary:
ws.append([summary_item[0][0], summary_item[0][1], summary_item[0][2], summary_item[1][0], summary_item[1][1], summary_item[1][2], sum(summary_item[1])])
wb.save("someFile.xlsx")
```
While theoretically the whole xlsx file could be huge (possibly over 1 GB, if all the d_object values were unique), I estimate it to be well under 100 MB even at the end of the script. There's about 650 MB of free memory in the system while the script is being executed.
There are about 80 A objects and the script is killed after 6 or 7 of them. I used "top" to monitor memory usage and didn't notice any memory being freed, which is weird, because, let's say:
the 3rd a_object had 1000 b_objects related to it and each b_object had 30 c_objects related to it, and
the 4th a_object had only 100 b_objects related to it and each b_object had only 2 c_objects related to it.
A lot of memory should be freed some time after << 1 >> in the 4th iteration, right?
My point is that I thought this program would run as long as the following can fit into memory:
- the whole summary
- a single set of all b_objects and their c_objects and d_objects for any a_object in the database.
What am I missing then?
|
2015/08/27
|
[
"https://Stackoverflow.com/questions/32260538",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3915216/"
] |
Alright, so my main problem was that I couldn't get the `_id` of the document I inserted without losing the ability to check whether it was updated/found or inserted. However, I learned that you can generate your own IDs.
```
id = mongoose.Types.ObjectId();
Chatrooms.findOneAndUpdate({Roomname: room.Roomname},{ $setOnInsert: {_id: id, status: true, userNum: 1}}, {new: true, upsert: true}, function(err, doc) {
if(err) console.log(err);
if(doc === null) {
// inserted document logic
// _id available for inserted document via id
} else if(doc.status) {
// found document logic
}
});
```
**Update**
Mongoose API v4.4.8
passRawResult: if true, passes the raw result from the MongoDB driver as the third callback parameter.
|
I'm afraid using **findOneAndUpdate** can't do what you want, because it doesn't have middleware and setters, as mentioned in the docs:
Although values are cast to their appropriate types when using the findAndModify helpers, the following are not applied:
* defaults
* Setters
* validators
* middleware
<http://mongoosejs.com/docs/api.html> (search for findOneAndUpdate)
If you want the document before the update and the document after the update, you can do it this way:
```
Model.findOne({ name: 'borne' }, function (err, doc) {
if (doc){
console.log(doc);//this is ur document before update
doc.name = 'jason borne';
doc.save(callback); // you can use your own callback to get the udpated doc
}
})
```
hope it helps you
| 5,426
|
26,975,539
|
I thought that the standalone PsychoPy install could coexist happily with a separately installed Python on the same PC, but I can't get it to, nor can I find any docs. (I'm using Windows 7.)
I have the latest standalone version installed and the shortcut to run it is
```
"D:\Program Files (x86)\PsychoPy2\pythonw.exe" "D:\Program Files (x86)\PsychoPy2\Lib\site-packages\PsychoPy-1.81.02-py2.7.egg\psychopy\app\psychopyApp.py"
```
This works fine if my system environment variables PYTHONHOME & PYTHONPATH aren't set, but I also use Python for other apps and need them set to point to the other version of Python I have installed natively. When these env vars are set, PsychoPy fails to load and gives no error messages at all.
Can anyone advise how I get them to play together nicely? (I thought it used to work last year; has something changed?)
I've tried a full uninstall of PsychoPy and freshly installed the latest standalone version v1.81.02.
|
2014/11/17
|
[
"https://Stackoverflow.com/questions/26975539",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/765827/"
] |
It is not a button documented by Qt. You can detect this by catching events and checking the event type:
<http://qt-project.org/doc/qt-5/qevent.html#Type-enum>
There are different types such as `QEvent::EnterWhatsThisMode`, `QEvent::WhatsThisClicked` and so on. I achieved something similar to what you are looking for using an event filter in the main window.
```
if(event->type() == QEvent::EnterWhatsThisMode)
qDebug() << "click";
```
I saw "click" when I clicked on `?` button.
|
Based on Chernobyl's answer, this is how I did it in Python (PySide):
```
def event(self, event):
if event.type() == QtCore.QEvent.EnterWhatsThisMode:
print "click"
return True
return QtGui.QDialog.event(self, event)
```
That is, you reimplement `event` to handle the case where the app enters 'WhatsThisMode'. Otherwise, you pass control back to the base class.
It almost works. The only wrinkle is that the mouse cursor is still turned into the 'Forbidden' shape. Based on [another post](https://stackoverflow.com/a/26978238/1886357), I got rid of that by adding:
```
QtGui.QWhatsThis.leaveWhatsThisMode()
```
as the line right before the print command in the previous snippet.
| 5,434
|
59,598,620
|
I am creating my first Django project. I have successfully installed Django version 2.1. When I created the project, it was successfully launched at the url 127.0.0.1:8000.
Then I ran the command **python manage.py startapp products**.
The products folder was successfully created in the project. Then, inside the products folder, I opened:
**views.py**
```
from django.http import HttpResponse
from django.shortcuts import render
def index(request):
return HttpResponse('Hello World')
```
then **products/urls**:
```
from django.urls import path
from . import views
urlpatterns = [
path('', views.index)
]
```
Inside the main project folder, I opened **urls.py** and I modified the code:
```
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/', admin.site.urls),
path('products/', include('products.urls'))
]
```
Then the url **<http://127.0.0.1:8000/products/>** would not open in the browser; the browser gives an "Unable to connect" error. I am using the PyCharm IDE. I will really appreciate your answer.
|
2020/01/05
|
[
"https://Stackoverflow.com/questions/59598620",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12655703/"
] |
In your **settings.py**, add the app to the **INSTALLED_APPS** list as:
```py
INSTALLED_APPS =
[
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'products' # <-- your product app
]
```
And modify the **ALLOWED_HOSTS** list as:
```py
ALLOWED_HOSTS = ['localhost', '127.0.0.1']
```
Then run the application:
`python manage.py runserver 127.0.0.1:8080`.
|
Check if you have run the command below:
`python manage.py runserver`
If you have run the above command and the error persists, try to run on the global address 0.0.0.0:
```
python manage.py runserver 0.0.0.0:8000
```
OR on a different port
```
python manage.py runserver 0.0.0.0:8001
```
| 5,437
|
46,368,459
|
I have different types of `ISO 8601` formatted date strings; using the `datetime` library, I want to obtain a `datetime` object from these strings.
Example of the input strings:
1. `2017-08-01` (1st august 2017)
2. `2017-09` (september of 2017)
3. `2017-W20` (20th week)
4. `2017-W37-2` (tuesday of 37th week)
I am able to obtain the 1st, 2nd and 4th examples, but for the 3rd, I get a traceback while trying.
I am using the datetime.datetime.strptime function for them in try-except blocks, as follows:
```
try :
d1 = datetime.datetime.strptime(date,'%Y-%m-%d')
except :
try :
d1 = datetime.datetime.strptime(date,'%Y-%m')
except :
try :
d1 = datetime.datetime.strptime(date,'%G-W%V')
except :
print('Not going through')
```
When I tried the 3rd try block in the terminal, here's the error I got:
```
>>> dstr
'2017-W38'
>>> dt.strptime(dstr,'%G-W%V')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\tushar.aggarwal\Desktop\Python\Python3632\lib\_strptime.py", line 565, in _strptime_datetime
tt, fraction = _strptime(data_string, format)
File "C:\Users\tushar.aggarwal\Desktop\Python\Python3632\lib\_strptime.py", line 483, in _strptime
raise ValueError("ISO year directive '%G' must be used with "
ValueError: ISO year directive '%G' must be used with the ISO week directive '%V' and a weekday directive ('%A', '%a', '%w', or '%u').
```
And this is how I got the 4th case working:
```
>>> dstr
'2017-W38-2'
>>> dt.strptime(dstr,'%G-W%V-%u')
datetime.datetime(2017, 9, 19, 0, 0)
```
Here is the reference for my code : [strptime documentation](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior)
There are many questions on SO regarding date parsing from `ISO 8601` formats, but I couldn't find one addressing my issue. Moreover, the questions involved are very old and target older versions of Python where the `%G` and `%V` directives of `strptime` are not available.
|
2017/09/22
|
[
"https://Stackoverflow.com/questions/46368459",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6374328/"
] |
You can escape the Twig tags (as described [here](https://twig.symfony.com/doc/2.x/templates.html#escaping)) using `{{ '{{' }}`, `{{ '}}' }}`, `{{ '{%' }}` and `{{ '%}' }}`.
```
$input = '<h1>{{ pageTitle }}</h1>
<div class="row">
{% for product in products %}
<span class="mep"></span>
{% endfor %}
</div>';
$search = "/({{|}}|{%|%})/";
$replace = "{{ '$1' }}";
echo preg_replace($search, $replace, $input);
```
|
My regex solution (better solution still welcome):
```
$input = '<h1>{{ pageTitle }}</h1>
<div class="row">
{% for product in products %}
<span class="mep"></span>
{% endfor %}
</div>';
$search = '/({{.+}})|({%.+%})/si';
$replace = '';
echo preg_replace($search, $replace, $input);
```
| 5,438
|
31,768,128
|
I don't know what the deal is, but I am stuck after following some Stack Overflow solutions that get nowhere. Can you please help me with this?
```
Monas-MacBook-Pro:CS764 mona$ sudo python get-pip.py
The directory '/Users/mona/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/Users/mona/Library/Caches/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
/tmp/tmpbSjX8k/pip.zip/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
Collecting pip
Downloading pip-7.1.0-py2.py3-none-any.whl (1.1MB)
100% |████████████████████████████████| 1.1MB 181kB/s
Installing collected packages: pip
Found existing installation: pip 1.4.1
Uninstalling pip-1.4.1:
Successfully uninstalled pip-1.4.1
Successfully installed pip-7.1.0
Monas-MacBook-Pro:CS764 mona$ pip --version
-bash: /usr/local/bin/pip: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory
```
|
2015/08/02
|
[
"https://Stackoverflow.com/questions/31768128",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2414957/"
] |
I'm guessing you have two python installs, or two pip installs, one of which has been partially removed.
Why do you use `sudo`? Ideally you should be able to install and run everything from your user account instead of using root. If you mix root and your local account together you are more likely to run into permissions issues (e.g. see the warning it gives about "parent directory is not owned by the current user").
What do you get if you run this?
```
$ head -n1 /usr/local/bin/pip
```
This will show you which python binary `pip` is trying to use. If it's pointing `/usr/local/opt/python/bin/python2.7`, then try running this:
```
$ ls -al /usr/local/opt/python/bin/python2.7
```
If this says "No such file or directory", then pip is trying to use a python binary that has been removed.
Next, try this:
```
$ which python
$ which python2.7
```
To see the path of the python binary that's actually working.
Since it looks like pip was successfully installed somewhere, it could be that `/usr/local/bin/pip` is part of an older installation of pip that's higher up on the `PATH`. To test that, you may try moving the non-functioning `pip` binary out of the way like this (might require `sudo`):
```
$ mv /usr/local/bin/pip /usr/local/bin/pip.old
```
Then try running your `pip --version` command again. Hopefully it picks up the correct version and runs successfully.
|
For me, on CentOS 7,
I had to remove the old pip link from /bin with
```sh
rm /bin/pip2.7
rm /bin/pip
```
then relink it with
```sh
sudo ln -s /usr/local/bin/pip2.7 /bin/pip2.7
```
Then, if running
```sh
/usr/local/bin/pip2.7
```
works directly, the relinked `pip2.7` should work as well.
| 5,439
|
10,529,461
|
I just noticed a problem with the process `terminate` method (from the `multiprocessing` library) on Linux. I have an application working with the `multiprocessing` library but... when I call the `terminate` function on Windows everything works great; on the other hand, Linux fails with this solution. As a replacement for process killing I was forced to use
```
os.system('kill -9 {}'.format(pid))
```
I know this isn't too smart, but it works. So I'm just wondering why this code works on Windows but fails on Linux.
Example:
```python
from multiprocessing import Process
import os
process=Process(target=foo,args=('bar',))
pid=process.pid
process.terminate() # works on Windows only
...
os.system('kill -9 {}'.format(pid)) # my replacement on Linux
```
My configuration: python 3.2.0 build 88445; Linux-2.6.32-Debian-6.0.4
This is a sample from my code. I hope it will be sufficient.
```python
def start_test(timestamp,current_test_suite,user_ip):
global_test_table[timestamp] = current_test_suite
setattr(global_test_table[timestamp], "user_ip", user_ip)
test_cases = global_test_table[timestamp].test_cases_table
test_cases = test_cases*int(global_test_table[timestamp].count + 1)
global_test_table[timestamp].test_cases_table = test_cases
print(test_cases)
print(global_test_table[timestamp].test_cases_table)
case_num = len(test_cases)
Report.basecounter = Report.casecounter = case_num
setattr(global_test_table[timestamp], "case_num", case_num)
setattr(global_test_table[timestamp], "user_current_test", 0)
try:
dbobj=MySQLdb.connect(*dbconnector)
dbcursor=dbobj.cursor()
dbcursor.execute(sqlquery_insert_progress.format(progress_timestamp = str(timestamp), user_current_test = global_test_table[timestamp].user_current_tes$
except :...
for i in range(case_num):
user_row = global_test_table[timestamp]
current_test_from_tests_table = user_row.test_cases_table[i]
unittest.TextTestRunner(verbosity=2).run(suite(CommonGUI.get_address(CommonGUI,current_test_from_tests_table[1], current_test_from_tests_table[2], user$
global_test_table[timestamp].user_current_test = i + 1
try:
dbobj=MySQLdb.connect(*dbconnector)
dbcursor=dbobj.cursor()
dbcursor.execute(sqlquery_update_progress.format(progress_timestamp = str(timestamp), user_current_test = global_test_table[timestamp].user_current$
except :...
@cherrypy.expose()
def start_test_page(self, **test_suite):
timestamp = str(time.time())
user_ip = cherrypy.request.remote.ip
if on_server():
sys.stdout=sys.stderr=open("/var/log/cherrypy/test_gui/{file}.log".format(file=timestamp),"a")
current_test_suite = self.parse_result(**test_suite)
#global_test_table[timestamp] = current_test_suite
#setattr(global_test_table[timestamp], "user_ip", user_ip)
user_test_process = Process(target=start_test, args=(timestamp,current_test_suite,user_ip))
users_process_table[timestamp] = user_test_process
user_test_process.start()
return '''{"testsuite_id" : "''' + str(timestamp) + '''"}'''
@cherrypy.expose()
def stop_test(self, timestamp):
if timestamp in users_process_table:
if on_server():
user_process_pid = users_process_table[timestamp].pid
os.system("kill -9 " + str(user_process_pid))
else:
users_process_table[timestamp].terminate()
del users_process_table[timestamp]
else:
return "No process exists"
```
|
2012/05/10
|
[
"https://Stackoverflow.com/questions/10529461",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/611982/"
] |
From the [docs](http://docs.python.org/py3k/library/multiprocessing.html#multiprocessing.Process.terminate):
>
> terminate()
>
>
> Terminate the process. On Unix this is done using the
> SIGTERM signal; on Windows TerminateProcess() is used. Note that exit
> handlers and finally clauses, etc., will not be executed.
>
>
> Note that descendant processes of the process will not be terminated –
> they will simply become orphaned.
>
>
>
So it looks like you have to make sure that your process handles the SIGTERM signal correctly.
Use [`signal.signal`](http://docs.python.org/py3k/library/signal.html#signal.signal) to set a signal handler.
To set a SIGTERM signal handler that simply exits the process, use:
```
import signal
import sys
signal.signal(signal.SIGTERM, lambda signum, stack_frame: sys.exit(1))
```
**EDIT**
A Python process normally terminates on SIGTERM, I don't know why your multiprocessing process doesn't terminate on SIGTERM.
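For reference, here is a minimal self-contained sketch (the function and argument names are illustrative) in which `terminate()` does work on Linux; comparing it against your setup may help isolate what your child process does differently:
```
from multiprocessing import Process
import time

def foo(name):
    while True:          # a child that never exits on its own
        time.sleep(0.5)

if __name__ == '__main__':
    p = Process(target=foo, args=('bar',))
    p.start()            # terminate() is only meaningful after start()
    time.sleep(1)
    p.terminate()        # sends SIGTERM on Unix
    p.join()
    print(p.exitcode)    # -15, i.e. killed by SIGTERM
```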
|
Not exactly a direct answer to your question, but since you are dealing with multiple processes and threads, this could be helpful as well for debugging them:
<https://stackoverflow.com/a/10165776/1019572>
I recently found a bug in cherrypy using this code.
| 5,449
|
3,182,009
|
I'm trying to upload an image (just a random picture for now) to my MediaWiki site, but I keep getting this error:
>
> "Unrecognized value for parameter 'action': upload"
>
>
>
Here's what I did (site url and password changed):
```
Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import wikitools
>>> import poster
>>> wiki = wikitools.wiki.Wiki("mywikiurl/api.php")
>>> wiki.login(username="admin", password="mypassword")
True
>>> screenshotPage = wikitools.wikifile.File(wiki=wiki, title="screenshot")
>>> screenshotPage.upload(fileobj=open("/Users/jeff/Pictures/02273_magensbay_1280x800.jpg", "r"), ignorewarnings=True)
Traceback (most recent call last):
File "", line 1, in
File "/Library/Python/2.6/site-packages/wikitools/wikifile.py", line 228, in upload
res = req.query()
File "/Library/Python/2.6/site-packages/wikitools/api.py", line 143, in query
raise APIError(data['error']['code'], data['error']['info'])
wikitools.api.APIError: (u'unknown_action', u"Unrecognized value for parameter 'action': upload")
>>>
```
From what I could find on Google, the current MediaWiki doesn't support uploading files through the API. But that's ridiculous... there must be some way, right?
I'm not married to the wikitools package; any way of doing it is appreciated.
EDIT: I set $wgEnableUploads = true in my LocalSettings.php, and I can upload files manually, just not through python.
EDIT: I think wikitools gets an edit token automatically. I've attached the upload method. Before it does the API request it calls self.getToken('edit'), which should take care of it I think? I'll play around with it a little though to see if manually adding that in fixes things.
```
def upload(self, fileobj=None, comment='', url=None, ignorewarnings=False, watch=False):
"""Upload a file, requires the "poster" module
fileobj - A file object opened for reading
comment - The log comment, used as the inital page content if the file
doesn't already exist on the wiki
url - A URL to upload the file from, if allowed on the wiki
ignorewarnings - Ignore warnings about duplicate files, etc.
watch - Add the page to your watchlist
"""
if not api.canupload and fileobj:
raise UploadError("The poster module is required for file uploading")
if not fileobj and not url:
raise UploadError("Must give either a file object or a URL")
if fileobj and url:
raise UploadError("Cannot give a file and a URL")
params = {'action':'upload',
'comment':comment,
'filename':self.unprefixedtitle,
'token':self.getToken('edit') # There's no specific "upload" token
}
if url:
params['url'] = url
else:
params['file'] = fileobj
if ignorewarnings:
params['ignorewarnings'] = ''
if watch:
params['watch'] = ''
req = api.APIRequest(self.site, params, write=True, multipart=bool(fileobj))
res = req.query()
if 'upload' in res and res['upload']['result'] == 'Success':
self.wikitext = ''
self.links = []
self.templates = []
self.exists = True
return res
```
Also, this is my first question, so somebody let me know if you can't post other people's code or something. Thanks!
|
2010/07/05
|
[
"https://Stackoverflow.com/questions/3182009",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/383971/"
] |
You need at least MediaWiki 1.16 (which is currently in beta) to be able to upload files via the API. Or you can try [mwclient](http://mwclient.sf.net/), which automatically falls back to uploading via Special:Upload if an older version of MediaWiki is used (with reduced functionality, such as no error handling etc.).
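For illustration, a minimal sketch of what an mwclient-based upload might look like (the host, path, credentials, and file path below are placeholders; check the mwclient docs for the exact API of your version):
```
import mwclient

site = mwclient.Site('mywiki.example.org', path='/')  # placeholder host/path
site.login('admin', 'mypassword')                     # placeholder credentials
with open('/Users/jeff/Pictures/shot.jpg', 'rb') as f:
    site.upload(f, filename='Screenshot.jpg',
                description='Uploaded via mwclient', ignore=True)
```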
|
Maybe you have to "obtain a token" first?
>
> To upload files, a token is required. This token is identical to the edit token and is the same regardless of target filename, but changes at every login. Unlike other tokens, it cannot be obtained directly, so one must obtain and use an edit token instead.
>
>
>
See here for details: <http://www.mediawiki.org/wiki/API:Edit_-_Uploading_files>
| 5,450
|
29,922,373
|
I'm doing a fair amount of parallel processing in Python using the multiprocessing module. I know certain objects CAN be pickled (and thus passed as arguments in multiprocessing) and others can't. E.g.
```
import pickle

class abc():
    pass

a = abc()
pickle.dumps(a)
# 'ccopy_reg\n_reconstructor\np1\n(c__main__\nabc\np2\nc__builtin__\nobject\np3\nNtRp4\n.'
```
But I have some larger classes in my code (a dozen methods, or so), and this happens:
```
a=myBigClass()
pickle.dumps(a)
Traceback (innermost last):
File "<stdin>", line 1, in <module>
File "/usr/apps/Python279/python-2.7.9-rhel5-x86_64/lib/python2.7/copy_reg.py", line 70, in _reduce_ex
raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle file objects
```
It's not a file object, but at other times, I'll get other messages that say basically: "I can't pickle this".
So what's the rule? Number of bytes? Depth of hierarchy? Phase of the moon?
|
2015/04/28
|
[
"https://Stackoverflow.com/questions/29922373",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1415450/"
] |
From the [docs](https://docs.python.org/2/library/pickle.html#what-can-be-pickled-and-unpickled):
>
> The following types can be pickled:
>
>
> * `None`, `True`, and `False`
> * integers, long integers, floating point numbers, complex numbers
> * normal and Unicode strings
> * tuples, lists, sets, and dictionaries containing only picklable objects
> * functions defined at the top level of a module
> * built-in functions defined at the top level of a module
> * classes that are defined at the top level of a module
> * instances of such classes whose `__dict__` or the result of calling `__getstate__()` is picklable (see section The pickle protocol for details).
>
>
> Attempts to pickle unpicklable objects will raise the `PicklingError`
> exception; when this happens, an unspecified number of bytes may have
> already been written to the underlying file. Trying to pickle a highly
> recursive data structure may exceed the maximum recursion depth, a
> `RuntimeError` will be raised in this case. You can carefully raise this
> limit with `sys.setrecursionlimit()`.
>
>
>
|
The general rule of thumb is that "logical" objects can be pickled, but "resource" objects (files, locks) can't, because it makes no sense to persist/clone them.
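For illustration, a minimal sketch of the standard workaround when a logical object holds a resource: drop the resource in `__getstate__` and re-acquire it in `__setstate__` (the `Logger` class and log path here are made up):
```
import pickle

class Logger(object):
    def __init__(self, path):
        self.path = path
        self.fh = open(path, 'a')        # a "resource": not picklable

    def __getstate__(self):
        state = self.__dict__.copy()
        del state['fh']                  # drop the resource before pickling
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.fh = open(self.path, 'a')   # re-acquire it when unpickling

log = Logger('/tmp/demo.log')
restored = pickle.loads(pickle.dumps(log))  # works; pickling the raw file would fail
```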
| 5,455
|
65,346,545
|
I have two dense matrices with the sizes (2500, 208) and (208, 2500). I want to calculate their product. It works fine and fast as a single process, but in a multiprocessing block the processes get stuck there for hours. I do sparse matrix multiplications with even larger sizes, but I have no problem there. My code looks like this:
```
with Pool(processes=agents) as pool:
result = pool.starmap(run_func, args)
def run_func(args):
    # Do stuff, including large sparse matrix multiplications.
    C = np.matmul(A, B)  # or A.dot(B), or even using the BLAS library directly: dgemm(1, A, B)
    # Never gets past the line above!
```
Note that when the function `run_func` is executed in a single process, it works fine. When I do multiprocessing on my local machine, it also works fine. When I go for multiprocessing on the HPC, it gets stuck. I allocate my resources like this:
`srun -v --nodes=1 --time 7-0:0 --cpus-per-task=2 --nodes=1 --mem-per-cpu=20G python3 -u run.py 2`
Where the last parameter is the number of `agents` in the code above. Here is the LAPACK library details supported on the HPC (obtained from numpy):
```
libraries = ['mkl_rt', 'pthread']
library_dirs = ['**/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['**/include']
blas_opt_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['**lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['**/include']
lapack_mkl_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['**/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['**/include']
lapack_opt_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['**/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['**/include']
```
Compared to my local machine, all Python packages and the Python version on the HPC are the same. Any leads on what is going on?
|
2020/12/17
|
[
"https://Stackoverflow.com/questions/65346545",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13575728/"
] |
All you need to do is pass the 'add transaction' function from 'Page 2' to 'Page 3'. You have to make sure that 'add transaction' accepts a 'Trans' as a parameter and that it calls setState for Page 2. On 'Page 3' you then pass your 'Trans(true, -50)' as the parameter to the 'add transaction' function received from 'Page 2'. The 'add transaction' function may be run in the onPressed method of the RaisedButton on 'Page 3'. Please see the code below:
```
import 'package:flutter/material.dart';
import 'dart:math' as math;
final Color darkBlue = const Color.fromARGB(255, 18, 32, 47);
void main() {
runApp(MyApp());
}
extension Ex on double {
double toPrecision(int n) => double.parse(toStringAsFixed(n));
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
theme: ThemeData.dark().copyWith(scaffoldBackgroundColor: darkBlue),
debugShowCheckedModeBanner: false,
home: Scaffold(
appBar: AppBar(
title: const Text("Flutter Demo App"),
),
body: Center(
child: MyWidget(),
),
),
);
}
}
class MyWidget extends StatefulWidget {
_MyWidgetState createState() => _MyWidgetState();
}
class _MyWidgetState extends State<MyWidget> {
void _addTransaction(Trans transaction) {
setState(() {
transactions.add(transaction);
});
}
final List<Trans> transactions = [
const Trans(myBool: false, myDouble: 20),
const Trans(myBool: true, myDouble: -50),
const Trans(myBool: false, myDouble: 110),
const Trans(myBool: false, myDouble: 35.5),
];
@override
Widget build(BuildContext context) {
return Column(
children: [
Container(
height: MediaQuery.of(context).size.height * .7,
child: Scrollbar(
showTrackOnHover: true,
child: ListView.builder(
itemCount: transactions.length,
itemBuilder: (context, index) {
return ListTile(
title: transactions[index],
);
},
),
),
),
RaisedButton(
onPressed: () {
final rnd = math.Random();
_addTransaction(
Trans(
myBool: rnd.nextBool(),
myDouble: rnd.nextDouble().toPrecision(2) + rnd.nextInt(100),
),
);
},
child: const Text("Add Transaction"),
),
RaisedButton(
onPressed: () {
Navigator.push(
context,
MaterialPageRoute(
builder: (context) => Page3(addTran: _addTransaction),
),
);
},
child: const Text("Page 3"),
),
],
);
}
}
class Trans extends StatelessWidget {
final myBool;
final myDouble;
const Trans({Key key, this.myBool, this.myDouble}) : super(key: key);
@override
Widget build(BuildContext context) {
return Row(
children: [
Text("Transaction: ${myBool.toString()} ${myDouble.toString()}")
],
);
}
}
class Page3 extends StatefulWidget {
final Function addTran;
const Page3({Key key, this.addTran}) : super(key: key);
@override
_Page3State createState() => _Page3State();
}
class _Page3State extends State<Page3> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text("Page 3"),
),
body: Center(
child: RaisedButton(
onPressed: () => widget.addTran(
const Trans(myBool: true, myDouble: 50),
),
child: const Text("Add Transaction"),
),
),
);
}
}
```
Note: even though in my example code above the widget 'Page 3' is in the same file, you may make 'Page 3' a separate library as usual by importing material.dart.
|
Usually there are two methods of widget interaction: callbacks (one widget provides a callback and the other one calls it) or streams (one widget provides a stream controller and the other one uses that controller to send events into the stream). Callbacks and events are processed by the initiating widget.
1. Create a `ValueSetter` parameter in the `Page3` widget and create it like:
```dart
// in Page2
final page3 = Page3(callback: (trans) {
  transactions.add(trans);
});

// in Page3
onPressed: () {
  widget.callback(Trans(...)); // if Page3 is stateful
}
```
2. Create a `StreamController<Trans>` state member in `Page2` and pass it as a parameter to `Page3`.
```dart
// in Page2
@override
void initState() {
super.initState();
controller.stream.listen((trans) {
transactions.add(trans);
});
}
// in Page3
onPressed: () {
widget.controller.add(Trans(...));
}
```
| 5,460
|
45,597,031
|
I've looked at several other questions and none of them seem to help with my situation. I think I'm just not very intelligent, sadly.
Basic question I know. I decided to learn python and I'm making a basic app with tkinter to learn.
Basically it's an app that stores and displays people's driving licence details (name and expiry date). One of the abilities I want it to have is a name lookup. To begin with, I need to figure out how to put a textbox into my window!
I will post the relevant (well, what I think is relevant!) code below:
```
class search(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
label = tk.Label(self, text="Enter a name to display that individual's details", font=LARGE_FONT)
label.pack(pady=10,padx=10)
label1 = tk.Label(console, text="Name:").pack()
searchbox = tk.Entry(console)
searchbox.pack()
button1 = tk.Button(self, text="SEARCH", command=lambda: controller.show_frame(main))#not created yet
button1.pack()
button2 = tk.Button(self, text="HOME", command=lambda: controller.show_frame(main))
button2.pack()
```
and of course at the top I have
```
import tkinter as tk
```
When I try to run this I get "type object 'search' has no attribute 'tk'". It worked fine (the search window would open when I clicked the relevant button on the home window) until I tried to add the Entry box.
What am I doing wrong here? I'm an utter newbie, so I'm prepared to face my stupidity.
Also, apologies if the formatting of this question is awful; I'm a newbie posting here as well. Putting everything into the correct "code" format is a real pain.
|
2017/08/09
|
[
"https://Stackoverflow.com/questions/45597031",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4140751/"
] |
I'm guessing you're running into problems since you didn't specify a layout manager and passed `console` instead of `self`:
```
import tkinter as tk
class Search(tk.Frame):
def __init__(self, parent=None, controller=None):
tk.Frame.__init__(self, parent)
self.pack() # specify layout manager
label1 = tk.Label(self, text="Enter a name to display that individual's details")
label1.pack(pady=10, padx=10)
label2 = tk.Label(self, text="Name:")
label2.pack()
searchbox = tk.Entry(self)
searchbox.pack()
button1 = tk.Button(self, text="SEARCH", command=lambda: controller.show_frame(main))
button1.pack()
button2 = tk.Button(self, text="HOME", command=lambda: controller.show_frame(main))
button2.pack()
# Just cobble up the rest for example purposes:
main = None
class Controller:
def show_frame(self, frame=None):
pass
app = Search(controller=Controller())
app.mainloop()
```
|
First of all, using `from tkinter import *` is a convenient way of importing Tkinter's names without having to import specific things when needed. To answer your question though, here is the code for creating a text box.
`t1 = Text(self)`
To insert text into the text box: `t1.insert()`
An example of this would be `t1.insert(END, 'This is text')`
If you haven't got it already, t1 is the variable I'm assigning to the text box, although you can choose whatever variable you want. I highly recommend effbot's tutorial on Tkinter; I found it extremely useful. Here is the link: <http://effbot.org/tkinterbook/tkinter-application-windows.htm>
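Putting those pieces together, a minimal runnable sketch might look like this (the widget size and inserted text are arbitrary):
```
import tkinter as tk
from tkinter import END

root = tk.Tk()
t1 = tk.Text(root, height=3, width=40)  # the text box widget
t1.pack()
t1.insert(END, 'This is text')          # insert at the end of the contents
root.mainloop()
```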
Best of luck!
| 5,461
|
17,964,475
|
Hey, I'm trying to install some packages from a `requires` file in a new virtual environment (2.7.4), but I keep running into the following error:
```
CertificateError: hostname 'pypi.python.org' doesn't match either of '*.addvocate.com', 'addvocate.com'
```
I cannot seem to find anything helpful on the error when I search. What is going wrong here? Who in the world is `addvocate.com` and what are they doing here?
|
2013/07/31
|
[
"https://Stackoverflow.com/questions/17964475",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2637052/"
] |
The issue is being documented on the python status site at <http://status.python.org/incidents/jj8d7xn41hr5>
|
When I try to connect to pypi I get the following error:
```
pypi.python.org uses an invalid security certificate.
The certificate is only valid for the following names:
*.addvocate.com , addvocate.com
```
So either pypi is using the wrong ssl certificate or somehow my connection is being routed to the wrong server.
In the meantime I have resorted to downloading directly from source URLs. See <http://www.pip-installer.org/en/latest/usage.html#pip-install>
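For example, a direct-URL install looks roughly like this (the URL is only a placeholder for the sdist you actually need):
```
pip install https://example.com/packages/somepackage-1.2.3.tar.gz
```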
| 5,462
|
48,174,011
|
I'm running the apache2 web server on a Raspberry Pi 3 Model B. I'm setting up a smart home running with Pis and Unos. I have a PHP script, index.php, that executes a Python program. It has rwxrwxrwx permissions (I'll change that later because I don't fully need it).
I want to display the print output from the Python script in real time.
`exec('sudo python3 piUno.py')` Let's say that the output is "Hello w".
How can I import/get the printed data from the .py script?
|
2018/01/09
|
[
"https://Stackoverflow.com/questions/48174011",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8882678/"
] |
`shell_exec` returns the full output of your script, so use
```
$cmd = escapeshellcmd('sudo python3 piUno.py');
$output = shell_exec($cmd);
echo $output;
```
should work! let me know if it doesn't
Edit: oh hey! Your question got me looking at the docs to double-check, and `exec` actually returns the last line of output, if you need only that.
```
$output = exec('sudo python3 piUno.py');
echo $output;
```
Or, you can pass a second parameter to exec() to store all output lines in an array (one entry per line), like this:
```
$output = array();
exec('sudo python3 piUno.py',$output);
var_dump($output);
```
aight! this was fun!
|
First make sure the web user has **read/write/execute permissions**.
You can use `sudo chmod 777 /path/to/your/directory/file.xyz`
for the PHP file and the file you want to run. Then: `$output = exec('sudo python3 piUno.py'); echo $output;`
**Credits ---> Ralph Thomas Hopper**
| 5,467
|
14,346,177
|
I'm trying to use factory_boy to help generate some MongoEngine documents for my tests. I'm having trouble defining `EmbeddedDocumentField` objects.
Here's my MongoEngine `Document`:
```py
class Comment(EmbeddedDocument):
content = StringField()
name = StringField(max_length=120)
class Post(Document):
title = StringField(required=True)
tags = ListField(StringField(), required=True)
comments = ListField(EmbeddedDocumentField(Comment))
```
Here's my partially completed factory_boy `Factory`:
```py
class CommentFactory(factory.Factory):
FACTORY_FOR = Comment
content = "Platinum coins worth a trillion dollars are great"
name = "John Doe"
class BlogFactory(factory.Factory):
FACTORY_FOR = Blog
title = "On Using MongoEngine with factory_boy"
tags = ['python', 'mongoengine', 'factory-boy', 'django']
comments = [factory.SubFactory(CommentFactory)] # this doesn't work
```
Any ideas how to specify the `comments` field? The problem is that factory-boy attempts to create the `Comment` EmbeddedDocument.
|
2013/01/15
|
[
"https://Stackoverflow.com/questions/14346177",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/387163/"
] |
I'm not sure if this is what you want but I just started looking at this problem and this seems to work:
```
from mongoengine import EmbeddedDocument, Document, StringField, ListField, EmbeddedDocumentField
import factory
class Comment(EmbeddedDocument):
content = StringField()
name = StringField(max_length=120)
class Post(Document):
title = StringField(required=True)
tags = ListField(StringField(), required=True)
comments = ListField(EmbeddedDocumentField(Comment))
class CommentFactory(factory.Factory):
FACTORY_FOR = Comment
content = "Platinum coins worth a trillion dollars are great"
name = "John Doe"
class PostFactory(factory.Factory):
FACTORY_FOR = Post
title = "On Using MongoEngine with factory_boy"
tags = ['python', 'mongoengine', 'factory-boy', 'django']
comments = factory.LazyAttribute(lambda a: [CommentFactory()])
>>> b = PostFactory()
>>> b.comments[0].content
'Platinum coins worth a trillion dollars are great'
```
I wouldn't be surprised if I'm missing something though.
|
The way I'm doing it right now is to prevent the factories based on EmbeddedDocuments from building. So I've set up an EmbeddedDocumentFactory, like so:
```
class EmbeddedDocumentFactory(factory.Factory):
ABSTRACT_FACTORY = True
@classmethod
def _prepare(cls, create, **kwargs):
return super(EmbeddedDocumentFactory, cls)._prepare(False, **kwargs)
```
Then I inherit from that to create factories for EmbeddedDocuments:
```
class CommentFactory(EmbeddedDocumentFactory):
FACTORY_FOR = Comment
content = "Platinum coins worth a trillion dollars are great"
name = "John Doe"
```
This may not be the best solution, so I'll wait on someone else to respond before accepting this as the answer.
| 5,468
|
66,517,764
|
### What is the pythonic way to remove everything up to and including the dot from each string in a set?
```
theSet={'products.add_product','products.add_category','books.view_books','cats.change_cats'}
#desired output
newSet = {'add_product','add_category','view_books', 'change_cats'}
```
|
2021/03/07
|
[
"https://Stackoverflow.com/questions/66517764",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8047262/"
] |
You can use the `yearmon` function from `zoo`. Then search for the string "Jun" in the Date column with R's `grepl` function and apply the desired condition with `case_when` from the `dplyr` package.
```
library(zoo)
library(dplyr)
# your data
Date <- c("2000-01", "2000-02", "2000-03", "2000-04", "2000-05", "2000-06",
"2000-07", "2000-08", "2000-09", "2000-10", "2000-11", "2000-12", "2001-01",
"2001-02", "2001-03", "2001-04", "2001-05", "2001-06", "2001-07", "2001-08",
"2001-09", "2001-10", "2001-11", "2001-12", "2000-01", "2000-02")
Permno <- c(10026, 10026, 10026, 10026, 10026, 10026, 10026, 10026, 10026,
10026, 10026, 10026, 10026, 10026, 10026, 10026, 10026, 10026,
10026, 10026, 10026, 10026, 10026, 10026, 10030, 10030)
Value <- c("Big, Growth", "Small, Value", "Neutral, Neutral", "Big, Value", "Big, Value",
"Big, Value", "Big, Value", "Big, Value", "Small, Value", "Small, Neutral",
"Neutral, Neutral", "Big, Growth", "Small, Value", "Neutral, Neutral",
"Big, Value", "Big, Value", "Small, Value", "Small, Neutral",
"Neutral, Neutral", "Big, Growth", "Small, Value", "Neutral, Neutral",
"Big, Value", "Small, Neutral", "Neutral, Neutral", "Small, Neutral")
df <- data.frame(Date, Permno, Value)
# code for your desired output
df1 <- df %>%
mutate(Date = as.yearmon(Date),
Hold = case_when(grepl("Jun", Date) ~ Value))
# Output:
> df1
Date Permno Value Hold
1 Jan 2000 10026 Big, Growth <NA>
2 Feb 2000 10026 Small, Value <NA>
3 Mar 2000 10026 Neutral, Neutral <NA>
4 Apr 2000 10026 Big, Value <NA>
5 May 2000 10026 Big, Value <NA>
6 Jun 2000 10026 Big, Value Big, Value
7 Jul 2000 10026 Big, Value <NA>
8 Aug 2000 10026 Big, Value <NA>
9 Sep 2000 10026 Small, Value <NA>
10 Oct 2000 10026 Small, Neutral <NA>
11 Nov 2000 10026 Neutral, Neutral <NA>
12 Dec 2000 10026 Big, Growth <NA>
13 Jan 2001 10026 Small, Value <NA>
14 Feb 2001 10026 Neutral, Neutral <NA>
15 Mar 2001 10026 Big, Value <NA>
16 Apr 2001 10026 Big, Value <NA>
17 May 2001 10026 Small, Value <NA>
18 Jun 2001 10026 Small, Neutral Small, Neutral
19 Jul 2001 10026 Neutral, Neutral <NA>
20 Aug 2001 10026 Big, Growth <NA>
21 Sep 2001 10026 Small, Value <NA>
22 Oct 2001 10026 Neutral, Neutral <NA>
23 Nov 2001 10026 Big, Value <NA>
24 Dec 2001 10026 Small, Neutral <NA>
25 Jan 2000 10030 Neutral, Neutral <NA>
26 Feb 2000 10030 Small, Neutral <NA>
```
|
Why not simply this?
```
FF5_class$HOLD <- ifelse(substr(FF5_class$Date, 6,7) =="06", FF5_class$Value, NA)
Date Permno Value HOLD
1 2000-01 10026 Big, Growth <NA>
2 2000-02 10026 Small, Value <NA>
3 2000-03 10026 Neutral, Neutral <NA>
4 2000-04 10026 Big, Value <NA>
5 2000-05 10026 Big, Value <NA>
6 2000-06 10026 Big, Value Big, Value
7 2000-07 10026 Big, Value <NA>
8 2000-08 10026 Big, Value <NA>
9 2000-09 10026 Small, Value <NA>
10 2000-10 10026 Small, Neutral <NA>
11 2000-11 10026 Neutral, Neutral <NA>
12 2000-12 10026 Big, Growth <NA>
13 2001-01 10026 Small, Value <NA>
14 2001-02 10026 Neutral, Neutral <NA>
15 2001-03 10026 Big, Value <NA>
16 2001-04 10026 Big, Value <NA>
17 2001-05 10026 Small, Value <NA>
18 2001-06 10026 Small, Neutral Small, Neutral
19 2001-07 10026 Neutral, Neutral <NA>
20 2001-08 10026 Big, Growth <NA>
21 2001-09 10026 Small, Value <NA>
22 2001-10 10026 Neutral, Neutral <NA>
23 2001-11 10026 Big, Value <NA>
24 2001-12 10026 Small, Neutral <NA>
25 2000-01 10030 Neutral, Neutral <NA>
26 2000-02 10030 Small, Neutral <NA>
```
The `dput(FF5_class)` used:
```
FF5_class <- structure(list(Date = c("2000-01", "2000-02", "2000-03", "2000-04",
"2000-05", "2000-06", "2000-07", "2000-08", "2000-09", "2000-10",
"2000-11", "2000-12", "2001-01", "2001-02", "2001-03", "2001-04",
"2001-05", "2001-06", "2001-07", "2001-08", "2001-09", "2001-10",
"2001-11", "2001-12", "2000-01", "2000-02"), Permno = c(10026,
10026, 10026, 10026, 10026, 10026, 10026, 10026, 10026, 10026,
10026, 10026, 10026, 10026, 10026, 10026, 10026, 10026, 10026,
10026, 10026, 10026, 10026, 10026, 10030, 10030), Value = c("Big, Growth",
"Small, Value", "Neutral, Neutral", "Big, Value", "Big, Value",
"Big, Value", "Big, Value", "Big, Value", "Small, Value", "Small, Neutral",
"Neutral, Neutral", "Big, Growth", "Small, Value", "Neutral, Neutral",
"Big, Value", "Big, Value", "Small, Value", "Small, Neutral",
"Neutral, Neutral", "Big, Growth", "Small, Value", "Neutral, Neutral",
"Big, Value", "Small, Neutral", "Neutral, Neutral", "Small, Neutral"
), HOLD = c(NA, NA, NA, NA, NA, "Big, Value", NA, NA, NA, NA,
NA, NA, NA, NA, NA, NA, NA, "Small, Neutral", NA, NA, NA, NA,
NA, NA, NA, NA)), row.names = c(NA, -26L), class = "data.frame")
```
| 5,469
|
3,434,048
|
I know it can be achieved via the command line, but I need to pass at least 10 variables, and positional command-line handling would mean too much programming since these variables may or may not be passed.
Actually, I have built an application half in VB (for the GUI) and half in Python (for the script). I need to pass variables to Python similarly to its keyword arguments, i.e., x = val1, y = val2. Is there any way to achieve this?
|
2010/08/08
|
[
"https://Stackoverflow.com/questions/3434048",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/388350/"
] |
Since you're working on Windows with VB, it's worth mentioning that [IronPython](http://ironpython.net/) might be one option. Since both VB and IronPython can interact through .NET, you could wrap up your script in an assembly and expose a function which you call with the required arguments.
|
Have you taken a look at the [getopt module](http://docs.python.org/library/getopt.html)? It's designed to make working with command line options easier. See also the examples at [Dive Into Python](http://www.faqs.org/docs/diveintopython/kgp_commandline.html).
If you are working with Python 2.7 (and not lower), then you can also have a look at the [argparse module](http://docs.python.org/library/argparse.html#module-argparse), which should make it even easier.
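For instance, with argparse each variable can be an optional flag, so the caller may pass any subset, much like keyword arguments (the flag names below are placeholders for your ten variables):
```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--x', default=None)  # each variable is optional
parser.add_argument('--y', default=None)  # unset ones simply stay None
args = parser.parse_args()
print(args.x, args.y)
```
Your VB GUI would then invoke it as `python script.py --x val1 --y val2`, omitting any flags it doesn't need.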
| 5,470
|
5,574,649
|
I need a scalable `NoSql` solution to store data as *arrays* for many fields & time stamps, where the key is a combination of a `field` and a `timestamp`.
Data would be stored in the following scheme:
**KEY** --> "FIELD\_NAME.YYYYMMDD.HHMMSS"
**VALUE** --> [v1, v2, v3, v4, v5, v6] (v1..v6 are just `floats`)
For instance, suppose that:
**FIELD\_NAME** = "TOMATO"
**TIME\_STAMP** = "20060316.184356"
**VALUES** = [72.34, -22.83, -0.938, 0.265, -2047.23]
I need to be able to retrieve **VALUE** (the entire array) given the combination of `FIELD_NAME` & `TIME_STAMP`.
The query **VALUES**["*TOMATO.20060316.184356*"] would return the vector [72.34, -22.83, -0.938, 0.265, -2047.23]. Reads of arrays should be as fast as possible.
Yet, I also need a way to store (in-place) a scalar value within an array . Suppose that I want to assign the 1st element of `TOMATO` on timestamp `2006/03/16.18:43:56` to be `500.867`. In such a case, I need to have a fast mechanism to do so -- something like:
**VALUES**["*TOMATO.20060316.184356*"][0] = 500.867 (this would update on disk)
Any idea which `NoSql` solution would work best for this(big plus if it has `python` interface)? I am looking for a fast yet a powerful solution. my data needs would grow to about 20[TB].
|
2011/04/07
|
[
"https://Stackoverflow.com/questions/5574649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/540009/"
] |
Sounds like MongoDB would be a good fit. [PyMongo](http://api.mongodb.org/python/1.10+/index.html) is the api.
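A rough sketch of the described schema (database and collection names are my assumptions; this uses the modern PyMongo API rather than the 1.10 one linked above):

```
from pymongo import MongoClient

client = MongoClient()
values = client.mydb.values

key = "TOMATO.20060316.184356"
values.insert_one({"_id": key, "v": [72.34, -22.83, -0.938, 0.265, -2047.23]})

# fetch the entire array for a FIELD_NAME.TIMESTAMP key
print(values.find_one({"_id": key})["v"])

# in-place update of the first element (MongoDB's dot notation for array indices)
values.update_one({"_id": key}, {"$set": {"v.0": 500.867}})
```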
|
Your data is highly structured and regular; what benefit do you see in NoSQL vs a more traditional database?
I think [MySQL Cluster](http://dev.mysql.com/downloads/cluster/) sounds tailor-made for your problem.
**Edit:**
@user540009: I agree that there are serious slowdowns on single-machine or mirrored instances of MySQL larger than half a terabyte, and no-one wants to have to deal with manual sharding; MySQL Cluster is meant to deal with this, and I have read of (though not personally played with) implementations up to 110 terabytes.
| 5,480
|
12,672,629
|
>
> **Possible Duplicate:**
>
> [Converting string into datetime](https://stackoverflow.com/questions/466345/converting-string-into-datetime)
>
>
>
I am parsing an XML file that gives me the time in the respective isoformat:
```
tc1 = 2012-09-28T16:41:12.9976565
tc2 = 2012-09-28T23:57:44.6636597
```
But it is being treated as a string when I retrieve this from the XML file.
I have two such time values and I need to diff the two so as to find the delta.
But since they are strings I cannot directly do tc2 - tc1. Given they are already in isoformat, how do I get Python to recognize them as datetimes?
thanks.
|
2012/10/01
|
[
"https://Stackoverflow.com/questions/12672629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/966739/"
] |
You can use the [python-dateutil `parse()` function](http://labix.org/python-dateutil#head-c0e81a473b647dfa787dc11e8c69557ec2c3ecd2), it's more flexible than strptime. Hope this help you.
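For the timestamps in the question, a quick sketch (assuming python-dateutil is installed):

```
from dateutil.parser import parse

tc1 = parse('2012-09-28T16:41:12.9976565')
tc2 = parse('2012-09-28T23:57:44.6636597')
print(tc2 - tc1)  # a datetime.timedelta
```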
|
Use the [`datetime` module](http://docs.python.org/library/datetime.html).
```
from datetime import datetime

td = datetime.strptime('2012-09-28T16:41:12.997656', '%Y-%m-%dT%H:%M:%S.%f') - \
     datetime.strptime('2012-09-28T23:57:44.663659', '%Y-%m-%dT%H:%M:%S.%f')
print td
# => datetime.timedelta(-1, 60208, 333997)
```
There is only one small problem: your microseconds are one digit too long for `%f` to handle. So I've removed the last digit from your input strings.
| 5,481
|
42,044,619
|
I'm working on a project of my own, and I'm at a point where I don't know what to do anymore.
I'm trying to implement some sounds into my project where I press some tact switches and they should make sounds. I'm a complete newbie with Python, so I found a piece of code doing something similar:
```
import os
from time import sleep
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
GPIO.setup(23, GPIO.IN)
GPIO.setup(24, GPIO.IN)
GPIO.setup(25, GPIO.IN)
while True:
if (GPIO.input(23) == False):
os.system('mpg123 -q binary-language-moisture-evaporators.mp3 &')
if (GPIO.input(24) == False):
os.system('mpg123 -q power-converters.mp3 &')
if (GPIO.input(25)== False):
os.system('mpg123 -q vader.mp3 &')
sleep(0.1);
```
I want the 1st sound to run in a continuous loop while `input(23)==false`, and if one of the other two buttons is pressed it should stop the first one, play the other only once, and return to checking if `input(23)==false`.
I need this done to finish my project, but I don't really have the need to learn Python from scratch (at least for now). At least some guidelines would be greatly appreciated.
|
2017/02/04
|
[
"https://Stackoverflow.com/questions/42044619",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7516655/"
] |
Based on <https://github.com/Unitech/pm2/blob/master/lib/API/Extra.js#L436>, I managed to get this working
Put it as the last item in your ecosystem file, and it will always have the highest id
Make sure that the script path is correct; it was the default on MY system, but it might not be on yours
I'm running 2.9.3, and the current code (3.5.0) is similar, so it SHOULD work
```
{
script : '/usr/local/lib/node_modules/pm2/lib/HttpInterface.js',
name : 'pm2-http-interface',
execMode : 'fork_mode',
env : {
PM2_WEB_PORT : 9615
}
}
```
|
Not sure, but you can try to specify `interpreter`. It should point to your PM2 (check it with `whereis`).
Try something like:
```
{
  "apps": [{
    "name": "web",
    "script": "",
    "interpreter": "/usr/local/bin/pm2",
    "args": "web"
  }]
}
```
Please note: I did not check it at all; it is just a suggestion
| 5,484
|
63,201,965
|
Hi I have made my flask app and I have exposed port 5001 in Docker file.
I pushed it to dockerhub repo and ran on different machine by
```
docker container run --name XYZ <username>/<repo_name>:<tag>
```
The log says that the app is running on <http://127.0.0.1:5001/>
But if I open that location in the browser it says
```
Unable to connect
```
Dockerfile:
```
FROM ubuntu:18.04
RUN apt-get update && apt-get -y upgrade \
&& apt-get -y install python3.8 \
&& apt -y install python3-pip \
&& pip3 install --upgrade pip
WORKDIR /app
COPY . /app
RUN pip3 --no-cache-dir install -r requirements.txt
EXPOSE 5001
ENTRYPOINT ["python3"]
CMD ["app.py"]
```
|
2020/08/01
|
[
"https://Stackoverflow.com/questions/63201965",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5687866/"
] |
This sounds like `insert . . . on duplicate key update`. First, though, you need a unique index or constraint:
```
create unique index unq_stocks_ticker on stocks(ticker);
```
Then you can use:
```
insert into stocks (ticker, marketcap)
values (?, ?)
on duplicate key update marketcap = values(marketcap);
```
|
An UPDATE query is incapable of creating a new row, so perhaps something like:
```
UPDATE stocks SET marketcap = 300000000000 WHERE symbol = '$MMM'
```
Your footnote "unless it doesn't exist" means you probably then need to examine how many rows this altered and if it's 0 then run:
```
INSERT INTO stocks(marketcap, symbol) VALUES(300000000000, '$MMM')
```
It doesn't really matter which way round you do these; if you have a key on symbol you won't get duplicates, you'll get a failed insert, which you could then use to trigger an update. Ideally, though, you'd look at the likelihood of failure of each and put the least often failing option first. If you will update 10000 symbols 100 times a day each but only insert maybe 100 new symbols a day, put the update first. If you will create 10000 new symbols a day and update them once a year, put the insert first. This ensures the least time is wasted on operations that have no effect and use resources only to raise an error
| 5,485
|
52,458,754
|
I want to compare two dictionary keys in python and if the keys are equal, then print their values.
For example,
```
dict_one={'12':'fariborz','13':'peter','14':'jadi'}
dict_two={'15':'ronaldo','16':'messi','12':'daei','14':'jafar'}
```
and after comparing the keys, print
```
'fariborz', 'daei'
'jadi', 'jafar'
```
|
2018/09/22
|
[
"https://Stackoverflow.com/questions/52458754",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9287224/"
] |
You're asking for the intersection of the two dictionaries.
Using the builtin type `set`
----------------------------
You can use the builtin type `set` for this, which implements the `intersection()` function.
You can turn a list into a set like this:
```
set(my_list)
```
So, in order to find the intersection between the keys of two dictionaries, you can turn the keys into sets and find the intersection.
To get a list with the keys of a dictionary:
```
dict_one.keys()
```
So, to find the intersection between the keys of two dicts:
```
set(dict_one.keys()).intersection(set(dict_two.keys()))
```
This will return, in your example, the set `{'12', '14'}`.
The code above in a more readable way:
```
keys_one = set(dict_one.keys())
keys_two = set(dict_two.keys())
same_keys = keys_one.intersection(keys_two)
# To get a list of the keys:
result = list(same_keys)
```
Using anonymous function (lambda function) and list comprehension
-----------------------------------------------------------------
Another easy way to solve this problem would be using lambda functions.
I'm including this here just in case you'd like to know. Probably not the most efficient way to do it!
```
same_keys = lambda first,second: [key1 for key1 in first.keys() if key1 in second.keys()]
```
So, as to get the result:
`result = same_keys(dict_one,dict_two)`
Any of the above two methods will give you the keys that are common to both dictionaries.
Just loop over it and do as you please to print the values:
```
for key in result:
print('{},{}'.format(dict_one[key], dict_two[key]))
```
|
```
for key, val1 in dict_one.items():
val2 = dict_two.get(key)
if val2 is not None:
print(val1, val2)
```
| 5,486
|
68,425,073
|
I have a list like this:
```
list1 = ['hello', 'halo', 'goodbye', 'bye bye', 'how are you?']
```
I want for example to replace ‘hello’ and ‘halo’ with ‘welcome’, and ‘goodbye’ and ‘bye bye’ with ‘greetings’ so the list will be like this:
```
list1 or newlist = ['welcome', 'welcome', 'greetings', 'greetings', 'how are you?']
```
how can I do that the shortest way?
I tried [this](https://stackoverflow.com/questions/31035258/how-to-replace-multiple-words-with-one-word-in-python), but it didn't work unless I converted my list to a string first, and I want it to stay a list. Anyway, I converted it to a string, changed the words, and then tried to convert it back to a list, but it wasn't converted back properly.
|
2021/07/17
|
[
"https://Stackoverflow.com/questions/68425073",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16431450/"
] |
if the substitutions can easily be grouped then this works:
```py
list1 = ['hello', 'halo', 'goodbye', 'bye bye', 'how are you?']
new_list = []
group1 = ('hello', 'halo')
group2 = ('goodbye', 'bye bye')
for word in list1:
if word in group1:
new_list.append('welcome')
elif word in group2:
new_list.append('greetings')
else:
new_list.append(word)
print(new_list)
# ['welcome', 'welcome', 'greetings', 'greetings', 'how are you?']
```
|
I see this question is marked with the re tag, so I will answer using regular expressions. You can replace text using re.sub.
```
>>> import re
>>> list1 = ",".join(['hello', 'halo', 'goodbye', 'bye bye', 'how are you?'])
>>> list1 = re.sub(r"hello|halo", r"welcome", list1)
>>> list1 = re.sub(r"goodbye|bye bye", r"greetings", list1)
>>> print(list1.split(","))
['welcome', 'welcome', 'greetings', 'greetings', 'how are you?']
>>>
```
Alternatively you can use a list comprehension
```
["welcome" if i in ["hello","halo"] else "greetings" if i in ["goodbye","bye bye"] else i for i in list1]
```
| 5,496
|
14,976,968
|
I am trying to run a C++ program from Python. My problem is that every time I run:
```
subprocess.Popen(['sampleprog.exe'], stdin = iterate, stdout = myFile)
```
it only reads the first line in the file. Every time I enclose it with a while loop it ends up crashing because of the infinite loop. Is there any other way to read all the lines inside `testcases.txt`?
My Sample Code below:
```
someFile = open("testcases.txt","r")
saveFile = open("store.txt", "r+")
try:
with someFile as iterate:
while iterate is not False:
subprocess.Popen(['sampleprog.exe'],stdin = iterate,stdout = saveFile)
except EOFError:
someFile.close()
saveFile.close()
sys.exit()
```
|
2013/02/20
|
[
"https://Stackoverflow.com/questions/14976968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2090597/"
] |
Your line of code
```
10 open (23,file=outfile,status='old',access='append',err=10)
```
specifies that the `open` statement should transfer control to itself (label 10) in case an error is encountered, so any error could trigger an infinite loop. It also suppresses the output of error messages. If you want to just check for an error status, I would suggest using the `iostat` and/or `iomsg` (Fortran 2003) arguments:
```
open (23, file=outfile, status='old', access='append', iostat=ios, iomsg=str)
```
Here `ios` is an integer that will be zero if no errors occur and nonzero otherwise, and `str` is a character variable that will record the corresponding error message.
|
The `err=` argument in your `open` statement specifies a statement label to branch to should the `open` fail for some reason. Your code specifies a branch to the line labelled `10` which happens to be the line containing the `open` statement. This is probably not a good idea; a better idea would be to branch to a line which deals gracefully with an error from the `open` statement.
The warning from gfortran is spot on.
As to the apparent garbage in your output file, without sight of the code you use to write the garbage (or what you think are pearls perhaps) it's very difficult to diagnose and fix that problem.
| 5,498
|
47,959,991
|
I have gathered the required data from the Scopus website. My outputs have been saved in a list named "document". When I use the type method on each element of this list, Python returns this class:
```
"<class'selenium.webdriver.firefox.webelement.FirefoxWebElement'>"
```
Continuing, in order to solve this issue, I have used the text attribute like this:
`document=driver.find_elements_by_tag_name('td')`
```
for i in document:
print i.text
```
So, I could see the result in text format. But, when I call each element of the list independently, white space is printed in this code:
```
x=[]
for i in document:
x.append(i.text)
```
`print (x[2])` will return white space.
What should I do?
|
2017/12/24
|
[
"https://Stackoverflow.com/questions/47959991",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8461493/"
] |
As you have used the following line of code:
```
document=driver.find_elements_by_tag_name('td')
```
and see the output on the console as:
```
"<class'selenium.webdriver.firefox.webelement.FirefoxWebElement'>"
```
This is the expected behavior, as **`Selenium`** prints the reference of the **`Nodes`** matching your search criteria.
As per your code attempt, to print the text while leaving out the whitespace you can use the following code block:
```
x=[]
document = driver.find_elements_by_tag_name('td')
for i in document:
    # skip elements whose innerHTML is empty or missing
    if i.get_attribute("innerHTML"):
        x.append(i.get_attribute("innerHTML"))
print(x[2])
```
|
My code was correct, but the elements selected for display contained only whitespace. After selecting another element, the result was shown.
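For completeness, a small filtering sketch of my own (not from either answer) that keeps only the cells with non-whitespace text:

```
x = [el.text for el in document if el.text.strip()]
print(x)
```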
| 5,499
|
51,976,580
|
I am trying to change a month name to a date in Python but I'm getting an error:
```
ValueError: time data 'October' does not match format '%m/%d/%Y'
```
My CSV has values such as October in it, which I want to change to 10/01/2018.
```
import pandas as pd
import datetime
f = pd.read_excel('test.xlsx', 'Sheet1', index_col=None)
keep_col = ['Month']
new_f = f[keep_col]
f['Month'] = f['Month'].apply(lambda v: datetime.datetime.strptime(v, '%m/%d/%Y'))
new_f.to_csv("output.csv", index=False)
```
Any help would be appreciated
|
2018/08/22
|
[
"https://Stackoverflow.com/questions/51976580",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9488647/"
] |
Can't you just write a function mapping to each? In fact, a dictionary will do.
```
def convert_monthname(monthname):
table = {"January": datetime.datetime(month=1, day=1, year=2018),
"February": datetime.datetime(month=2, day=1, year=2018),
...}
return table.get(monthname, monthname)
f['Month'] = f['Month'].apply(convert_monthname)
```
|
The whole point of passing a format string like `%m/%d/%Y` to `strptime` is that you're specifying what format the input strings are going to be in.
You can see [the documentation](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior), but it's pretty obvious that a format like `%m/%d/%Y` is not going to handle strings like `'October'`. You're asking for a (zero-padded) month number, a slash, a (zero-padded) day number, a slash, and a four-digit year.
If you specify a format that actually *does* match your input, everything works without error:
```
>>> datetime.datetime.strptime('October', '%B')
datetime.datetime(1900, 10, 1, 0, 0)
```
However, that still isn't what you want, because the default year is 1900, not 2018. So, you either need to [`replace`](https://docs.python.org/3/library/datetime.html#datetime.datetime.replace) that, or pull the month out and build a new datetime object.
```
>>> datetime.datetime.strptime('October', '%B').replace(year=2018)
datetime.datetime(2018, 10, 1, 0, 0)
```
Also, notice that all of the strings that `strptime` knows about are locale-specific. If you've set an English-speaking locale, like `en_US.UTF-8`, or `C`, then `%B` means the English months, so everything is great. But if you've set, say, `br_PT.UTF-8`, then you're asking it to match the Brazilian Portuguese month names, like `Outubro` instead of `October`.1
---
1. Since I don't actually know Brazilian Portuguese, that was a pretty dumb example for me to pick… but Google says it's Outubro, and when has Google Translate ever led anyone wrong?
| 5,500
|
45,686,298
|
I have followed [official doc](https://learn.microsoft.com/en-us/visualstudio/python/debugging-cross-platform-remote) to install ptvsd 3.2.0, and put below code in the very beginning of target code.
```
import ptvsd
ptvsd.enable_attach('my_secret')
```
If run this code, I got error:
```
File "~/.virtualenvs/py3/lib/python3.6/site-packages/ptvsd/__init__.py", line 87, in enable_attach
return _attach_server().enable_attach(secret, address, certfile, keyfile, redirect_output)
File "~/.virtualenvs/py3/lib/python3.6/site-packages/ptvsd/__init__.py", line 31, in _attach_server
import ptvsd.attach_server
File "~/.virtualenvs/py3/lib/python3.6/site-packages/ptvsd/attach_server.py", line 40, in <module>
import ptvsd.debugger as vspd
File "~/.virtualenvs/py3/lib/python3.6/site-packages/ptvsd/debugger.py", line 49, in <module>
import ptvsd.repl as _vspr
ModuleNotFoundError: No module named 'ptvsd.repl'
```
|
2017/08/15
|
[
"https://Stackoverflow.com/questions/45686298",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4198142/"
] |
I had the same problem today. I checked the [last version](https://pypi.python.org/pypi/ptvsd/3.2.0) and it was released yesterday. I decided to roll back to version 3.1.0, and that is working fine for me.
I've reported the problem to the [gitter room](https://gitter.im/Microsoft/PTVS). I'll update this answer as soon as I get more information.
|
The `ptvsd` module is not using semantic versioning, which means you cannot safely update it whenever you like. The plan is to switch to semantic versioning when it is fully decoupled from Visual Studio.
`ptvsd==3.2.0` was released at the same time that Visual Studio 2017 Update 15.3 because they have dependencies on each other. If you also update Visual Studio then you should update to `ptvsd==3.2.0`. Otherwise, stay with an older version.
Currently Visual Studio Code requires `ptvsd<3`. It has not been updated for recent changes.
| 5,508
|
65,046,032
|
I'm trying to create an .exe from my python script. The script uses the cloudscraper package. When I create the .exe and I execute it, it shows the following error:
```
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\...\\MEI1....\\cloudscraper\\user_agent\\browsers.json'
```
The error ONLY APPEARS WHEN I TRY TO EXECUTE THE .exe file.
Why is this happening? Is cloudscraper unavailable with pyinstaller?
The project structure looks like this:
```
C:\Users\andre\OneDrive\Documentos\Programming\Python\Python3\proyect
proyect
|
|______ main.py
|
|______ services
|________ __init__.py
|_______ main_service.py
|_______ sql_service.py
```
This is very similar to my project structure; obviously, I cannot share the actual structure of my project.
|
2020/11/28
|
[
"https://Stackoverflow.com/questions/65046032",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14134850/"
] |
The solution I found is to copy the required folder into the .exe's path, but as of today **this cannot be done if you're using the** `--onefile` **modifier to create the .exe**; instead, build without it and copy the cloudscraper folder into the .exe's path, and that should work
**NOTE:**
The path is **NOT THE PARENT cloudscraper FOLDER**; it is the nested folder that contains the `user_agent` folder
|
Your .exe file is looking for browsers.json, but you didn't move that file to the same path as the .exe file. Working with pyinstaller requires good experience handling relative and absolute paths; otherwise, you will face this kind of error.
If cloudscraper is not part of your project tree (maybe it is a hidden import):
1. Try copying the folder named 'cloudscraper' from [here](https://github.com/VeNoMouS/cloudscraper) and pasting it in the same path as your .exe file
| 5,510
|
60,609,578
|
I have two sets, `set([1,2,3])` and `set([4,5,6])`. I want to add them in order to get the set `1,2,3,4,5,6`.
I tried:
```
b = set([1,2,3]).add(set([4,5,6]))
```
but it gives me this error:
```
Traceback (most recent call last):
File "<ipython-input-27-d2646d891a38>", line 1, in <module>
b = set([1,2,3]).add(set([4,5,6]))
TypeError: unhashable type: 'set'
```
**Question:** How to correct my code?
|
2020/03/09
|
[
"https://Stackoverflow.com/questions/60609578",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1601703/"
] |
You can take the union of sets with `|`
```
> set([1, 2, 4]) | set([5, 6, 7])
{1, 2, 4, 5, 6, 7}
```
Trying to use `add(set([4,5,6]))` is not working because it tries to add the entire set as a single element rather than the elements in the set — and since it's not hashable, it fails.
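If you do want to mutate the first set in place, `update` (rather than `add`) takes an iterable of elements:

```
a = set([1, 2, 3])
a.update([4, 5, 6])
print(a)  # {1, 2, 3, 4, 5, 6}
```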
|
You should use the union operation with the `.union` method or the `|` operator:
```py
>>> a = set([1, 2, 3])
>>> b = set([4, 5, 3])
>>> c = a.union(b)
>>> print(c)
{1, 2, 3, 4, 5}
>>> d = a | b
>>> print(d)
{1, 2, 3, 4, 5}
```
See the complete list of operations for [set](https://docs.python.org/3/library/stdtypes.html#set).
| 5,513
|
1,899,412
|
I have a web site where there are links like `<a href="http://www.example.com?read.php=123">` Can anybody show me how to get all the numbers (123, in this case) in such links using python? I don't know how to construct a regex. Thanks in advance.
|
2009/12/14
|
[
"https://Stackoverflow.com/questions/1899412",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
"If you have a problem, and decide to use regex, now you have two problems..."
If you are reading one particular web page and you know how it is formatted, then regex is fine - you can use S. Mark's answer. To parse a particular link, you can use Kimvai's answer. However, to get all the links from a page, you're better off using something more serious. Any regex solution you come up with will have flaws.
I recommend [mechanize](http://wwwsearch.sourceforge.net/mechanize/). If you notice, the `Browser` class there has a `links` method which gets you all the links in a page. It has the added benefit of being able to download the page for you =) .
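A hedged sketch of that approach (Python 2 era API; the URL is a placeholder):

```
import mechanize

br = mechanize.Browser()
br.open("http://www.example.com")
for link in br.links():
    print link.url  # the number can then be pulled off the query string
```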
|
This will work irrespective of how your links are formatted (e.g. if some look like `<a href="foo=123"/>` and some look like `<A TARGET="_blank" HREF='foo=123'/>`).
```
import re
from BeautifulSoup import BeautifulSoup
soup = BeautifulSoup(html)
p = re.compile(r'^.*=([\d]*)$')
for a in soup.findAll('a'):
m = p.match(a["href"])
if m:
print m.groups()[0]
```
| 5,514
|
62,163,714
|
`which python` returns:
`/Library/Frameworks/Python.framework/Versions/2.7/bin/python`
When I open my shell, the first couple lines of the shell read:
```
Python 3.6.8 (v3.6.8:3c6b436a57, Dec 24 2018, 02:04:31)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license()" for more information.
```
And finally, the issue is that when I am in my shell and I try to import pandas (which is installed), I receive `no module named 'pandas'`. Which, according to [this article](https://github.com/pandas-dev/pandas/issues/11604), is due to having multiple installations of Python and running Python from the system.
The solution proposed by the aforementioned article is to use conda. But will simply installing conda solve my issue of my shell returning something different from the terminal? I am really new to programming, so assume I don't know how any of this works!
|
2020/06/03
|
[
"https://Stackoverflow.com/questions/62163714",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12304196/"
] |
CSS:
```
.bi-headphones::before {
display: inline-block;
content: "";
background-image: url("data:image/svg+xml,%3Csvg width='1em' height='1em' viewBox='0 0 16 16' class='bi bi-headphones' fill='currentColor' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath fill-rule='evenodd' d='M8 3a5 5 0 0 0-5 5v4.5H2V8a6 6 0 1 1 12 0v4.5h-1V8a5 5 0 0 0-5-5z'/%3E%3Cpath d='M11 10a1 1 0 0 1 1-1h2v4a1 1 0 0 1-1 1h-1a1 1 0 0 1-1-1v-3zm-6 0a1 1 0 0 0-1-1H2v4a1 1 0 0 0 1 1h1a1 1 0 0 0 1-1v-3z'/%3E%3C/svg%3E");
background-repeat: no-repeat;
background-size: 1rem 1rem;
width:1rem; height:1rem;
}
```
Usage:
```
<i class="bi-headphones"></i>
```
I use a URL SVG encoder to prepare the SVG for CSS: <https://yoksel.github.io/url-encoder/>
|
I can't find in the docs that a Bootstrap icon is a Font Awesome icon.
After you install it, you use it as an SVG like this one:
```
<svg class="bi bi-app" width="1em" height="1em" viewBox="0 0 16 16" fill="currentColor" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" d="M11 2H5a3 3 0 0 0-3 3v6a3 3 0 0 0 3 3h6a3 3 0 0 0 3-3V5a3 3 0 0 0-3-3zM5 1a4 4 0 0 0-4 4v6a4 4 0 0 0 4 4h6a4 4 0 0 0 4-4V5a4 4 0 0 0-4-4H5z"/>
</svg>
```
or you can use it this way after you download the SVG to your project
```
<img src="/assets/img/bootstrap.svg" alt="" width="32" height="32" title="Bootstrap">
```
or in your CSS
```
.bi::before {
display: inline-block;
content: "";
background-image: url("data:image/svg+xml,<svg viewBox='0 0 16 16' fill='%23333' xmlns='http://www.w3.org/2000/svg'><path fill-rule='evenodd' d='M8 9.5a1.5 1.5 0 1 0 0-3 1.5 1.5 0 0 0 0 3z' clip-rule='evenodd'/></svg>");
background-repeat: no-repeat;
background-size: 1rem 1rem;
}
```
| 5,524
|
17,461,134
|
I am using Python 2.7.5 on Windows 7. I'm new to command line arguments. I am trying to do this exercise:
Write a program that reads in a string on the command line and returns a table of the letters which occur in the string with the number of times each letter occurs. For example:
```
$ python letter_counts.py "ThiS is String with Upper and lower case Letters."
a 2
c 1
d 1
# etc.
```
I know how to add command line arguments after a file name and output them as a list in cmd (the Windows command prompt).
However, I would like to learn how to work with command line arguments in the Python script itself, because I need to access the additional command line arguments and loop over them in order to count their letters.
Outside of cmd, I currently only have letter_counts.py as the filename - that's only one command line argument.
In Python, not cmd: how do I add and access command line arguments?
|
2013/07/04
|
[
"https://Stackoverflow.com/questions/17461134",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2547317/"
] |
You want to use the [`sys.argv`](http://docs.python.org/2/library/sys.html#sys.argv) list from the [sys](http://docs.python.org/2/library/sys.html#module-sys) module. It lets you access arguments passed in the command line.
For example, if your command line input was `python myfile.py a b c`, `sys.argv[0]` is myfile.py, `sys.argv[1]` is a, `sys.argv[2]` is b, and `sys.argv[3]` is c.
A running example (`testcode.py`):
```
if __name__ == "__main__":
import sys
print sys.argv
```
Then, running (in the command line):
```
D:\some_path>python testcode.py a b c
['testcode.py', 'a', 'b', 'c']
```
|
You can do something along these lines:
```
#!/usr/bin/python
import sys
print sys.argv
counts={}
for st in sys.argv[1:]:
for c in st:
counts.setdefault(c.lower(),0)
counts[c.lower()]+=1
for k,v in sorted(counts.items(), key=lambda t: t[1], reverse=True):
print "'{}' {}".format(k,v)
```
When invoked with `python letter_counts.py "ThiS is String with Upper and lower case Letters."` prints:
```
['./letter_counts.py', 'ThiS is String with Upper and lower case Letters.']
' ' 8
'e' 5
's' 5
't' 5
'i' 4
'r' 4
'a' 2
'h' 2
'l' 2
'n' 2
'p' 2
'w' 2
'c' 1
'd' 1
'g' 1
'o' 1
'u' 1
'.' 1
```
If you instead do not use quotes, like this: `python letter_counts.py ThiS is String with Upper and lower case Letters.` it prints:
```
['./letter_counts.py', 'ThiS', 'is', 'String', 'with', 'Upper', 'and', 'lower', 'case', 'Letters.']
'e' 5
's' 5
't' 5
'i' 4
'r' 4
'a' 2
'h' 2
'l' 2
'n' 2
'p' 2
'w' 2
'c' 1
'd' 1
'g' 1
'o' 1
'u' 1
'.' 1
```
Note the difference in the list `sys.argv` at the top of the output. The result is that whitespace between words is lost, while the remaining letter counts are the same.
| 5,525
|
47,069,829
|
This may be a repeat of earlier questions about running a MySQL query on a remote machine using Python.
I'm using pymysql and SSHTunnelForwarder for this.
The MySQL DB is located on a different server (192.168.10.13, port 5555).
I'm trying to use the following snippet:
```
with SSHTunnelForwarder(
(host, ssh_port),
ssh_username = ssh_user,
ssh_password = ssh_pass,
remote_bind_address=('127.0.0.1', 5555)) as server:
with pymysql.connect("192.168.10.13", user, password, port=server.local_bind_port) as connection:
cursor = connection.cursor()
output = cursor.execute("select * from billing_cdr limit 1")
print output
```
Is this the correct approach?
I see the following error:
```
sshtunnel.BaseSSHTunnelForwarderError: Could not establish session to SSH gateway
```
Also, is there any other recommended library to use?
|
2017/11/02
|
[
"https://Stackoverflow.com/questions/47069829",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4670651/"
] |
Found this to be working after some digging.
```
with SSHTunnelForwarder(
("192.168.10.13", 22),
ssh_username = ssh_user,
ssh_password = ssh_pass,
remote_bind_address=('127.0.0.1', 5555)) as server:
with pymysql.connect('127.0.0.1', user, password, port=server.local_bind_port) as connection:
output = connection.execute("select * from db.table limit 1")
print output
```
|
You can do it like this:
```
import MySQLdb

conn = MySQLdb.connect(
    host=host,
    port=port,
    user=username,
    passwd=password,
    db=database,
    charset='utf8')
cur = conn.cursor()
cur.execute("select * from billing_cdr limit 1")
rows = cur.fetchall()
for row in rows:
a=row[0]
b=row[1]
conn.close()
```
| 5,526
|
16,784,154
|
With the help of joksnet's programs [here](https://stackoverflow.com/questions/4460921/extract-the-first-paragraph-from-a-wikipedia-article-python) I've managed to get the plaintext Wikipedia articles that I'm looking for.
The text returned includes Wiki markup for the headings, so for example, the sections of the [Albert Einstein article](http://en.wikipedia.org/wiki/Albert_einstein) are returned like this:
```
==Biography==
===Early life and education===
blah blah blah
```
What I'd really like to do is feed the retrieved text to a function and wrap all the top level sections in bold html tags and the second level sections in italics, like this:
```
<b>Biography</b>
<i>Early life and education</i>
blah blah blah
```
But I'm afraid I don't know how to even start, at least not without making the function dangerously naive. Do I need to use regular expressions?
Any suggestions greatly appreciated.
PS Sorry if "parsing" is too strong a word for what I'm trying to do here.
|
2013/05/28
|
[
"https://Stackoverflow.com/questions/16784154",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/728286/"
] |
I think the best way here would be to let MediaWiki take care of the parsing. I don't know the library you're using, but basically this is the difference between
<http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Albert%20Einstein&rvprop=content>
which returns the raw wikitext and
<http://en.wikipedia.org/w/api.php?action=query&prop=revisions&titles=Albert%20Einstein&rvprop=content&rvparse>
which returns the parsed HTML.
|
You can use regex and scraping modules like Scrapy and BeautifulSoup to parse and scrape wiki pages.
Now that you have clarified your question, I suggest you use the py-wikimarkup module that is hosted on GitHub. The link is <https://github.com/dcramer/py-wikimarkup/>. I hope that helps.
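For just the heading case in the question, a minimal regex sketch of my own (assuming well-formed `==...==` markers on their own lines):

```
import re

def wrap_headings(text):
    # handle ===...=== first so the ==...== pattern doesn't swallow it
    text = re.sub(r'(?m)^===([^=]+)===\s*$', r'<i>\1</i>', text)
    text = re.sub(r'(?m)^==([^=]+)==\s*$', r'<b>\1</b>', text)
    return text

print(wrap_headings("==Biography==\n===Early life and education===\nblah"))
```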
| 5,527
|
19,542,883
|
I'm trying to make a program that repeatedly asks an user for an input until the input is of a specific type. My code:
```
value = input("Please enter the value")
while isinstance(value, int) == False:
print ("Invalid value.")
value = input("Please enter the value")
if isinstance(value, int) == True:
break
```
Based on my understanding of python, the line
```
if isinstance(value, int) == True
break
```
should end the while loop if **value** is an integer, but it doesn't.
My question is:
a) How would I make a code that would ask the user for an input, until the input is an integer?
b) Why doesn't my code work?
|
2013/10/23
|
[
"https://Stackoverflow.com/questions/19542883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2826154/"
] |
The reason your code doesn't work is that `input()` will always return a string, which causes `isinstance(value, int)` to always evaluate to `False`.
You probably want:
```
value = ''
while not value.strip().isdigit():
value = input("Please enter the value")
```
|
`input` always returns a string; you have to convert it to `int` yourself.
Try this snippet:
```
while True:
try:
value = int(input("Please enter the value: "))
except ValueError:
print ("Invalid value.")
else:
break
```
| 5,529
|
49,993,687
|
Let's say I am running multiple Python processes (not threads) on a multi-core CPU (say 4 cores). The GIL is process level, so the GIL within a particular process won't affect other processes.
My question here is whether the GIL within one process will take hold of only a single core out of the 4 or of all 4 cores.
If one process locked all cores at once, then multiprocessing would be no better than multithreading in Python. If not, how do the cores get allocated to the various processes?
>
> As an observation, in my system which is 8 cores (4\*2 because of
> hyperthreading), when I run a single CPU bound process, the CPU usage
> of 4 out of 8 cores goes up.
>
>
>
Simplifying this:
4 Python threads (in one process) running on a 4-core CPU will take more time than a single thread doing the same work (considering the work is fully CPU bound). Will 4 different processes doing that amount of work reduce the time taken by a factor of nearly 4?
|
2018/04/24
|
[
"https://Stackoverflow.com/questions/49993687",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4510252/"
] |
Python doesn't do anything to [bind processes or threads to cores](https://en.wikipedia.org/wiki/Processor_affinity); it just leaves things up to the OS. When you spawn a bunch of independent processes (or threads, but that's harder to do in Python), the OS's scheduler will quickly and efficiently get them spread out across your cores without you, or Python, needing to do anything (barring really bad pathological cases).
---
The GIL isn't relevant here. I'll get to that later, but first let's explain what *is* relevant.
You don't have 8 cores. You have 4 cores, each of which is [hyperthreaded](https://en.wikipedia.org/wiki/Hyper-threading).
Modern cores have a whole lot of "super-scalar" capacity. Often, the instructions queued up in a pipeline aren't independent enough to take full advantage of that capacity. What hyperthreading does is to allow the core to go fetch other instructions off a second pipeline when this happens, which are virtually guaranteed to be independent. But it only allows that, not requires, because in some cases (which the CPU can usually decide better than you) the cost in cache locality would be worse than the gains in parallelism.
So, depending on the actual load you're running, with four hyperthreaded cores, you may get full 800% CPU usage, or you may only get 400%, or (pretty often) somewhere in between.
I'm assuming your system is configured to report 8 cores rather than 4 to userland, because that's the default, and that you have at least 8 processes or a pool with the default proc count and at least 8 tasks—obviously, if none of that is true, you can't possibly get 800% CPU usage…
I'm also assuming you aren't using explicit locks, other synchronization, `Manager` objects, or anything else that will serialize your code. If you do, obviously you can't get full parallelism.
And I'm also assuming you aren't using (mutable) shared memory, like a `multiprocessing.Array` that everyone writes to. This can cause cache and page conflicts that can be almost as bad as explicit locks.
---
So, what's the deal with the GIL? Well, if you were running multiple threads within a process, and they were all CPU-bound, and they were all spending most of that time running Python code (as opposed to, say, spending most of that time running numpy operations that release the GIL), only one thread would run at a time. You could see:
* 100% consistently on a single core, while the rest sit at 0%.
* 100% pingponging between two or more cores, while the rest sit at 0%.
* 100% pingponging between two or more cores, while the rest sit at 0%, but with some noticeable overlap where two cores at once are way over 0%. This last one might *look* like parallelism, but it isn't—that's just the switching overhead becoming visible.
But you're not running multiple threads, you're running separate processes, each of which has its own entirely independent GIL. And that's why you're seeing four cores at 100% rather than just one.
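To check the last part of the question empirically, here is a small timing sketch of my own (the workload and numbers are illustrative; the exact speedup depends on the machine):

```
import time
from multiprocessing import Pool

def burn(n):
    # pure-Python CPU-bound work
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == '__main__':
    work = [10**7] * 4

    t0 = time.time()
    for n in work:
        burn(n)
    print('serial: %.2fs' % (time.time() - t0))

    t0 = time.time()
    pool = Pool(4)
    pool.map(burn, work)
    pool.close()
    pool.join()
    print('4 processes: %.2fs' % (time.time() - t0))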
|
Process to CPU/CPU core allocation is handled by the Operating System.
| 5,534
|
54,193,625
|
I'm trying to fetch data for a Facebook account using a Selenium browser in Python, but I can't find which element to target for clicking on the export button.
See attached screenshot[](https://i.stack.imgur.com/fd2Mf.png)
I tried, but it seems to give me an error for the class.
```
def login_facebook(self, username, password):
chrome_options = webdriver.ChromeOptions()
preference = {"download.default_directory": self.section_value[24]}
chrome_options.add_experimental_option("prefs", preference)
self.driver = webdriver.Chrome(self.section_value[20], chrome_options=chrome_options)
self.driver.get(self.section_value[25])
username_field = self.driver.find_element_by_id("email")
password_field = self.driver.find_element_by_id("pass")
username_field.send_keys(username)
self.driver.implicitly_wait(10)
password_field.send_keys(password)
self.driver.implicitly_wait(10)
self.driver.find_element_by_id("loginbutton").click()
self.driver.implicitly_wait(10)
self.driver.get("https://business.facebook.com/select/?next=https%3A%2F%2Fbusiness.facebook.com%2F")
self.driver.get("https://business.facebook.com/home/accounts?business_id=698597566882728")
self.driver.get("https://business.facebook.com/adsmanager/reporting/view?act="
"717590098609803&business_id=698597566882728&selected_report_id=23843123660810666")
# self.driver.get("https://business.facebook.com/adsmanager/manage/campaigns?act=717590098609803&business_id"
# "=698597566882728&tool=MANAGE_ADS&date={}-{}_{}%2Clast_month".format(self.last_month,
# self.first_day_month,
# self.last_day_month))
self.driver.find_element_by_id("export_button").click()
self.driver.implicitly_wait(10)
self.driver.find_element_by_class_name("_43rl").click()
self.driver.implicitly_wait(10)
```
Can you please let me know how I can click on the Export button?
|
2019/01/15
|
[
"https://Stackoverflow.com/questions/54193625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7684584/"
] |
Well, I'm able to resolve it by using xpath.
Here is the solution
```
self.driver.find_element_by_xpath("//*[contains(@class, '_271k _271m _1qjd layerConfirm')]").click()
```
|
Running automation scripts on applications like Facebook or YouTube is quite hard because they are huge corporations and their web applications are developed by some of the world's best developers, but it's not impossible. Sometimes elements are generated dynamically, sometimes they are hidden or inactive, so you can't just go and click.
One solution is to perform the click action by XPath, relative or absolute. There is no id specified as "export_button" in the resource file; I think this might help you.
You can also find the element by class name or CSS selector. As I see in the screenshot, the class name "_271K _271m _1qjd layerConfirm" is present, so you can perform the click action on that.
| 5,535
|
55,915,109
|
I'm trying to create a more dynamic program where I define a function's name based on a variable string.
Trying to define a function using a variable like this:
```py
__func_name__ = "fun"
def __func_name__():
print('Hello from ' + __func_name__)
fun()
```
I wanted this to output:
```
Hello from fun
```
The only examples I found were:
[how to define a function from a string using python](https://stackoverflow.com/questions/5920120/how-to-define-a-function-from-a-string-using-python)
|
2019/04/30
|
[
"https://Stackoverflow.com/questions/55915109",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6828625/"
] |
You can update `globals`:
```
>>> globals()["my_function_name"] = lambda x: x + 1
>>> my_function_name(10)
11
```
But it is usually more convenient to use a dictionary that maps names to the related functions:
```
def __func_name__():
    print('Hello')

my_func_dict = {
    "__func_name__" : __func_name__,
}
```
And then use the dict to take the funtion back using the name as a key:
```
my_func_dict["__func_name__"]()
```
|
This should do it as well, using `globals()` to bind the function name on the fly.
```
def __func_name__():
print('Hello from ' + __func_name__.__name__)
globals()['fun'] = __func_name__
fun()
```
The output will be
```
Hello from __func_name__
```
| 5,538
|
47,912,701
|
There is a solution posted [here](https://stackoverflow.com/questions/323972/is-there-any-way-to-kill-a-thread-in-python) to create a stoppable thread. However, I am having some problems understanding how to implement this solution.
Using the code...
```
import threading
class StoppableThread(threading.Thread):
"""Thread class with a stop() method. The thread itself has to check
regularly for the stopped() condition."""
def __init__(self):
super(StoppableThread, self).__init__()
self._stop_event = threading.Event()
def stop(self):
self._stop_event.set()
def stopped(self):
return self._stop_event.is_set()
```
How can I create a thread that runs a function that prints "Hello" to the terminal every 1 second. After 5 seconds I use the .stop() to stop the looping function/thread.
Again, I am having trouble understanding how to implement this stopping solution; here is what I have so far.
```
import threading
import time
class StoppableThread(threading.Thread):
"""Thread class with a stop() method. The thread itself has to check
regularly for the stopped() condition."""
def __init__(self):
super(StoppableThread, self).__init__()
self._stop_event = threading.Event()
def stop(self):
self._stop_event.set()
def stopped(self):
return self._stop_event.is_set()
def funct():
while not testthread.stopped():
time.sleep(1)
print("Hello")
testthread = StoppableThread()
testthread.start()
time.sleep(5)
testthread.stop()
```
The code above creates the thread testthread, which can be stopped by the testthread.stop() command. From what I understand this just creates an empty thread... Is there a way I can create a thread that runs funct() so that the thread ends when I use .stop()? Basically I do not know how to implement the StoppableThread class to run the funct() function as a thread.
Example of a regular threaded function...
```
import threading
import time
def example():
x = 0
while x < 5:
time.sleep(1)
print("Hello")
x = x + 1
t = threading.Thread(target=example)
t.start()
t.join()
#example of a regular threaded function.
```
|
2017/12/20
|
[
"https://Stackoverflow.com/questions/47912701",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9124164/"
] |
There are a couple of problems with how you are using the code in your original example. First of all, you are not passing any constructor arguments to the base constructor. This is a problem because, as you can see in the plain-Thread example, constructor arguments are often necessary. You should rewrite `StoppableThread.__init__` as follows:
```
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._stop_event = threading.Event()
```
Since you are using Python 3, you do not need to provide arguments to `super`. Now you can do
```
testthread = StoppableThread(target=funct)
```
This is still not an optimal solution, because `funct` uses an external variable, `testthread` to stop itself. While this is OK-ish for a tiny example like yours, using global variables like that normally causes a huge maintenance burden and you don't want to do it. A much better solution would be to extend the generic `StoppableThread` class for your particular task, so you can access `self` properly:
```
class MyTask(StoppableThread):
def run(self):
while not self.stopped():
time.sleep(1)
print("Hello")
testthread = MyTask()
testthread.start()
time.sleep(5)
testthread.stop()
```
If you absolutely do not want to extend `StoppableThread`, you can use the [`current_thread`](https://docs.python.org/3/library/threading.html#threading.current_thread) function in your task in preference to reading a global variable:
```
from threading import current_thread
import time

def funct():
    while not current_thread().stopped():
        time.sleep(1)
        print("Hello")

testthread = StoppableThread(target=funct)
testthread.start()
time.sleep(5)
testthread.stop()
```
|
Inspired by the above solution, I created a small library, ants, for this problem.
Example
```
from ants import worker
@worker
def do_stuff():
...
thread code
...
do_stuff.start()
...
do_stuff.stop()
```
In the above example `do_stuff` will run in a separate thread, being called in a `while 1:` loop.
You can also have triggering events, e.g. in the above replace `do_stuff.start()` with `do_stuff.start(lambda: time.sleep(5))` and it will trigger every 5th second.
The library is very new and work is ongoing on GitHub <https://github.com/fa1k3n/ants.git>
| 5,539
|
71,719,341
|
I made a small pygame app that plays certain wav files on keypress using pygame.mixer.
All seems to work just fine except that if you minimize the pygame window, the program stops working until you open it again. Is there a way to solve this issue, or an alternative way to implement sound playing in Python?
This is my repository: <https://github.com/Souvlaki42/HighPlayer>
|
2022/04/02
|
[
"https://Stackoverflow.com/questions/71719341",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15354246/"
] |
Several ways. Here are a couple:
Standard SQL:
```sql
SELECT DISTINCT
CASE WHEN col1 < col2 THEN col1 ELSE col2 END AS col1
, CASE WHEN col1 < col2 THEN col2 ELSE col1 END AS col2
FROM tbl
;
```
or (MySQL supports this one):
```sql
SELECT DISTINCT
LEAST(col1, col2) AS col1
, GREATEST(col1, col2) AS col2
FROM tbl
;
```
There are other approaches, which have slightly different behavior.
For instance, we might want to keep ('B', 'A') if ('A', 'B') doesn't also exist.
Another thought: we could prevent this issue by correcting the data on INSERT, using similar LEAST / GREATEST logic in the INSERT (or a trigger), to be sure col1 is always less than or equal to col2.
Then add a unique constraint on (col1, col2) and a table check constraint (col1 <= col2) to prevent duplicates and reflections.
|
```
select distinct least(col_1, col_2), greatest(col_1, col_2)
from the_table
order by 1
```
| 5,542
|
66,239,918
|
I wrote this code to return a list of skills. If the user already has a specific skill, the list-item should be updated to `active = false`.
This is my initial code:
```js
setup () {
const user = ref ({
id: null,
skills: []
});
const available_skills = ref ([
{value: 'css', label: 'CSS', active: true},
{value: 'html', label: 'HTML', active: true},
{value: 'php', label: 'PHP', active: true},
{value: 'python', label: 'Python', active: true},
{value: 'sql', label: 'SQL', active: true},
]);
const computed_skills = computed (() => {
let result = available_skills.value.map ((skill) => {
if (user.value.skills.map ((sk) => {
return sk.name;
}).includes (skill.label)) {
skill.active = false;
}
return skill;
});
return result;
})
return {
user, computed_skills
}
},
```
This works fine on the initial rendering. But if I remove a skill from the user doing
`user.skills.splice(index, 1)` the `computed_skills` are not being updated.
Why is that the case?
|
2021/02/17
|
[
"https://Stackoverflow.com/questions/66239918",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7510971/"
] |
In JavaScript, `user` (an object) is a reference; the reference itself does not change when you change the underlying properties, hence the computed property is not triggered.
It is kind of like a computed property over an array: if new values get pushed into the array, the reference to the array does not change, only its contents do.
**Workaround:**
Try reassigning `user` so the reference itself changes (shadow the variable).
|
`slice` just returns a copy of the changed array; it doesn't change the original instance, hence the computed property is not reactive.
Try using below code
```
user.skills = user.skills.splice(index, 1);
```
| 5,543
|
66,670,964
|
I have the following python code:
```
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("example.com", 443))
client.send(b'POST /api HTTPS/1.1\r\nHost: example.com\r\nUser-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:83.0) Gecko/20100101 Firefox/83.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: application/json\r\nConnection: keep-alive\r\nContent-Type: application/json\r\nAuthoriation: aa\r\nContent-Length: 22\r\n\r\n')
client.send(b'{"jsonPostData": "aaa"}')
response = client.recv(4096)
response = repr(response)
```
But it returns a 400 Bad Request error with no content. I tried the same headers and JSON with requests and aiohttp and it works in both; any idea what I am doing wrong?
|
2021/03/17
|
[
"https://Stackoverflow.com/questions/66670964",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15414108/"
] |
It will **not overwrite** the secret if you create it manually in the console or using AWS SDK. The `aws_secretsmanager_secret` creates only the secret, but not its value. To set value you have to use [aws\_secretsmanager\_secret\_version](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/secretsmanager_secret_version).
Anyway, this is something you can easily test yourself. Just run your code with a secret, update its value in AWS console, and re-run terraform apply. You should see no change in the secret's value.
|
You could have Terraform [generate random secret values](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_random_password) for you using:
```
data "aws_secretsmanager_random_password" "dev_password" {
password_length = 16
}
```
Then create [the secret metadata](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/secretsmanager_secret) using:
```
resource "aws_secretsmanager_secret" "dev_secret" {
name = "dev-secret"
recovery_window_in_days = 7
}
```
And then by creating [the secret version](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/secretsmanager_secret_version):
```
resource "aws_secretsmanager_secret_version" "dev_sv" {
secret_id = aws_secretsmanager_secret.dev_secret.id
secret_string = data.aws_secretsmanager_random_password.dev_password.random_password
lifecycle {
ignore_changes = [secret_string, ]
}
}
```
Adding the 'ignore\_changes' [lifecycle block](https://www.terraform.io/language/meta-arguments/lifecycle#ignore_changes) to the secret version will prevent Terraform from overwriting the secret once it has been created. I tested this just now to confirm that a new secret with a new random value will be created, and subsequent executions of `terraform apply` do not overwrite the secret.
| 5,545
|
52,277,877
|
I'm trying to scrape this website using Python and Selenium. However, all the information I need is not on the main page, so how would I click the links in the 'Application number' column one by one, go to that page, scrape the information, and then return to the original page?
I've tried:
```
def getData():
data = []
select = Select(driver.find_elements_by_xpath('//*[@id="node-41"]/div/div/div/div/div/div[1]/table/tbody/tr/td/a/@href'))
list_options = select.options
for item in range(len(list_options)):
item.click()
driver.get(url)
```
URL: <http://www.scilly.gov.uk/planning-development/planning-applications>
Screenshot of the site:
[](https://i.stack.imgur.com/oDOMT.png)
|
2018/09/11
|
[
"https://Stackoverflow.com/questions/52277877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10347887/"
] |
To open multiple hrefs within a webtable to scrape through selenium you can use the following solution:
* Code Block:
```
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
hrefs = []
options = Options()
options.add_argument("start-maximized")
options.add_argument("disable-infobars")
options.add_argument("--disable-extensions")
options.add_argument("--disable-gpu")
options.add_argument("--no-sandbox")
driver = webdriver.Chrome(chrome_options=options, executable_path=r'C:\WebDrivers\ChromeDriver\chromedriver_win32\chromedriver.exe')
driver.get('http://www.scilly.gov.uk/planning-development/planning-applications')
windows_before = driver.current_window_handle # Store the parent_window_handle for future use
elements = WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "td.views-field.views-field-title>a"))) # Induce WebDriverWait for the visibility of the desired elements
for element in elements:
hrefs.append(element.get_attribute("href")) # Collect the required href attributes and store in a list
for href in hrefs:
driver.execute_script("window.open('" + href +"');") # Open the hrefs one by one through execute_script method in a new tab
WebDriverWait(driver, 10).until(EC.number_of_windows_to_be(2)) # Induce WebDriverWait for the number_of_windows_to_be 2
windows_after = driver.window_handles
new_window = [x for x in windows_after if x != windows_before][0] # Identify the newly opened window
# driver.switch_to_window(new_window) <!---deprecated>
driver.switch_to.window(new_window) # switch_to the new window
# perform your webscraping here
print(driver.title) # print the page title or your perform your webscraping
driver.close() # close the window
# driver.switch_to_window(windows_before) <!---deprecated>
driver.switch_to.window(windows_before) # switch_to the parent_window_handle
driver.quit() #Quit your program
```
* Console Output:
```
Planning application: P/18/064 | Council of the ISLES OF SCILLY
Planning application: P/18/063 | Council of the ISLES OF SCILLY
Planning application: P/18/062 | Council of the ISLES OF SCILLY
Planning application: P/18/061 | Council of the ISLES OF SCILLY
Planning application: p/18/059 | Council of the ISLES OF SCILLY
Planning application: P/18/058 | Council of the ISLES OF SCILLY
Planning application: P/18/057 | Council of the ISLES OF SCILLY
Planning application: P/18/056 | Council of the ISLES OF SCILLY
Planning application: P/18/055 | Council of the ISLES OF SCILLY
Planning application: P/18/054 | Council of the ISLES OF SCILLY
```
---
References
----------
You can find a couple of relevant detailed discussions in:
* [WebScraping JavaScript-Rendered Content using Selenium in Python](https://stackoverflow.com/questions/59144599/webscraping-javascript-rendered-content-using-selenium-in-python/59156403#59156403)
* [StaleElementReferenceException even after adding the wait while collecting the data from the wikipedia using web-scraping](https://stackoverflow.com/questions/65623799/staleelementreferenceexception-even-after-adding-the-wait-while-collecting-the-d/65631425#65631425)
* [Unable to access the remaining elements by xpaths in a loop after accessing the first element- Webscraping Selenium Python](https://stackoverflow.com/questions/59706039/unable-to-access-the-remaining-elements-by-xpaths-in-a-loop-after-accessing-the/59712944#59712944)
* [How to open each product within a website in a new tab for scraping using Selenium through Python](https://stackoverflow.com/questions/57640584/how-to-open-each-product-within-a-website-in-a-new-tab-for-scraping-using-seleni/57641549#57641549)
* [How to open multiple hrefs within a webtable to scrape through selenium](https://stackoverflow.com/questions/52277877/how-to-open-multiple-hrefs-within-a-webtable-to-scrape-through-selenium/52281843#52281843)
|
What you can do is the following:
```
from selenium import webdriver
import time

url = "http://www.scilly.gov.uk/planning-development/planning-applications"
browser = webdriver.Chrome()  # or whatever driver you use
browser.get(url)
# class names with spaces are invalid here, so pick a single class
browser.find_element_by_class_name("views-field-title").click()
# or use browser.find_element_by_xpath("xpath")
# Note: you will need to change the class name to click a different item in the table
time.sleep(5)  # not the best way to do this, but it's simple; just to make sure things load
# It is here that you will be able to scrape the new url; I will not post that, as you can scrape what you want.
# When you are done scraping, you can return to the previous page with this:
browser.execute_script("window.history.go(-1)")
```
hope this is what you are looking for.
| 5,546
|
20,952,629
|
I am trying to install the Python Cassandra driver and constantly get the error "vcvarsall.bat not found".
I tried lots of solutions already posted on stackoverflow, but none of them worked.
Here is what I tried:
1. Using the mingw gcc compiler. I followed every step, setting the path variable etc.,
and then tried `setup.py install --compiler=mingw32`, but again got the same error.
2. I have VS08 installed, and the path variable `VS80COMNTOOLS=C:\Program Files\Microsoft Visual Studio 8\Common7\Tools\` is set. This is also not working.
3. Installed the mingw base tools, make tool, and gcc compiler, and then again followed step one, but no help.
Edit:
Operating system: Windows Enterprise N x64.
Python version: 2.7.
I tried to install it on my windows server machine using the first step and it worked fine, but it is not working on my laptop.
|
2014/01/06
|
[
"https://Stackoverflow.com/questions/20952629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2640953/"
] |
You will need to shorten the `memo` field to be able to use it in this manner. Try:
```
CAST(Desc AS NVARCHAR(2000))
```
|
Use a `HAVING` clause instead of a `WHERE` condition in the query:
```
SELECT COUNT(*) AS NBoccurrence, descr
FROM tbl_nm
GROUP BY descr
HAVING COUNT(*) > 1 -- replace with your condition
```
| 5,548
|
25,996,880
|
This should be easy, but as ever Python's wildly overcomplicated datetime mess is making simple things complicated...
So I've got a time string in HH:MM format (eg. '09:30'), which I'd like to turn into a datetime with today's date. Unfortunately the default date is Jan 1 1900:
```
>>> datetime.datetime.strptime(time_str, "%H:%M")
datetime.datetime(1900, 1, 1, 9, 50)
```
[datetime.combine](https://docs.python.org/2/library/datetime.html#datetime.datetime.combine) looks like it's meant exactly for this, but I'll be darned if I can figure out how to get the time parsed so it'll accept it:
```
now = datetime.datetime.now()
>>> datetime.datetime.combine(now, time.strptime('09:30', '%H:%M'))
TypeError: combine() argument 2 must be datetime.time, not time.struct_time
>>> datetime.datetime.combine(now, datetime.datetime.strptime('09:30', '%H:%M'))
TypeError: combine() argument 2 must be datetime.time, not datetime.datetime
>>> datetime.datetime.combine(now, datetime.time.strptime('09:30', '%H:%M'))
AttributeError: type object 'datetime.time' has no attribute 'strptime'
```
This monstrosity works...
```
>>> datetime.datetime.combine(now,
datetime.time(*(time.strptime('09:30', '%H:%M')[3:6])))
datetime.datetime(2014, 9, 23, 9, 30)
```
...but there **must** be a better way to do that...!?
|
2014/09/23
|
[
"https://Stackoverflow.com/questions/25996880",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/218340/"
] |
The [function signature](https://docs.python.org/3/library/datetime.html#datetime.datetime.combine) says:
```
datetime.combine(date, time)
```
so pass a `datetime.date` object as the first argument, and a `datetime.time` object as the second argument:
```
>>> import datetime as dt
>>> today = dt.date.today()
>>> time = dt.datetime.strptime('09:30', '%H:%M').time()
>>> dt.datetime.combine(today, time)
datetime.datetime(2014, 9, 23, 9, 30)
```
|
`pip install python-dateutil`
```
>>> from dateutil import parser as dt_parser
>>> dt_parser.parse('07:20')
datetime.datetime(2016, 11, 25, 7, 20)
```
<https://dateutil.readthedocs.io/en/stable/parser.html>
| 5,549
|
56,893,578
|
I am trying to make a python program (python 3.6) that writes commands to the terminal to download a specific youtube video (using youtube-dl).
If I go on terminal and execute the following command:
```
cd; cd Desktop; youtube-dl "https://www.youtube.com/watch?v=b91ovTKCZGU"
```
It will download the video to my desktop. However, if I execute the below code, which should be doing the same command on terminal, it does not throw an error but also does not download that video.
```py
import subprocess
cmd = ["cd;", "cd", "Desktop;", "youtube-dl", "\"https://www.youtube.com/watch?v=b91ovTKCZGU\""]
print(subprocess.call(cmd, stderr=subprocess.STDOUT,shell=True))
```
It seems that this just outputs 0. I do not think there is any kind of error 0 that exists (there are errors 126 and 127). So if it is not throwing an error, why does it also not download the video?
Update:
I have fixed the above code by passing in a string, and have checked that youtube-dl is installed in my default python and is also in the folder where I want to download the videos, but it's still throwing error 127, meaning the command "youtube-dl" is not found.
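For reference, here is a sketch of the single-string form mentioned in the update (the quoting is illustrative; with `shell=True` the whole string goes through the shell, and a `cd` only affects that subshell, so the commands have to be chained):
```py
import subprocess

# one shell string instead of a list; && chains the directory change and the download
cmd = 'cd ~/Desktop && youtube-dl "https://www.youtube.com/watch?v=b91ovTKCZGU"'
print(subprocess.call(cmd, stderr=subprocess.STDOUT, shell=True))
```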
|
2019/07/04
|
[
"https://Stackoverflow.com/questions/56893578",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11303221/"
] |
Binary pattern matching can dissect the string:
```
data = ["AM00", "CC11", "CB11"]
for <<key::binary-size(2), value::binary>> <- data, into: %{} do
{key, value}
end
```
output:
```
%{"AM" => "00", "CB" => "11", "CC" => "11"}
```
That only works for single byte characters.
To handle `UTF-8` characters as well as `ASCII` characters:
```
data = ["èü00", "C€11", "€ä11"]
for <<char1::utf8, char2::utf8, rest::binary>> <- data, into: %{} do
{<<char1::utf8, char2::utf8>>, rest}
end
```
output:
```
%{"C€" => "11", "èü" => "00", "€ä" => "11"}
```
|
>
> I need to transform this list [into a] map...I tried with `Enum.map`.
>
>
>
You can also use [Enum.reduce](https://hexdocs.pm/elixir/Enum.html#reduce/3) instead of `Enum.map` when you want the result to be a map. The following example uses `Enum.reduce` and it can handle single byte `ASCII` characters as well as `UTF-8` (multi-byte) characters:
```
["AM00", "CC11", "CB11"]
initial value for acc variable
|
V
|> Enum.reduce(%{},
fn str, acc ->
{first_two, last_two} = String.split_at(str, 2)
Map.put(acc, first_two, last_two) # return the new value for acc
end
)
```
output:
```
%{"AM" => "00", "CB" => "11", "CC" => "11"}
```
And:
```
["èü00", "C€11", "€ä11"]
|> Enum.reduce(%{},
fn str, acc ->
{first_two, last_two} = String.split_at(str, 2)
Map.put(acc, first_two, last_two)
end
)
```
output:
```
%{"C€" => "11", "èü" => "00", "€ä" => "11"}
```
| 5,550
|
9,227,859
|
I'm using the code below (Python 2.7 and Python 3.2) to show an Open Files dialog that supports multiple-selection. On Linux filenames is a python list, but on Windows filenames is returned as `{C:/Documents and Settings/IE User/My Documents/VPC_EULA.txt} {C:/Documents and Settings/IE User/My Documents/VPC_ReadMe.txt}`, i.e. a raw TCL list.
Is this a python bug, and does anyone here know a good way to convert the raw TCL list into a python list?
```
if sys.hexversion >= 0x030000F0:
import tkinter.filedialog as filedialog
else:
import tkFileDialog as filedialog
options = {}
options['filetypes'] = [('vnote files', '.vnt') ,('all files', '.*')]
options['multiple'] = 1
filenames = filedialog.askopenfilename(**options)
```
|
2012/02/10
|
[
"https://Stackoverflow.com/questions/9227859",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/94078/"
] |
The problem is an “interesting” interaction between Tcl, Tk and Python, each of which is doing something sensible on its own but where the combination isn't behaving correctly. The deep issue is that Tcl and Python have *very* different ideas about what types mean, and this is manifesting itself as a value that Tcl sees as a list but Python sees as a string (with the code in Tk assuming that it doesn't need to be careful to be clean for Python). Arguably the Python interface should use the fact that it can know that a Tcl list will be coming back from a multiple selection and hide this, but it doesn't so you're stuck.
I can (and should!) fix this in Tk, but I don't know how long it would take for the fix to find its way back to you that way.
---
[EDIT]: This is now fixed (with [this](https://core.tcl.tk/tk/vpatch?from=dfb108b671df7455&to=c309fddc73beb7b6) patch) in the Tk 8.5 maintenance branch and on the main development branch. I can't predict when you'll be able to get a fixed version unless you grab the source out of our fossil repository and build it yourself.
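In the meantime, a Python-side workaround is to ask the Tcl interpreter to split its own list representation (a minimal sketch, using the Python 3 spelling; assumes you hold a Tk root alongside the `filedialog` import from the question):
```
import tkinter
from tkinter import filedialog

root = tkinter.Tk()
filenames = filedialog.askopenfilename(multiple=1)
if isinstance(filenames, str):
    # a raw Tcl list string came back; let Tcl itself split it
    filenames = root.tk.splitlist(filenames)
```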
|
This fix works for me:
```
if sys.hexversion >= 0x030000F0:
import tkinter.filedialog as filedialog
string_type = str
else:
import tkFileDialog as filedialog
string_type = basestring
options = {}
options['filetypes'] = [('vnote files', '.vnt') ,('all files', '.*')]
options['multiple'] = 1
filenames = filedialog.askopenfilename(**options)
if isinstance(filenames, string_type):
# tkinter is not converting the TCL list into a python list...
# see http://stackoverflow.com/questions/9227859/
#
# based on suggestion by Cameron Laird in http://bytes.com/topic/python/answers/536853-tcl-list-python-list
if sys.hexversion >= 0x030000F0:
import tkinter
else:
import Tkinter as tkinter
tk_eval = tkinter.Tk().tk.eval
tcl_list_length = int(tk_eval("set tcl_list {%s}; llength $tcl_list" % filenames))
filenames = [] # change to a list
for i in range(tcl_list_length):
filenames.append(tk_eval("lindex $tcl_list %d" % i))
return filenames
```
| 5,552
|
28,515,972
|
Currently I am installing psycopg2 for working with Python within Eclipse.
I am finding a lot of problems:
1. The first problem: `sudo pip3.4 install psycopg2` is not working, and it shows the following message:
>
> Error: pg\_config executable not found.
>
>
>
FIXED WITH: `export PATH=/Library/PostgreSQL/9.4/bin/:"$PATH"`
2. When I import psycopg2 in my project I obtain:
>
> ImportError:
> dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/\_psycopg.so
> Library libssl.1.0.0.dylib
> Library libcrypto.1.0.0.dylib
>
>
>
FIXED WITH:
`sudo ln -s /Library/PostgreSQL/9.4/lib/libssl.1.0.0.dylib /usr/lib
sudo ln -s /Library/PostgreSQL/9.4/lib/libcrypto.1.0.0.dylib /usr/lib`
3. Now I am obtaining:
>
> ImportError:
> dlopen(/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/\_psycopg.so,
> 2): Symbol not found: \_lo\_lseek64 Referenced from:
> /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/\_psycopg.so
> Expected in: /usr/lib/libpq.5.dylib in
> /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/psycopg2/\_psycopg.so
>
>
>
Can you help me?
|
2015/02/14
|
[
"https://Stackoverflow.com/questions/28515972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3964833/"
] |
Here's a fix that worked for me on El Capitan that doesn't require restarting to work around the OS X El Capitan System Integrity Protection (SIP):
```
brew unlink postgresql && brew link postgresql
brew link --overwrite postgresql
```
[H/T Farhan Ahmad](http://www.thebitguru.com/blog/view/432-psycopg2%20on%20El%20Capitan)
|
Well, I'd like to give my solution. The problem is related to the C standard version used to compile, so I just typed:
```
CFLAGS='-std=c99' pip install psycopg2==2.6.1
```
| 5,557
|
43,424,895
|
I have created a virtualenv using the following command:
`python3 -m venv --without-pip ./env_name`
I am now wondering if it is possible to add PIP to it manually.
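For example, is running the bundled `ensurepip` module with the venv's own interpreter the right approach (a guess on my part; I understand `ensurepip` may be stripped on some distributions)?
```
./env_name/bin/python -m ensurepip --upgrade
```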
|
2017/04/15
|
[
"https://Stackoverflow.com/questions/43424895",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5969463/"
] |
Once you opened the file, use this one-liner using `split` as you mentionned and nested list comprehension:
```
with open(f, encoding="UTF-8") as file: # safer way to open the file (and close it automatically on block exit)
result = [[int(x) for x in l.split()] for l in file]
```
* the inner listcomp splits & converts each line to integers (making an array of integers)
* the outer listcomp just iterates on the lines of the file
note that it will fail if there is anything other than integers in your file.
(as a side note, `file` is a built-in in python 2, but not anymore in python 3, however I usually refrain from using it)
|
You can do it like this:
```
[map(int,i.split()) for i in filter(None,open('abc.txt').read().split('\n'))]
```
Line by line execution for more information
```
In [75]: print open('abc.txt').read()
3 2 7 4
1 8 9 3
6 5 4 1
1 0 8 7
```
`split` with newline.
```
In [76]: print open('abc.txt').read().split('\n')
['3 2 7 4', '', '1 8 9 3', '', '6 5 4 1', '', '1 0 8 7', '']
```
Remove the unnecessary null string.
```
In [77]: print filter(None,open('abc.txt').read().split('\n'))
['3 2 7 4', '1 8 9 3', '6 5 4 1', '1 0 8 7']
```
`split` with spaces
```
In [78]: print [i.split() for i in filter(None,open('abc.txt').read().split('\n'))]
[['3', '2', '7', '4'], ['1', '8', '9', '3'], ['6', '5', '4', '1'], ['1', '0', '8', '7']]
```
convert the element to `int`
```
In [79]: print [map(int,i.split()) for i in filter(None,open('abc.txt').read().split('\n'))]
[[3, 2, 7, 4], [1, 8, 9, 3], [6, 5, 4, 1], [1, 0, 8, 7]]
```
| 5,567
|
22,845,913
|
I am trying to implement a function to generate the java hashCode equivalent in node.js and python, to implement redis sharding. I am following the really good blog at the link below to achieve this:
<http://mechanics.flite.com/blog/2013/06/27/sharding-redis/>
But I am stuck at the difference in hashCode when the string contains some characters which are not ascii, as in the example below. For regular strings I could get both node.js and python to give me the same hash code.
Here is the code I am using to generate this:
--Python
```
import ctypes

def _java_hashcode(s):
    h = 0
    for char in s:
        h = 31 * h + ord(char)  # same recurrence Java uses
    return ctypes.c_int32(h).value  # truncate to a signed 32-bit int
```
--Node as per above blog
```
String.prototype.hashCode = function() {
for(var ret = 0, i = 0, len = this.length; i < len; i++) {
ret = (31 * ret + this.charCodeAt(i)) << 0;
}
return ret;
};
```
--Python output
```
For string '者:s��2�*�=x�' hash is = 2014651066
For string '359196048149234' hash is = 1145341990
```
--Node output
```
For string '者:s��2�*�=x�' hash is = 150370768
For string '359196048149234' hash is = 1145341990
```
Please guide me: where am I going wrong? Do I need to set some kind of encoding in the python and node programs? I tried a few, but my program breaks in python.
|
2014/04/03
|
[
"https://Stackoverflow.com/questions/22845913",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3230928/"
] |
```py
def java_string_hashcode(s):
"""Mimic Java's hashCode in python 2"""
try:
s = unicode(s)
except:
try:
s = unicode(s.decode('utf8'))
except:
raise Exception("Please enter a unicode type string or utf8 bytestring.")
h = 0
for c in s:
h = int((((31 * h + ord(c)) ^ 0x80000000) & 0xFFFFFFFF) - 0x80000000)
return h
```
This is how you should do it in python 2.
The problem is twofold:
* You should be using the unicode type, and make sure that it is so.
* After every step you need to prevent python from auto-converting to the long type by using bitwise operations to get the correct int type for the following step. (Swapping the sign bit, masking to 32 bits, then subtracting the amount of the sign bit will give us a negative int if the sign bit is present and a positive int when the sign bit is not present. This mimics the int behavior in Java.)
Also, as in the other answer, for hard-coded non-ascii characters, please save your source file as utf8 and at the top of the file write:
```py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
```
And make sure if you receive user input that you handle them as unicode type and not string type. (not a problem for python 3)
|
Python 2 will assume ASCII encoding unless told otherwise. Since [PEP 0263](http://legacy.python.org/dev/peps/pep-0263/), you can specify utf-8 encoded strings with the following at the top of the file.
```
#!/usr/bin/python
# -*- coding: utf-8 -*-
```
| 5,572
|
12,311,226
|
I am new to python and am having difficulties with an assignment for a class.
Here's my code:
```
print ('Plants for each semicircle garden: ',round(semiPlants,0))
```
Here's what gets printed:
```
('Plants for each semicircle garden:', 50.0)
```
As you can see, I am getting the parentheses and apostrophes, which I do not want shown.
|
2012/09/07
|
[
"https://Stackoverflow.com/questions/12311226",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1355158/"
] |
You're clearly using python2.x when you think you're using python3.x. In python 2.x, the stuff in the parenthesis is being interpreted as a `tuple`.
One fix is to use string formatting to do this:
```
print ( 'Plants for each semicircle garden: {0}'.format(round(semiPlants,0)))
```
which will work with python2.6 and onward (parentheses around a single argument aren't interpreted as a `tuple`; to get a 1-tuple, you need to write `(some_object,)`).
|
You've tagged this question Python-3.x, but it looks like you are actually running your code with Python 2.
To see what version you're using, run `python -V`.
| 5,573
|
57,169,454
|
My dataset is a set of 2 columns with Spanish and English sentences. I created a training dataset using the Dataset API using the below code:
```
train_examples = tf.data.experimental.CsvDataset("./Data/train.csv", [tf.string, tf.string])
val_examples = tf.data.experimental.CsvDataset("./Data/validation.csv", [tf.string, tf.string])
```
##Create a custom subwords tokenizer from the training dataset.
```
tokenizer_en = tfds.features.text.SubwordTextEncoder.build_from_corpus(
(en.numpy() for pt, en in train_examples), target_vocab_size=2**13)
tokenizer_pt = tfds.features.text.SubwordTextEncoder.build_from_corpus(
(pt.numpy() for pt, en in train_examples), target_vocab_size=2**13)
```
I am getting the following error:
```
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd5 in position 30: invalid continuation byte
```
Traceback:
```
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-27-c90f5c60daf2> in <module>
1 tokenizer_en = tfds.features.text.SubwordTextEncoder.build_from_corpus(
----> 2 (en.numpy() for pt, en in train_examples), target_vocab_size=2**13)
3
4 tokenizer_pt = tfds.features.text.SubwordTextEncoder.build_from_corpus(
5 (pt.numpy() for pt, en in train_examples), target_vocab_size=2**13)
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_datasets/core/features/text/subword_text_encoder.py in build_from_corpus(cls, corpus_generator, target_vocab_size, max_subword_length, max_corpus_chars, reserved_tokens)
291 generator=corpus_generator,
292 max_chars=max_corpus_chars,
--> 293 reserved_tokens=reserved_tokens)
294
295 # Binary search on the minimum token count to build a vocabulary with
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow_datasets/core/features/text/subword_text_encoder.py in _token_counts_from_generator(generator, max_chars, reserved_tokens)
394 token_counts = collections.defaultdict(int)
395 for s in generator:
--> 396 s = tf.compat.as_text(s)
397 if max_chars and (num_chars + len(s)) >= max_chars:
398 s = s[:(max_chars - num_chars)]
~/venv/lib/python3.7/site-packages/tensorflow/python/util/compat.py in as_text(bytes_or_text, encoding)
85 return bytes_or_text
86 elif isinstance(bytes_or_text, bytes):
---> 87 return bytes_or_text.decode(encoding)
88 else:
89 raise TypeError('Expected binary or unicode string, got %r' % bytes_or_text)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd5 in position 30: invalid continuation byte
```
|
2019/07/23
|
[
"https://Stackoverflow.com/questions/57169454",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8382950/"
] |
You can perform ordering on the job\_set queryset, for example. Is something like this what you are looking for?
```
for worker in Worker.objects.all():
latest_job = worker.job_set.latest('start_date')
print(worker.name, latest_job.location, latest_job.start_date)
```
|
You can use the `last` method on the filtered and ordered (by `start_date`, in ascending order) queryset of a worker.
For example, if the name of the worker is `foobar`, you can do:
```
Job.objects.filter(worker__name='foobar').order_by('start_date').last()
```
this will give you the last `Job` (based on `start_date`) of the worker named `foobar`.
FWIW you can also get the `first` element if you're sorting in the descending order of `start_date`:
```
Job.objects.filter(worker__name='foobar').order_by('-start_date').first()
```
| 5,575
|
1,043,735
|
I am making a little webgame that has tasks and solutions; the solutions are solved by entering a code given to the user after completion of a task. To have some security (against cheating) I don't want to store the codes generated by the game in plain text. But since I need to be able to give a player the code when he has accomplished the task, I can't hash it, since then I can't retrieve it.
So what is the most secure way to encrypt/decrypt something using python?
|
2009/06/25
|
[
"https://Stackoverflow.com/questions/1043735",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/42546/"
] |
The most secure encryption is no encryption. Passwords should be reduced to a hash. This is a one-way transformation, making the password (almost) unrecoverable.
When giving someone a code, you can do the following to be *actually* secure.
(1) generate some random string.
(2) give them the string.
(3) save the hash of the string you generated.
Once.
If they "forget" the code, you have to (1) be sure they're authorized to be given the code, then (2) do the process again (generate a new code, give it to them, save the hash.)
|
If it's a web game, can't you store the codes server side and send them to the client when he completed a task? What's the architecture of your game?
As for encryption, maybe try something like [pyDes](http://twhiteman.netfirms.com/des.html)?
| 5,577
|
24,497,121
|
I ask because the examples in the Facebook Ads API (<https://developers.facebook.com/docs/reference/ads-api/adimage/#create>) for creating an Ad Image all use curl, but I want to do it with python requests. Or if someone can answer the more specific question of how to create an Ad Image on the Facebook Ads API from python, that'd be great as well. You can assume I have the location on disk of the image file to upload. Should be a simple POST request to the endpoint /act\_[account\_id]/adimages, right?
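Something like this is what I have in mind, translated from the curl examples (a sketch; the Graph host and field names are my assumptions, and `ACCESS_TOKEN`/`ACCOUNT_ID` are placeholders):
```
import requests

ACCESS_TOKEN = "..."  # placeholder
ACCOUNT_ID = "..."    # placeholder

with open("/path/to/image.png", "rb") as f:
    resp = requests.post(
        "https://graph.facebook.com/act_%s/adimages" % ACCOUNT_ID,
        params={"access_token": ACCESS_TOKEN},
        files={"image.png": f},  # multipart form upload, like curl -F
    )
print(resp.json())
```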
|
2014/06/30
|
[
"https://Stackoverflow.com/questions/24497121",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3391108/"
] |
Try `Holders.grailsApplication.config`:
```
@TestMixin(GrailsUnitTestMixin)
class TicketRequestEmailInfoSpec extends Specification {
def setup() {
Holders.grailsApplication.config.acme.purchase.trsUrlBase = "http://localhost:8082/purchase/order/"
    }
}
```
|
Didn't it work to inject `grailsApplication` into the test as `def grailsApplication`?
Also you can import it as `import static grails.util.Holders.config as grailsConfig`.
And then use it as
```
@TestMixin(GrailsUnitTestMixin)
class TicketRequestEmailInfoSpec extends Specification {
def setup() {
grailsConfig.acme.purchase.trsUrlBase = "http://localhost:8082/purchase/order/"
    }
}
```
| 5,583
|
55,582,117
|
GIMP has a convenient function that allows you to convert an arbitrary color to an alpha channel.
Essentially all pixels become transparent relative to how far away from the chosen color they are.
I want to replicate this functionality with opencv.
I tried iterating through the image:
```
for x in range(rows):
for y in range(cols):
mask_img[y, x][3] = cv2.norm(img[y, x] - (255, 255, 255, 255))
```
But this is prohibitively expensive; it takes about 10 times longer to do that iteration than it takes to simply set the field to 0 (an hour vs 6 minutes).
This seems more a python problem than an algorithmic problem. I have done similar things in C++ and it's not as bad in terms of performance.
Does anyone have suggestions on achieving this?
|
2019/04/08
|
[
"https://Stackoverflow.com/questions/55582117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6202327/"
] |
Here is my attempt only using `numpy` matrix operations.
My input image `colortrans.png` looks like this:
[](https://i.stack.imgur.com/l1ZPc.png)
I want to make the diagonal purple part `(128, 0, 128)` transparent with some tolerance `+/- (25, 0, 25)` to the left and right, resulting in some transparency gradient.
Here comes the code:
```py
import cv2
import numpy as np
# Input image
input = cv2.imread('images/colortrans.png', cv2.IMREAD_COLOR)
# Convert to RGB with alpha channel
output = cv2.cvtColor(input, cv2.COLOR_BGR2RGBA)
# Color to make transparent
col = (128, 0, 128)
# Color tolerance
tol = (25, 0, 25)
# Temporary array (subtract color)
temp = np.subtract(input, col)
# Tolerance mask
mask = (np.abs(temp) <= tol)
mask = (mask[:, :, 0] & mask[:, :, 1] & mask[:, :, 2])
# Generate alpha channel
temp[temp < 0] = 0 # Remove negative values
alpha = (temp[:, :, 0] + temp[:, :, 1] + temp[:, :, 2]) / 3 # Generate mean gradient over all channels
alpha[mask] = alpha[mask] / np.max(alpha[mask]) * 255 # Gradual transparency within tolerance mask
alpha[~mask] = 255 # No transparency outside tolerance mask
# Set alpha channel in output
output[:, :, 3] = alpha
# Output images
cv2.imwrite('images/colortrans_alpha.png', alpha)
cv2.imwrite('images/colortrans_output.png', output)
```
The resulting alpha channel `colortrans_alpha.png` looks like this:
[](https://i.stack.imgur.com/K4EHh.png)
And, the final output image `colortrans_output.png` looks like this:
[](https://i.stack.imgur.com/k9AAN.png)
Is that what you wanted to achieve?
|
I've done a project that converted all pixels that are close to white into transparent pixels using the `PIL` (python image library) module. I'm not sure how to implement your algorithm for "relative to how far away from chosen color they are", but my code looks like:
```
from PIL import Image
planeIm = Image.open('InputImage.png')
planeIm = planeIm.convert('RGBA')
datas = planeIm.getdata()
newData = []
for item in datas:
if item[0] > 240 and item[1] > 240 and item[2] > 240:
newData.append((255, 255, 255, 0)) # transparent pixel
else:
newData.append(item) # unedited pixel
planeIm.putdata(newData)
planeIm.save('output.png', "PNG")
```
This goes through a 1920 X 1080 image for me in 1.605 seconds, so maybe if you implement your logic into this you will see the speed improvements you want?
It might be even faster if `newData` is initialized instead of being `.append()`ed every time too! Something like:
```
planeIm = Image.open('EGGW spider.png')
planeIm = planeIm.convert('RGBA')
datas = planeIm.getdata()
newData = [(255, 255, 255, 0)] * len(datas)
for i in range(len(datas)):
if datas[i][0] > 240 and datas[i][1] > 240 and datas[i][2] > 240:
pass # we already have (255, 255, 255, 0) there
else:
newData[i] = datas[i]
planeIm.putdata(newData)
planeIm.save('output.png', "PNG")
```
Although for me this second approach runs at 2.067 seconds...
multithreading
==============
An example of threading to calculate a different image would look like:
```
from PIL import Image
from threading import Thread
from queue import Queue
import time
start = time.time()
q = Queue()
planeIm = Image.open('InputImage.png')
planeIm = planeIm.convert('RGBA')
datas = planeIm.getdata()
new_data = [0] * len(datas)
print('putting image into queue')
for count, item in enumerate(datas):
q.put((count, item))
def worker_function():
while True:
# print("Items in queue: {}".format(q.qsize()))
index, pixel = q.get()
if pixel[0] > 240 and pixel[1] > 240 and pixel[2] > 240:
out_pixel = (0, 0, 0, 0)
else:
out_pixel = pixel
new_data[index] = out_pixel
q.task_done()
print('starting workers')
worker_count = 100
for i in range(worker_count):
t = Thread(target=worker_function)
t.daemon = True
t.start()
print('main thread waiting')
q.join()
print('Queue has been joined')
planeIm.putdata(new_data)
planeIm.save('output.png', "PNG")
end = time.time()
elapsed = end - start
print('{:3.3} seconds elapsed'.format(elapsed))
```
Which for me now takes 58.1 seconds! A terrible speed difference! I would attribute this to:
* Having to iterate each pixel twice, once to put it into a queue and once to process it and write it to the `new_data` list.
* The overhead needed to create threads. Each new thread will take a few ms to create, so making a large amount (100 in this case) can add up.
* A simple algorithm was used to modify the pixels, threading would shine when large amounts of computation are required on each input (more like your case)
* Threading doesn't utilize multiple cores, you need multi*processing* to get that -> my task manager says I was only using 10% of my CPU and it idles at 1-2% already...
| 5,584
|
11,483,366
|
>
> **Possible Duplicate:**
>
> [Making a method private in a python subclass](https://stackoverflow.com/questions/451963/making-a-method-private-in-a-python-subclass)
>
> [Private Variables and Methods in Python](https://stackoverflow.com/questions/3385317/private-variables-and-methods-in-python)
>
>
>
How can I define a method in a python class that is protected, so that only subclasses can see it?
This is my code:
```
class BaseType(Model):
def __init__(self):
Model.__init__(self, self.__defaults())
def __defaults(self):
return {'name': {},
'readonly': {},
'constraints': {'value': UniqueMap()},
'cType': {}
}
cType = property(lambda self: self.getAttribute("cType"), lambda self, data: self.setAttribute('cType', data))
name = property(lambda self: self.getAttribute("name"), lambda self, data: self.setAttribute('name', data))
readonly = property(lambda self: self.getAttribute("readonly"),
lambda self, data: self.setAttribute('readonly', data))
constraints = property(lambda self: self.getAttribute("constraints"))
def getJsCode(self):
pass
def getCsCode(self):
pass
def generateCsCode(self, template=None, constraintMap=None, **kwargs):
if not template:
template = self.csTemplate
if not constraintMap: constraintMap = {}
atts = ""
constraintMap.update(constraintMap)
for element in self.getNoneEmptyAttributes():
if not AbstractType.constraintMap.has_key(element[0].lower()):
continue
attTemplate = Template(AbstractType.constraintMap[element[0].lower()]['cs'])
attValue = str(element[1]['value'])
atts += "%s " % attTemplate.substitute({'value': attValue})
kwargs.update(dict(attributes=atts))
return template.substitute(kwargs)
class MainClass(BaseType, Model):
def __init__(self):
#Only Model will initialize
Model.__init__(self, self.__defaults())
BaseType.__init__(self)
def __defaults(self):
return {'name': {},
'fields': {'value': UniqueMap()},
'innerClass': {'value': UniqueMap()},
'types': {}
}
fields = property(lambda self: self.getAttribute("fields"))
innerClass = property(lambda self: self.getAttribute("innerClass"))
types = property(lambda self: self.getAttribute("types"))
@staticmethod
def isType(iType):
# return type(widget) in WidgetSelector.widgets.itervalues()
return isinstance(iType, AbstractType)
def addType(self, type):
if not MainClass.isType(type):
raise Exception, "Unknown widget type %s" % type
self.types[type.name] = type
```
I want only subclasses of `BaseType` to be able to see the `generateCsCode` method of `BaseType`.
|
2012/07/14
|
[
"https://Stackoverflow.com/questions/11483366",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1254709/"
] |
Python does not support access protection as C++/Java/C# does. Everything is public. The motto is, "We're all adults here." Document your classes, and insist that your collaborators read and follow the documentation.
The culture in Python is that names starting with underscores mean, "don't use these unless you really know you should." You might choose to begin your "protected" methods with underscores. But keep in mind, this is just a convention, it doesn't change how the method can be accessed.
Names beginning with double underscores (`__name`) are mangled, so that inheritance hierarchies can be built without fear of name collisions. Some people use these for "private" methods, but again, it doesn't change how the method can be accessed.
The best strategy is to get used to a model where all the code in a single process has to be written to get along.
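To make the conventions concrete, here is a small illustration (nothing in it is enforced by the interpreter):
```
class BaseType(object):
    def _generate_cs_code(self):  # single underscore: "internal, please don't call"
        return "..."

    def __defaults(self):         # double underscore: mangled to _BaseType__defaults
        return {}

b = BaseType()
b._generate_cs_code()    # still works; the underscore is only a convention
b._BaseType__defaults()  # mangling renames the method, it doesn't hide it
```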
|
You can't. Python intentionally does not support access control. By convention, methods starting with an underscore are private, and you should clearly state in the documentation who is supposed to use the method.
| 5,589
|
30,061,620
|
I am calculating Euclidean Distance with python code below:
```
def getNeighbors(trainingSet, testInstance, k, labels):
distances = []
for x in range(len(trainingSet)):
dist = math.sqrt(((testInstance[0] - trainingSet[x][0]) ** 2) + ((testInstance[1] - trainingSet[x][1]) ** 2))
distances.append([dist, labels[x]])
distances = np.array(distances)
return distances
```
For calculating the distance of a given point to 10 other points, it's fine. But when I calculate the distance of a point to 18563 other points, the computer hangs and doesn't respond for around 3 hours.
**How can I make the calculation for** 18563 **points faster?**
|
2015/05/05
|
[
"https://Stackoverflow.com/questions/30061620",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2831683/"
] |
You can speed this up by converting to NumPy first, then using vector operations, instead of doing the work in a loop, then converting to NumPy. Something like this:
```
trainingArray = np.array(trainingSet)
distances = np.sqrt((testInstance[0] - trainingArray[:, 0]) ** 2 +
                    (testInstance[1] - trainingArray[:, 1]) ** 2)
```
(That's obviously untested, because without enough context to know what's actually in those variables I had to guess, but it will be something close to that.)
There are other things you can do to squeeze out a few extra %—try replacing the `** 2` with self-multiplication or the `sqrt` with `** .5`, or (probably best) replacing the whole thing with `np.hypot`. (If you don't know how to use [`timeit`](https://docs.python.org/3/library/timeit.html)—or, even better, IPython and it's `%timeit` magic—now is a great time to learn.)
But ultimately, this is just going to give you a constant-multiplier speedup of about an order of magnitude. Maybe it takes 15 minutes instead of 3 hours. That's nice, but… why is it taking 3 hours in the first place? What you're doing here should be taking on the order of seconds, or even less. There's clearly something much bigger wrong here, like maybe you're calling this function N\*\*2 times when you think you're only calling it once. And you really need to fix that part.
Of course it's still worth doing this. First, element-wise operations are simpler and more readable than loops, and harder to get wrong. Second, even if you reduce the whole program to 3.8 seconds, you'll be happy for the order-of-magnitude speedup to 0.38 seconds, right?
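For instance, a quick (hypothetical) timing harness for comparing the variants:
```
import timeit

setup = ("import numpy as np;"
         "dx = np.random.rand(18563); dy = np.random.rand(18563)")
print(timeit.timeit('np.sqrt(dx**2 + dy**2)', setup=setup, number=1000))
print(timeit.timeit('np.hypot(dx, dy)', setup=setup, number=1000))
```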
|
Writing own sqrt calculator has its own risks, hypot is very safe for resistance to overflow and underflow
```py
x_y = np.array(trainingSet)
x = x_y[:, 0]  # all x coordinates
y = x_y[:, 1]  # all y coordinates
distances = np.hypot(
np.subtract.outer(x, x),
np.subtract.outer(y, y)
)
```
Speed wise they are same
```
%%timeit
np.hypot(i, j)
# 1.29 µs ± 13.2 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
```
%%timeit
np.sqrt(i**2+j**2)
# 1.3 µs ± 9.87 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```
Underflow
---------
```py
i, j = 1e-200, 1e-200
np.sqrt(i**2+j**2)
# 0.0
```
Overflow
--------
```py
i, j = 1e+200, 1e+200
np.sqrt(i**2+j**2)
# inf
```
No Underflow
------------
```py
i, j = 1e-200, 1e-200
np.hypot(i, j)
# 1.414213562373095e-200
```
No Overflow
-----------
```py
i, j = 1e+200, 1e+200
np.hypot(i, j)
# 1.414213562373095e+200
```
[Refer](https://stackoverflow.com/a/69233365/7035448)
| 5,590
|
45,495,753
|
After many different ways of trying to install jupyter, it does not seem to install correctly.
Maybe it's MacOS-related, based on how many MacOS system python issues I've been having recently.
`pip install jupyter --user`
Seems to install correctly
But then jupyter is not found
`where jupyter`
`jupyter not found`
Not found
Trying another install method found on SO
`pip install --upgrade notebook`
Seems to install correctly
jupyter is still not found
`where pip` `/usr/local/bin/pip`
What can I do to get the `jupyter notebook` command working on the command line, as in the first step here: <https://jupyter.readthedocs.io/en/latest/running.html#running>
|
2017/08/03
|
[
"https://Stackoverflow.com/questions/45495753",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1018733/"
] |
Short answer: Use `python -m notebook`
After updating to OS Catalina, I installed a brewed python: `brew install python`.
It symlinks `python3`, but not the `python` command, so I added the following to my `$PATH` variable:
```
/usr/local/opt/python/libexec/bin
```
to make the brew python the default python command (don't use system python, and now python2.7 is deprecated). `python -m pip install jupyter` works, and I can find the jupyter files in `~/Library/Python/3.7/bin/`, but the tutorial command of `jupyter notebook` doesn't work. Instead I just run `python -m notebook`.
|
You need to add the local python install directory to your path. Apparently this is not done by default on MacOS.
Try:
```
export PATH="$HOME/Library/Python/<version number>/bin:$PATH"
```
and/or add it to your `~/.bashrc`.
| 5,591
|
4,710,037
|
I'm having trouble with SWIG, shared pointers, and inheritance.
I am creating various c++ classes which inherit from one another, using
Boost shared pointers to refer to them, and then wrapping these shared
pointers with SWIG to create the python classes.
My problem is the following:
* B is a subclass of A
* sA is a shared pointer to A
* sB is a shared pointer to B
* f(sA) is a function expecting a shared pointer to A
* If I pass sB to f() then an error is raised.
* This error only occurs at the python level.
* At the C++ level I can pass sB to f() without a problem.
I have boost 1.40 and swig 1.3.40.
Below are the contents of 5 files which will reproduce the problem
with:
```
python setup.py build_ext --inplace
python test.py
```
**swig\_shared\_ptr.h**
```
#ifndef INCLUDED_SWIG_SHARED_PTR_H
#define INCLUDED_SWIG_SHARED_PTR_H
#include <boost/shared_ptr.hpp>
class Base {};
class Derived : public Base {};
typedef boost::shared_ptr<Base> base_sptr;
typedef boost::shared_ptr<Derived> derived_sptr;
void do_something (base_sptr bs);
base_sptr make_base();
derived_sptr make_derived();
#endif
```
**swig\_shared\_ptr.cc**
```
#include <iostream>
#include "swig_shared_ptr.h"
void do_something (base_sptr bs)
{
std::cout << "Doing something." << std::endl;
}
base_sptr make_base() { return base_sptr(new Base ()); };
derived_sptr make_derived() { return derived_sptr(new Derived ()); };
```
**swig\_shared\_ptr.i**
```
%module(docstring="
Example module showing problems I am having with SWIG, shared pointers
and inheritance.
") swig_shared_ptr
%{
#include "swig_shared_ptr.h"
%}
%include <swig_shared_ptr.h>
%include <boost_shared_ptr.i>
%template(base_sptr) boost::shared_ptr<Base>;
%template(derived_sptr) boost::shared_ptr<Derived>;
```
**setup.py**
```
"""
setup.py file for swig_shared_ptr
"""
from distutils.core import setup, Extension
swig_shared_ptr_module = Extension('_swig_shared_ptr',
include_dirs = ['/usr/include/boost'],
sources=['swig_shared_ptr.i', 'swig_shared_ptr.cc'],
)
setup (name = 'swig_shared_ptr',
version = '0.1',
author = "Ben",
description = """Example showing problems I am having with SWIG, shared
pointers and inheritance.""",
ext_modules = [swig_shared_ptr_module],
py_modules = ["swig_shared_ptr"],
)
```
**test.py**
```
import swig_shared_ptr as ssp
bs = ssp.make_base()
dr = ssp.make_derived()
# Works fine.
ssp.do_something(bs)
# Fails with "TypeError: in method 'do_something', argument 1 of type 'base_sptr'"
ssp.do_something(dr)
```
|
2011/01/17
|
[
"https://Stackoverflow.com/questions/4710037",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1229275/"
] |
The following change appears to solve the problem.
In **swig\_shared\_ptr.i** the two lines:
```
%template(base_sptr) boost::shared_ptr<Base>;
%template(derived_sptr) boost::shared_ptr<Derived>;
```
are moved so that they are above the line
```
%include <swig_shared_ptr.h>
```
and are then replaced (in SWIG 1.3) by:
```
SWIG_SHARED_PTR(Base, Base)
SWIG_SHARED_PTR_DERIVED(Derived, Base, Derived)
```
or (in SWIG 2.0) by:
```
%shared_ptr(Base)
%shared_ptr(Derived)
```
|
SWIG doesn't know anything about the `boost::shared_ptr<T>` class. It therefore can't tell that `derived_sptr` can be "cast" (which is, I believe, implemented with some crazy constructors and template metaprogramming) to `base_sptr`. Because SWIG requires fairly simple class definitions (or inclusion of simple files with `%include`), you won't be able to accurately declare the `shared_ptr` class, because Boost is *incredibly* non-simple with its compiler compensation and template tricks.
As to a solution: is it absolutely necessary to hand out shared pointers? SWIG's C++ wrappers basically function as shared pointers. Boost and SWIG are very, very difficult to get to work together.
| 5,597
|
2,400,605
|
I'm a Delphi developer, and I would like to build a few web applications. I know about IntraWeb, but I don't think it's a real tool for web development; maybe just for intranet applications.
So I'm considering PHP, Python, or Ruby. I prefer Python because it has better syntax than the others (it feels closer to Delphi). I also want to deploy the application to shared hosting, especially Linux.
So, as a Delphi developer, what would you choose to develop a web application?
|
2010/03/08
|
[
"https://Stackoverflow.com/questions/2400605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/235131/"
] |
PHP is the best to start, but as experienced programmer you may want to look at Python, because PHP is a C style language. Python would be easier after Pascal, I think.
Take a look at examples:
On PHP: <http://en.wikipedia.org/wiki/Php#Syntax>
On Python: <http://en.wikipedia.org/wiki/Python_(programming_language)#Syntax_and_semantics>
Note that Ruby and Python are rarely used by themselves in web development. Usually the **Django** and **Ruby on Rails** frameworks are used. In PHP there are several options: you can code from scratch, or use some framework.
I coded in PHP for about five years and have now started to learn Django (a Python-based framework), and I think it's the best thing there is. Take a look: <http://djangoproject.com/>
|
Only good answer - C# ;) Seriously ;)
Why? Anders Hejlsberg. He made it. It is the direct continuation of his work that started with Turbo Pascal and went over to Delphi... then Microsoft hired him and he moved from Pascal to C (core language) and made C#.
Read it up on <http://en.wikipedia.org/wiki/Anders_Hejlsberg>
If you come from Delphi, you will love it ;)
| 5,598
|
31,682,981
|
I am a newbie in Django and I want to write a test for the Django web-poll application (<https://docs.djangoproject.com/en/1.8/intro/tutorial01/>),
but I got this error:
```
devuser@localhost:~/Django-apps/poolApp$ django-admin shell --plain --no-startup
Traceback (most recent call last):
File "/usr/local/bin/django-admin", line 9, in <module>
load_entry_point('Django==1.8.3', 'console_scripts', 'django-admin')()
File "/usr/local/lib/python2.7/dist-packages/Django-1.8.3-py2.7.egg/django/core/management/__init__.py", line 338, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/Django-1.8.3-py2.7.egg/django/core/management/__init__.py", line 330, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/Django-1.8.3-py2.7.egg/django/core/management/base.py", line 405, in run_from_argv
connections.close_all()
File "/usr/local/lib/python2.7/dist-packages/Django-1.8.3-py2.7.egg/django/db/utils.py", line 258, in close_all
for alias in self:
File "/usr/local/lib/python2.7/dist-packages/Django-1.8.3-py2.7.egg/django/db/utils.py", line 252, in __iter__
return iter(self.databases)
File "/usr/local/lib/python2.7/dist-packages/Django-1.8.3-py2.7.egg/django/utils/functional.py", line 60, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/usr/local/lib/python2.7/dist-packages/Django-1.8.3-py2.7.egg/django/db/utils.py", line 151, in databases
self._databases = settings.DATABASES
File "/usr/local/lib/python2.7/dist-packages/Django-1.8.3-py2.7.egg/django/conf/__init__.py", line 48, in __getattr__
self._setup(name)
File "/usr/local/lib/python2.7/dist-packages/Django-1.8.3-py2.7.egg/django/conf/__init__.py", line 42, in _setup
% (desc, ENVIRONMENT_VARIABLE))
django.core.exceptions.ImproperlyConfigured: Requested setting DATABASES, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
```
I would like to know what is the best approach to do it:
define the environment variable DJANGO\_SETTINGS\_MODULE or call settings.configure()
|
2015/07/28
|
[
"https://Stackoverflow.com/questions/31682981",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4450024/"
] |
Use `manage.py`, not `django-admin`.
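For example, from the directory that contains `manage.py` (the flags are carried over from the question; I believe `manage.py shell` accepts them as well):
```
python manage.py shell --plain --no-startup
```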
|
What does work is using
```
django-admin shell --plain --no-startup --pythonpath "." --settings "myproject.settings"
```
while you are in the root of your django app.
However `manage.py shell` (or the amazing `shell_plus` from django\_extensions <https://github.com/django-extensions/django-extensions>) is recommended instead
| 5,608
|
46,034,924
|
When I try to run python manage.py runserver I get this error:
```
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/Users/user/lokvi/lokvi_env/lib/python2.7/site-packages/django/core/management/__init__.py", line 363, in execute_from_command_line
utility.execute()
File "/Users/user/lokvi/lokvi_env/lib/python2.7/site-packages/django/core/management/__init__.py", line 307, in execute
settings.INSTALLED_APPS
File "/Users/user/lokvi/lokvi_env/lib/python2.7/site-packages/django/conf/__init__.py", line 56, in __getattr__
self._setup(name)
File "/Users/user/lokvi/lokvi_env/lib/python2.7/site-packages/django/conf/__init__.py", line 41, in _setup
self._wrapped = Settings(settings_module)
File "/Users/user/lokvi/lokvi_env/lib/python2.7/site-packages/django/conf/__init__.py", line 110, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named settings
```
I have python 2.7 in my virtualenv. I noticed a strange thing in the last lines of my stack trace: the line before the last has a path that goes like `/lokvi_env/lib/python2.7` etc.,
but the last line goes like `System/Library/Frameworks` etc., so it seems like the path has changed from the virtualenv to the system. Is that ok?
|
2017/09/04
|
[
"https://Stackoverflow.com/questions/46034924",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3565829/"
] |
You need to import `settings` module
```
from django.conf import settings
```
|
Oh, it was not a python-path-specific question, sorry. I just needed an `__init__.py` in the settings module inside my project; since there was no settings package, Django tried to find it in the python lib itself and couldn't, I believe.
| 5,609
|
2,134,941
|
For a package of mine, I have a README.rst file that is read into the setup.py's long description like so:
```
readme = open('README.rst', 'r')
README_TEXT = readme.read()
readme.close()
setup(
...
long_description = README_TEXT,
....
)
```
This way I can have the README file show up on my [github page](http://github.com/jasonbaker/envbuilder) every time I commit and on the [pypi page](http://pypi.python.org/pypi/envbuilder/) every time I `python setup.py register`. There's only one problem. I'd like the github page to say something like "This document reflects a pre-release version of envbuilder. For the most recent release, see pypi."
I could just put those lines in README.rst and delete them before I `python setup.py register`, but I know that there's going to be a time that I forget to remove the sentences before I push to pypi.
I'm trying to think of the best way to automate this so I don't have to worry about it. Anyone have any ideas? Is there any setuptools/distutils magic I can do?
|
2010/01/25
|
[
"https://Stackoverflow.com/questions/2134941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2147/"
] |
Another option is to side-step the issue completely by adding a paragraph that works in both environments: "The latest unstable code is on github. The latest stable kits are on pypi."
After all, why assume that pypi people don't want to be pointed to github? This would be more helpful to both audiences, and simplifies your setup.py.
|
You could always do this:
```
GITHUB_ALERT = 'This document reflects a pre-release version...'
readme = open('README.rst', 'r')
README_TEXT = readme.read().replace(GITHUB_ALERT, '')
readme.close()
setup(
...
long_description = README_TEXT,
....
)
```
But then you'd have to keep that `GITHUB_ALERT` string in sync with the actual wording of the `README`. Using a regular expression instead (to, say, match a line beginning with *Note for Github Users:* or something) might give you a little more flexibility.
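A sketch of the regex variant (the marker wording is hypothetical):
```
import re

with open('README.rst', 'r') as readme:
    # drop any line that begins with the Github-only marker
    README_TEXT = re.sub(r'(?m)^Note for Github Users:.*\n?', '', readme.read())
```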
| 5,610
|
64,375,499
|
```
def save_weights(self, filename = "./" + str(timestamp) + "-tfsave"):
### save model weights
saver = tf.train.Saver()
saver.save(self.sess, filename)
print("saved to:",filename)
```
---
```
UnknownError                              Traceback (most recent call last)
~\Anaconda3\envs\tf\lib\site-packages\tensorflow_core\python\client\session.py in _do_call(self, fn, *args)
   1364     try:
-> 1365       return fn(*args)
   1366     except errors.OpError as e:

~\Anaconda3\envs\tf\lib\site-packages\tensorflow_core\python\client\session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1349       return self._call_tf_sessionrun(options, feed_dict, fetch_list,
-> 1350                                       target_list, run_metadata)
   1351

~\Anaconda3\envs\tf\lib\site-packages\tensorflow_core\python\client\session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
   1442                                         fetch_list, target_list,
-> 1443                                         run_metadata)
   1444

UnknownError: Failed to rename: ./2020-10-15_18:28-tfsave.data-00000-of-00001.tempstate9752799594239982307 to: ./2020-10-15_18:28-tfsave.data-00000-of-00001 : The parameter is incorrect.
; Unknown error
	 [[{{node save/SaveV2}}]]
```
|
2020/10/15
|
[
"https://Stackoverflow.com/questions/64375499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12137831/"
] |
You can achieve this without javascript by using a [media-query](https://www.w3schools.com/css/css_rwd_mediaqueries.asp):
```css
#show-nav {
display: none;
background-color: red;
}
/* if screen width is smaller than 720px, #show-nav will be a block */
@media only screen and (max-width: 720px) {
#show-nav {
display: block;
}
}
```
```html
<nav id="show-nav">
Navbar
</nav>
```
|
You could use only css to do so.
DEMO (make it full page)
------------------------
```css
body{
margin: 0;
}
nav{
width: 100vw;
background:yellow;
}
@media only screen and (min-width: 720px){
nav{
background:red;
}
}
```
```html
<nav>
<ul>
<li>Welcome</li>
</ul>
</nav>
```
| 5,613
|
50,539,199
|
So I am trying to find a way to "merge" a dependency list which is in the form of a dictionary in python, and I haven't been able to come up with a solution. So imagine a graph along the lines of this: (all of the lines are downward pointing arrows in this directed graph)
```
1 2 4
\ / / \
3 5 8
\ / \ \
6 7 9
```
this graph would produce a dependency dictionary that looks like this:
```
{3:[1,2], 5:[4], 6:[3,5], 7:[5], 8:[4], 9:[8], 1:[], 2:[], 4:[]}
```
such that keys are nodes in the graph, and their values are the nodes they are dependent on.
I am trying to convert this into a total ancestry list in terms of a tree, so that each node is a key, and its value is a list of ALL nodes that lead to it, not just it's immediate parents. The resulting dictionary would be:
```
{3:[1,2], 5:[4], 6:[3, 5, 1, 2, 4], 7:[5, 4], 8:[4], 9:[8, 4], 1:[], 2:[], 4:[]}
```
Any suggestions on how to solve this? I have been banging my head against it for a while and tried a recursive solution that I haven't been able to get working.
|
2018/05/26
|
[
"https://Stackoverflow.com/questions/50539199",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9849681/"
] |
You can use a chained `dict comprehension` with a `list comprehension` for ancestries up to two levels deep.
```
>>> {k: v + [item for i in v for item in d.get(i, [])] for k,v in d.items()}
{3: [1, 2],
5: [4],
6: [3, 5, 1, 2, 4],
7: [5, 4],
8: [4],
9: [8, 4],
1: [],
2: [],
4: []}
```
---
For unlimited depth, you can use a recursive approach
```
def get_ant(node, d):
if node:
return d.get(node,[]) + [item for x in d.get(node, []) for item in get_ant(x, d) ]
return []
```
Then,
```
>>> get_ant(6, d)
[3, 5, 1, 2, 10, 4]
```
To get all cases:
```
>>> {k: get_ant(k, d) for k in d.keys()}
{3: [1, 2, 10],
5: [4],
6: [3, 5, 1, 2, 10, 4],
7: [5, 4],
8: [4],
9: [8, 4],
1: [10],
2: [],
4: []}
```
|
Here's a really simple way to do it.
```
In [22]: a
Out[22]: {1: [], 2: [], 3: [1, 2], 4: [], 5: [4], 6: [3, 5], 7: [5], 8: [4], 9: [8]}
In [23]: final = {}
In [24]: for key in a:
...: nodes = set()
...:
...: for val in a[key]:
...: nodes.add(val)
...: if val in a:
...: nodes.update(set(a[val]))
...:
...: final[key] = list(nodes)
In [25]: final
Out[25]:
{1: [],
2: [],
3: [1, 2],
4: [],
5: [4],
6: [3, 1, 2, 5, 4],
7: [5, 4],
8: [4],
9: [8, 4]}
```
| 5,615
|
24,635,064
|
Here is my problem with urllib in python 3.
I wrote a piece of code which works well in Python 2.7 and uses urllib2. It goes to a page on the Internet (which requires authorization) and grabs the info from that page.
The real problem for me is that I can't make my code work in python 3.4, because there is no urllib2 and urllib works differently; even after a few hours of googling and reading I got nothing. So if somebody can help me solve this, I'd really appreciate the help.
Here is my code:
```
request = urllib2.Request('http://mysite/admin/index.cgi?index=127')
base64string = base64.encodestring('%s:%s' % ('login', 'password')).replace('\n', '')
request.add_header("Authorization", "Basic %s" % base64string)
result = urllib2.urlopen(request)
resulttext = result.read()
```
|
2014/07/08
|
[
"https://Stackoverflow.com/questions/24635064",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3816874/"
] |
Thanks to you guys, I finally figured out how it works.
Here is my code:
```
request = urllib.request.Request('http://mysite/admin/index.cgi?index=127')
base64string = base64.b64encode(bytes('%s:%s' % ('login', 'password'),'ascii'))
request.add_header("Authorization", "Basic %s" % base64string.decode('utf-8'))
result = urllib.request.urlopen(request)
resulttext = result.read()
```
After all, there is one more difference with urllib: the `resulttext` variable in my case had the type of `<bytes>` instead of `<str>`, so to do something with text inside it I had to decode it:
```
text = resulttext.decode(encoding='utf-8',errors='ignore')
```
|
What about [urllib.request](https://docs.python.org/dev/library/urllib.request.html) ? It seems it has everything you need.
```
import base64
import urllib.request
request = urllib.request.Request('http://mysite/admin/index.cgi?index=127')
base64string = base64.b64encode(bytes('%s:%s' % ('login', 'password'), 'ascii')).decode('ascii')  # the credentials must actually be base64-encoded
request.add_header("Authorization", "Basic %s" % base64string)
result = urllib.request.urlopen(request)
resulttext = result.read()
```
| 5,616
|
57,073,765
|
I'm writing API results to a CSV file in python 3.7. The problem is that it adds double quotes ("") to each row when it writes to the file.
I'm passing the format as csv to the API call, so that I get results in csv format, and then I'm writing them to a csv file stored at a specific location.
Please suggest if there is a better way to do this.
Here is the sample code:
```
with open(target_file_path, 'w', encoding='utf8') as csvFile:
writer = csv.writer(csvFile, quoting=csv.QUOTE_NONE, escapechar='\"')
for line in rec.split('\r\n'):
writer.writerow([line])
```
When I use `escapechar='\"'`, it adds a quote (`"`) at the end of every column value.
Here are some sample records:
```
2264855868",42.38454",-71.01367",07/15/2019 00:00:00",07/14/2019 20:00:00"
2264855868",42.38454",-71.01367",07/15/2019 01:00:00",07/14/2019 21:00:00"
```
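For reference, since `rec` already contains CSV-formatted text, a minimal sketch of one likely fix (assuming `rec` is a single string with `\r\n` line endings) is to write it out directly instead of re-quoting it through `csv.writer`:
```
# rec already holds CSV text, so write the lines as-is rather than
# re-encoding them through csv.writer (which is what adds the quotes).
with open(target_file_path, 'w', encoding='utf8', newline='') as f:
    for line in rec.split('\r\n'):
        f.write(line + '\n')
```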
|
2019/07/17
|
[
"https://Stackoverflow.com/questions/57073765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8449329/"
] |
This may well be the same as your original code, but use `prop()` instead of `attr()`.
>
> [`attr()`](https://api.jquery.com/attr/) - As of jQuery 1.6, the .attr() method returns undefined for attributes that have not been set. To retrieve and change DOM properties such as the checked, selected, or disabled state of form elements, use the .prop() method.
>
>
>
```js
$('#settingsForm input').on('change', function() {
var len = $('#settingsForm input:checked').length;
if (len === 0) {
$(this).prop('checked', true);
console.log('You must select at least 1 checkbox');
}
});
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<form id="settingsForm">
<input class="form-check-input" type="checkbox" name="emailFrequency_daily">
<label>Daily</label>
<input class="form-check-input" type="checkbox" name="emailFrequency_weekly">
<label>Weekly</label>
<input class="form-check-input" type="checkbox" name="emailFrequency_monthly">
<label>Monthly</label>
</form>
```
|
You can try to bind the change event on the `.form-check-input` class, and inside that event you can check whether the checkboxes are "empty" or not.
```js
$('.form-check-input').on("change", function(){
var noChecked = $('.form-check-input:checked').length;
if(noChecked === 0){
console.log("0 checkboxes are checked");
} else {
console.log("checkboxes ok", noChecked);
}
});
```
```html
<form id="settingsForm">
<input class="form-check-input" type="checkbox" name="emailFrequency_daily">
<label>Daily</label>
<input class="form-check-input" type="checkbox" name="emailFrequency_weekly">
<label>Weekly</label>
<input class="form-check-input" type="checkbox" name="emailFrequency_monthly">
<label>Monthly</label>
</form>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
```
| 5,619
|
19,736,625
|
I'm trying to write my first Python script. I want to write a program that will get information out of a website.
I managed to open the website, read all the data, and transform the data from bytes to a string.
```
import urllib.request
response = urllib.request.urlopen('http://www.imdb.com/title/tt0413573/episodes?season=10')
website = response.read()
response.close()
html = website.decode("utf-8")
print(type(html))
print(html)
```
The string is massive; I don't know if I should transform it to a list and iterate over the list or just keep it as a string.
What I would like to do is find every occurrence of the keyword `airdate` and then get the next line in the string.
When I scroll through the string this is the relevant bits:
```
<meta itemprop="episodeNumber" content="10"/>
<div class="airdate">
Nov. 21, 2013
</div>
```
This happens lots of times inside the string. What I'm trying to do is to loop through the string and return this result:
```
"episodeNumber" = some number
"airdate" = what ever date
```
for every time this happens in the string. I tried:
```
keywords = ["airdate","episodeNumber"]
for i in keywords:
if i in html:
print (something)
```
I hope I'm explaining myself in the right way. I will edit the question if needed.
|
2013/11/01
|
[
"https://Stackoverflow.com/questions/19736625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2904861/"
] |
When dealing with structured texts like HTML/XML it is a good idea to use existing tools that leverage this structure. Instead of using regex or searching by hand, this gives a much more reliable and readable solution. In this case, I suggest to install [lxml](http://lxml.de/) to parse the HTML.
Applying this principle to your problem, try the following (I assume that you use Python 3 because you imported urllib.request):
```
import lxml.html as html
import urllib.request
resp = urllib.request.urlopen('http://www.imdb.com/title/tt0413573/episodes?season=10')
fragment = html.fromstring(resp.read())
for info in fragment.find_class('info'):
print('"episodeNumber" = ', info.find('meta').attrib['content'])
print('"airdate" =', info.find_class('airdate')[0].text_content().strip())
```
To make sure that the episode number and airdate are corresponding, I search for the surrounding element (a div with class 'info') and then extract the data you want.
I'm sure the code can be made prettier with a fancier selection of elements, but this should get you started.
---
[Added more information on the solution concerning the structure in the HTML.]
The string containing the data of one episode looks as follows:
```
<div class="info" itemprop="episodes" itemscope itemtype="...">
<meta itemprop="episodeNumber" content="1"/>
<div class="airdate">Sep. 26, 2013</div> <!-- already stripped whitespace -->
<strong>
<a href="/title/tt2911802/" title="Seal Our Fate" itemprop="name">...</a>
</strong>
<div class="item_description" itemprop="description">...</div>
<div class="popoverContainer"></div>
<div class="popoverContainer"></div>
</div>
```
You first select the div containing all data of one episode by its class 'info'. The first information you want is in a child of the div.info element, the meta element, stored in its property 'content'.
Next, you want the information stored in the div.airdate element, this time it is stored inside the element as text. To get rid of the whitespace around it, I then used the strip() method.
|
Would that work?
```
lines = website.splitlines()
lines.append('')
for index, line in enumerate(lines):
for keyword in ["airdate","episodeNumber"]:
if keyword in line:
print(lines[index + 1])
```
It prints the next line if the keyword is found in the line.
| 5,620
|
61,370,108
|
I have a data input pipeline that has:
* input datapoints of types that are not castable to a `tf.Tensor` (dicts and whatnot)
* preprocessing functions that could not understand tensorflow types and need to work with those datapoints; some of which do data augmentation on the fly
I've been trying to fit this into a `tf.data` pipeline, and I'm stuck on running the preprocessing for multiple datapoints in parallel. So far I've tried this:
* use `Dataset.from_generator(gen)` and do the preprocessing in the generator; this works but it processes each datapoint sequentially, no matter what arrangement of `prefetch` and fake `map` calls I patch on it. Is it impossible to prefetch in parallel?
* encapsulate the preprocessing in a `tf.py_function` so I could `map` it in parallel over my Dataset, but
1. this requires some pretty ugly (de)serialization to fit exotic
types into string tensors,
2. apparently the execution of the `py_function` would be handed over to the (single-process) python interpreter, so I'd be stuck with the python GIL which would not help me much
* I saw that you could do some tricks with `interleave` but haven't found any which does not have issues from the first two ideas.
Am I missing anything here? Am I forced to either modify my preprocessing so that it can run in a graph or is there a way to multiprocess it?
Our previous way of doing this was using `keras.Sequence`, which worked well, but there are just too many people pushing the upgrade to the `tf.data` API. *(hell, even trying keras.Sequence with tf 2.2 yields `WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended.`)*
*Note: I'm using tf 2.2rc3*
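A minimal sketch of the sharded-generator workaround sometimes suggested for this; all names here (`my_items`, `preprocess`, `NUM_SHARDS`) are placeholders, and the generator threads still share the GIL, so this only helps when the per-item work releases it (e.g. heavy NumPy calls):
```
import tensorflow as tf

NUM_SHARDS = 4  # hypothetical shard count

def gen(shard_id):
    # Each shard walks its own slice of the (non-tensor) datapoints.
    for item in my_items[int(shard_id)::NUM_SHARDS]:
        yield preprocess(item)  # plain-Python preprocessing

ds = tf.data.Dataset.range(NUM_SHARDS).interleave(
    lambda i: tf.data.Dataset.from_generator(
        gen, output_types=tf.float32, args=(i,)),
    cycle_length=NUM_SHARDS,
    num_parallel_calls=tf.data.experimental.AUTOTUNE)
```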
|
2020/04/22
|
[
"https://Stackoverflow.com/questions/61370108",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5235369/"
] |
Please read the documentation before posting any questions. Visit <https://react-bootstrap.github.io/components/toasts/>
Change
>
> import { Toast } from 'react-bootstrap';
>
>
>
with
>
> import Toast from 'react-bootstrap/Toast'
>
>
>
|
Change your import to
`import { Toast } from 'react-bootstrap/Toast'`
| 5,622
|
25,056,700
|
I am trying to connect to Cassandra from Python. I installed the client with `pip install pycassa`. When I try to connect to Cassandra, I get the following exception:
```
from pycassa.pool import ConnectionPool
pool = ConnectionPool('Keyspace1')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/site-packages/pycassa/pool.py", line 382, in __init__
self.fill()
File "/usr/lib/python2.7/site-packages/pycassa/pool.py", line 442, in fill
conn = self._create_connection()
File "/usr/lib/python2.7/site-packages/pycassa/pool.py", line 431, in _create_connection
(exc.__class__.__name__, exc))
pycassa.pool.AllServersUnavailable: An attempt was made to connect to each of the servers twice, but none of the attempts succeeded. The last failure was TTransportException: Could not connect to localhost:9160
```
I am using Python 2.7.
What is the problem? Any help would be appreciated.
|
2014/07/31
|
[
"https://Stackoverflow.com/questions/25056700",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3531707/"
] |
Perhaps the best known is the *branchless absolute value*:
```
int m = x >> 31;
int abs = x + m ^ m;
```
Which uses an arithmetic shift to copy the signbit to all bits. Most uses of arithmetic shift that I've encountered were of that form. Of course an arithmetic shift is not *required* for this, you could replace all occurrences of `x >> 31` (where `x` is an `int`) by `-(x >>> 31)`.
The value 31 comes from the size of `int` in bits, which is 32 by definition in Java. So shifting right by 31 shifts out all bits except the signbit, which (since it's an arithmetic shift) is copied to those 31 bits, leaving a copy of the signbit in every position.
|
In C when writing device drivers, bit shift operators are used extensively since bits are used as switches that need to be turned on and off. Bit shift allow one to easily and correctly target the right switch.
Many hashing and cryptographic functions make use of bit shifts. Take a look at the [Mersenne Twister](https://en.wikipedia.org/wiki/Mersenne_twister#Pseudocode).
Lastly, it is sometimes useful to use bitfields to contain state information. Bit manipulation functions including bit shift are useful for these things.
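To make the "switches" idea concrete, here is a small sketch (flag names are made up):
```
# Each switch gets its own bit, defined by shifting 1 left.
POWER_ON = 1 << 0  # 0b001
TX_READY = 1 << 1  # 0b010
RX_READY = 1 << 2  # 0b100

status = 0
status |= POWER_ON | TX_READY  # turn two switches on
status &= ~TX_READY            # turn one switch back off
if status & POWER_ON:          # test a switch
    print("device is powered")
```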
| 5,623
|
45,920,527
|
I would like to download a subset of a WAT archive segment from Amazon S3.
**Background:**
Searching the Common Crawl index at <http://index.commoncrawl.org> yields results with information about the location of WARC files on AWS S3. For example, searching for [url=www.celebuzz.com/2017-01-04/\*&output=json](http://index.commoncrawl.org/CC-MAIN-2017-34-index?url=www.celebuzz.com%2F2017-01-04%2F*&output=json) yields JSON-formatted results, one of which is
`{
"urlkey":"com,celebuzz)/2017-01-04/watch-james-corden-george-michael-tribute",
...
"filename":"crawl-data/CC-MAIN-2017-34/segments/1502886104631.25/warc/CC-MAIN-20170818082911-20170818102911-00023.warc.gz",
...
"offset":"504411150",
"length":"14169",
...
}`
The `filename` entry indicates which archive segment contains the WARC file for this particular page. This archive file is huge; but fortunately the entry also contains `offset` and `length` fields, which can be used to request the range of bytes containing the relevant subset of the archive segment (see, e.g., [lines 22-30 in this gist](https://gist.github.com/Smerity/56bc6f21a8adec920ebf)).
**My question:**
Given the location of a WARC file segment, I know how to construct the name of the corresponding WAT archive segment (see, e.g., [this tutorial](https://dmorgan.info/posts/common-crawl-python/)). I only need a subset of the WAT file, so I would like to request a range of bytes. But how do I find the corresponding offset and length for the WAT archive segment?
I have checked the [API documentation](https://github.com/ikreymer/pywb/wiki/CDX-Server-API) for the Common Crawl index server, and it isn't clear to me that this is even possible. But in case it is, I'm posting this question.
|
2017/08/28
|
[
"https://Stackoverflow.com/questions/45920527",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2861935/"
] |
The Common Crawl index does not contain offsets into WAT and WET files, so the only way is to search the whole WAT/WET file for the desired record/URL. At best you could estimate the offset, because the record order in WARC and WAT/WET files is the same.
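A minimal sketch of that brute-force scan, assuming the `warcio` package and a locally downloaded WAT segment (the filename and `target_url` are placeholders):
```
from warcio.archiveiterator import ArchiveIterator

target_url = 'http://www.example.com/'  # placeholder
with open('CC-MAIN-...-00023.warc.wat.gz', 'rb') as stream:
    for record in ArchiveIterator(stream):
        # WAT records carry the original capture URL in this header
        if record.rec_headers.get_header('WARC-Target-URI') == target_url:
            print(record.content_stream().read())
            break
```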
|
After much trial and error, I managed to get a byte range from a WARC file in Python with boto3 the following way:
```
# You have these from the index
offset, length, filename = 2161478, 12350, "crawl-data/[...].warc.gz"
import boto3
from botocore import UNSIGNED
from botocore.client import Config
# Boto3 anonymous login to Common Crawl
s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))
# Compute the inclusive end of the byte range
offset_end = offset + length - 1
byte_range = 'bytes={offset}-{end}'.format(offset=offset, end=offset_end)
gzipped_text = s3.get_object(Bucket='commoncrawl', Key=filename, Range=byte_range)['Body'].read()
# The requested slice is gzipped bytes, so open the file in binary mode
with open("file.gz", 'wb') as f:
    f.write(gzipped_text)
```
The rest is optimisation... Hope it helps! :)
| 5,633
|
52,898,576
|
I am trying to read JSON data from an input file and pass it as the request body of an HTTP call in Python.
Here are the highlights of my Python code:
```
with open('input.json') as f:
raw_data = json.load(f)
cookies = ...
headers = {
'Content-Type': 'application/json;charset=UTF-8',
'Accept': 'application/json text/plain, */*',
...
}
response = requests.put('https://.../template/...02420afe4907', headers=headers, cookies=cookies, data=raw_data)
```
but it fails with a 400 error. The response contents show:
```
b'<!DOCTYPE html>\n<html lang="en">\n<head>\n<meta charset="utf-8">\n<title>Error</title>\n</head>\n<body>\n<pre>SyntaxError: Unexpected token # in JSON at position 0<br>
```
But if I initialize it directly, such as:
```
raw_data = '{"name":"template-123","comment":"",...}'
```
The call can be made successfully.
This is the my input.json looks like:
```
{
"name":"template-123",
"comment":"",
...
}
```
Does anyone know how to fix this? **I need to get the original data from this file.** Thanks.
|
2018/10/19
|
[
"https://Stackoverflow.com/questions/52898576",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3595231/"
] |
When you pass a `dict` (which is what `raw_data` is) as the `data` argument to `requests.put`, it will be form-encoded, which does not make for valid JSON. Either pass serialized JSON to `data`:
```
requests.put(..., data=json.dumps(raw_data), ...)
```
or use the `json` keyword and let `requests` do the serialization for you:
```
requests.put(..., json=raw_data, ...)
```
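To see the difference concretely, a quick sketch (the URL is a placeholder):
```
import requests

payload = {"name": "template-123", "comment": ""}
# data=payload sends 'name=template-123&comment=' (form-encoded),
# while json=payload sends '{"name": "template-123", "comment": ""}'
# and sets the Content-Type header to application/json automatically.
response = requests.put('https://example.com/template/123', json=payload)
```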
|
```
with open('input.json') as f:
```
Did you mean
```
with open('input.json','r') as f:
```
or
```
with open('input.json','rb') as f:
```
If the data is stored as bytes, you have to open the file with 'rb'.
| 5,634
|
58,660,173
|
Given a list of dictionaries like:
```
history = [
{
"actions": [{"action": "baz", "people": ["a"]}, {"action": "qux", "people": ["d", "e"]}],
"events": ["foo"]
},
{
"actions": [{"action": "baz", "people": ["a", "b", "c"]}],
"events": ["foo", "bar"]
},
]
```
What is the most efficient (whilst still readable) way to get a list of dicts, where each dict is a unique `event` and the list of actions for that event has been merged based on the `action` key? For example, for the above list, the desired output is:
```
output = [
{
"event": "foo",
"actions": [
{"action": "baz", "people": ["a", "b", "c"]},
{"action": "qux", "people": ["d", "e"]}
]
},
{
"event": "bar",
"actions": [
{"action": "baz", "people": ["a", "b", "c"]}
]
},
]
```
I can't change the output structure as it's being consumed by something external. I've written the following code, which works but is very verbose and hard to read.
```
from collections import defaultdict
def transform(history):
d = defaultdict(list)
for item in history:
for event in item["events"]:
d[event] = d[event] + item["actions"]
transformed = []
for event, actions in d.items():
merged_actions = {}
for action in actions:
name = action["action"]
if merged_actions.get(name):
merged_actions[name]["people"] = list(set(action["people"]) | set(merged_actions[name]["people"]))
else:
merged_actions[name] = {
"action": action["action"],
"people": action["people"]
}
transformed.append({
"event": event,
"actions": list(merged_actions.values())
})
return transformed
```
I'm only targeting python3.6+
|
2019/11/01
|
[
"https://Stackoverflow.com/questions/58660173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4007992/"
] |
You can use `collections.defaultdict` with `itertools.groupby`. Note that `groupby` only groups consecutive items, which is why each action list is sorted by `action` first:
```
from collections import defaultdict
from itertools import groupby as gb
d = defaultdict(list)
for i in history:
for b in i['events']:
d[b].extend(i['actions'])
new_d = {a: [(j, list(k))
             for j, k in gb(sorted(b, key=lambda x: x['action']),
                            key=lambda x: x['action'])]
         for a, b in d.items()}
result = [{'event': a,
           'actions': [{'action': c,
                        'people': list(set([i for k in b for i in k['people']]))}
                       for c, b in d]}
          for a, d in new_d.items()]
```
Output:
```
[
{'event': 'foo',
'actions': [
{'action': 'baz', 'people': ['b', 'a', 'c']},
{'action': 'qux', 'people': ['d', 'e']}
]
},
{'event': 'bar',
'actions': [{'action': 'baz', 'people': ['b', 'a', 'c']}]
}
]
```
|
It is not a less verbose answer, but it is maybe a bit more readable. Also it does not depend on anything else and is just standard Python.
```
tmp_dict = {}
for d in history:
for event in d["events"]:
if event not in tmp_dict:
tmp_dict[event] = {}
for actions in d["actions"]:
tmp_dict[event][actions["action"]] = actions["people"]
else:
for actions in d["actions"]:
if actions["action"] in tmp_dict[event]:
tmp_dict[event][actions["action"]].extend(actions["people"])
else:
tmp_dict[event][actions["action"]] = actions["people"]
output = [{"event": event, "actions": [{"action": ac, "people": list(set(peop))} for ac, peop in tmp_dict[event].items()]} for event in tmp_dict]
print (output)
```
Output:
```
[
{'event': 'foo',
'actions': [
{'action': 'qux', 'people': ['e', 'd']},
{'action': 'baz', 'people': ['a', 'c', 'b']}
]
},
{'event': 'bar',
'actions': [{'action': 'baz', 'people': ['a', 'c', 'b']}]
}
]
```
| 5,635
|
25,043,982
|
I'm having some trouble handling JPEG files in Python under AWS Elastic Beanstalk.
I have this in my .ebextensions/python.config file:
```
packages:
yum:
libjpeg-turbo-devel: []
libpng-devel: []
freetype-devel: []
...
```
So I believe I have libjpeg installed and working (I tried libjpeg-devel, but yum can't find that package).
Also, I have this in my requirements.txt:
```
Pillow==2.5.1
...
```
So I believe I have Pillow installed and working in my environment.
Then, since I have Pillow and libjpeg, I'm trying to do some work using PIL.Image in a Python script and save the result to a file, like this:
```
from PIL import Image
def resize_image(image,new_size,crop=False,correctOrientationSize=False):
assert type(new_size) == dict
assert new_size.has_key('width') and new_size.has_key('height')
THUM_SIZE = [new_size['width'],new_size['height']]
file_like = cStringIO.StringIO(base64.decodestring(image))
thumbnail = Image.open(file_like)
(width,height) = thumbnail.size
if correctOrientationSize and height > width:
THUM_SIZE.reverse()
thumbnail.thumbnail(THUM_SIZE)
if crop:
        # Crop the image
thumbnail = crop_image(thumbnail)
output = cStringIO.StringIO()
thumbnail.save(output,format='jpeg')
return output.getvalue().encode('base64')
```
However, when I try to run it on the Elastic Beanstalk instance, the exception "decoder jpeg not available" is raised when the .save() method is called.
If I SSH into my instance, it works just fine, and I have already tried rebuilding the environment.
What am I doing wrong?
**UPDATE:**
As suggested, I SSHed into the instance again and reinstalled Pillow through pip (/opt/python/run/venv/bin/pip), after first making sure libjpeg-devel was in the environment before Pillow.
I ran selftest.py and it confirmed that I had support for JPEG. So, as a last resort, I used "Restart App Server" in the Elastic Beanstalk interface. It worked.
Thank you all.
|
2014/07/30
|
[
"https://Stackoverflow.com/questions/25043982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1541615/"
] |
Following the general advice from [here](https://stackoverflow.com/questions/8915296/python-image-library-fails-with-message-decoder-jpeg-not-available-pil?rq=1), I solved this by adding the following in my .ebextensions configuration and re-deploying.
```
packages:
yum:
libjpeg-turbo-devel: []
libpng-devel: []
freetype-devel: []
container_commands:
...
05_uninstall_pil:
command: "source /opt/python/run/venv/bin/activate && yes | pip uninstall Pillow"
06_reinstall_pil:
command: "source /opt/python/run/venv/bin/activate && yes | pip install Pillow --no-cache-dir"
```
|
As suggested, I SSHed into the instance again and reinstalled Pillow through pip (/opt/python/run/venv/bin/pip), after first making sure libjpeg-devel was in the environment before Pillow.
I ran selftest.py and it confirmed that I had support for JPEG. So, as a last resort, I used "Restart App Server" in the Elastic Beanstalk interface. It worked.
| 5,636
|
25,087,111
|
I'm running a simulation on a 2D space with periodic boundary conditions. A continuous function is represented by its values on a grid. I need to be able to evaluate the function and its gradient at any point in the space. Fundamentally, this isn't a hard problem -- or to be precise, it's an almost already solved problem. The function can be interpolated using a cubic spline with scipy.interpolate.RectBivariateSpline. The reason it's *almost* solved is that RectBivariateSpline cannot handle periodic boundary conditions, nor can anything else in scipy.interpolate, as far as I can figure out from the documentation.
Is there a python package that can do this? If not, can I adapt scipy.interpolate to handle periodic boundary conditions? For instance, would it be enough to put a border of, say, four grid elements around the entire space and explicitly represent the periodic condition on it?
[ADDENDUM] A little more detail, in case it matters: I am simulating the motion of animals in a chemical gradient. The continuous function I mentioned above is the concentration of a chemical that they are attracted to. It changes with time and space according to a straightforward reaction/diffusion equation. Each animal has an x,y position (which cannot be assumed to be at a grid point). They move up the gradient of attractant. I'm using periodic boundary conditions as a simple way of imitating an unbounded space.
|
2014/08/01
|
[
"https://Stackoverflow.com/questions/25087111",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2672942/"
] |
It appears that the Python function that comes closest is scipy.signal.cspline2d. This is exactly what I want, *except* that it assumes mirror-symmetric boundary conditions. Thus, it appears that I have three options:
1. Write my own cubic spline interpolation function that works with periodic boundary conditions, perhaps using the cspline2d sources (which are based on functions written in C) as a starting point.
2. The kludge: the effect of data at i on the spline coefficient at j goes as r^|i-j|, with r = -2 + sqrt(3) ~ -0.26. So the effect of the edge is down to r^20 ~ 10^-5 if I nest the grid within a border of width 20 all the way around that replicates the periodic values, something like this:
```
bzs1 = np.array(
    [zs1[i % n, j % n] for i in range(-20, n + 20) for j in range(-20, n + 20)])
bzs1 = bzs1.reshape((n + 40, n + 40))
```
Then I call cspline2d on the whole array, but use only the middle. This should work, but it's ugly.
3. Use Hermite interpolation instead. In a 2D regular grid, this corresponds to [bicubic interpolation](http://en.wikipedia.org/wiki/Bicubic_interpolation). The disadvantage is that the interpolated function has a discontinuous second derivative. The advantages are it is (1) relatively easy to code, and (2) for my application, computationally efficient. At the moment, this is the solution I'm favoring.
I did the math for interpolation with trig functions rather than polynomials, as @mdurant suggested. It turns out to be very similar to the cubic spline, but requires more computation and produces worse results, so I won't be doing that.
**EDIT:** A colleague told me of a fourth solution:
1. The [GNU Scientific Library](http://www.gnu.org/software/gsl/) (GSL) has interpolation functions that can handle periodic boundary conditions. There are two (at least) python interfaces to GSL: [PyGSL](http://pygsl.sourceforge.net/) and [CythonGSL](https://github.com/twiecki/CythonGSL). Unfortunately, GSL interpolation seems to be restricted to one dimension, so it's not a lot of use to me, but there's lots of good stuff in GSL.
|
Another function that could work is `scipy.ndimage.interpolation.map_coordinates`.
It does spline interpolation with periodic boundary conditions.
It does not directly provide derivatives, but you can calculate them numerically.
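A minimal sketch of that approach, assuming the function lives on an `n`-by-`n` grid with unit spacing; `mode='wrap'` provides the periodic boundary (on newer SciPy versions, `mode='grid-wrap'` is the exactly periodic variant):
```
import numpy as np
from scipy.ndimage import map_coordinates

n = 64
x, y = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
grid = np.sin(2 * np.pi * x / n) * np.cos(2 * np.pi * y / n)  # toy field

# Evaluate at arbitrary fractional (row, col) points with periodic wrap.
pts = np.array([[3.7, 10.2], [63.9, 0.4]])  # shape (ndim, npoints)
vals = map_coordinates(grid, pts, order=3, mode='wrap')

# Gradient along the first axis by central differences, also wrapped.
eps = 1e-4
shift = np.array([[eps], [0.0]])
dfdx = (map_coordinates(grid, pts + shift, order=3, mode='wrap') -
        map_coordinates(grid, pts - shift, order=3, mode='wrap')) / (2 * eps)
```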
| 5,637
|
27,214,053
|
I am getting an error when trying to install uinput.
I tried pip and `easy_install`. I also tried to install it 'manually' from the tar package.
I always get an error. Below is the error I get when installing with `easy_install`.
Can you guide me on how to fix it ?
```
rpi@torpi ~/scripts $ sudo easy_install python-uinput
Searching for python-uinput
Reading http://pypi.python.org/simple/python-uinput/
Best match: python-uinput 0.10.2
Downloading https://pypi.python.org/packages/source/p/python-uinput/python-uinput-0.10.2.tar.gz#md5=abbbbfc50d03a0585a5231d9396f78bd
Processing python-uinput-0.10.2.tar.gz
Running python-uinput-0.10.2/setup.py -q bdist_egg --dist-dir /tmp/easy_install-ZWLsct/python-uinput-0.10.2/egg-dist-tmp-bPeztQ
/usr/bin/ld: cannot find libudev.so
collect2: ld returned 1 exit status
error: Setup script exited with error: command 'gcc' failed with exit status 1
```
|
2014/11/30
|
[
"https://Stackoverflow.com/questions/27214053",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4308766/"
] |
Had this problem and fixed it with
```
sudo apt-get install libudev-dev
```
|
I could not get uinput to install properly, so I ended up using [evdev](https://pypi.python.org/pypi/evdev) instead.
| 5,642
|
49,741,030
|
I am building a RESTful API using Python 3.6, the Falcon Framework, Google App Engine, and Firebase Cloud Firestore. At runtime I am receiving the following error ...
```
File "E:\Bill\Documents\GitHubProjects\LetsHang-BackEnd\lib\google\cloud\firestore_v1beta1\_helpers.py", line 24, in <module> import grpc
File "E:\Bill\Documents\GitHubProjects\LetsHang-BackEnd\lib\grpc\__init__.py", line 22, in <module>
from grpc._cython import cygrpc as _cygrpc
ImportError: cannot import name cygrpc
```
When researching Stack Overflow, I found an [article regarding an AWS Lambda deployment](https://stackoverflow.com/questions/46669176/aws-lambda-to-firestore-error-cannot-import-name-cygrpc), but it suggests a solution based on Docker, and Docker is not a viable option for us. I also found an article off Stack Overflow that suggests running "pip install grpcio". We tried that, without luck.
We build the App Engine dependencies using a requirements.txt file. This file has the following contents ...
```
falcon==1.4.1
google-api-python-client
google-cloud-firestore
firebase-admin
enum34
grpcio
```
We apply the requirements file using the command ...
```
pip install -t lib -r requirements.txt
```
The App Engine server is started with the command ...
```
dev_appserver.py .
```
The development environment is Windows 10.
|
2018/04/09
|
[
"https://Stackoverflow.com/questions/49741030",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7183052/"
] |
You seem to be mixing up the GAE standard and flexible environments:
* using Python 3.6 is only possible in the flexible environment (which, BTW, is fundamentally Docker-based)
* installing app dependencies in the `lib` directory and using `dev_appserver.py` for local development are only applicable to the standard environment
Somehow related: [How to tell if a Google App Engine documentation page applies to the standard or the flexible environment](https://stackoverflow.com/questions/45842772/how-to-tell-if-a-google-app-engine-documentation-page-applies-to-the-standard-or)
|
OK, I will write up my findings just in case there's another fool like me.
First, [Dan's](https://stackoverflow.com/users/4495081/dan-cornilescu) response is correct. I was mixing standard and flexible environments. I had looked up a method for using the Falcon Framework with App Engine; as it turns out, the only article I found uses the standard environment. So that's how I wound up using `dev_appserver.py`. My app, however, is Python 3.6 and has dependencies that prevent stepping down to 2.7.
To develop locally for the flexible environment, you simply need to run as you normally would. In the case of Falcon Framework, that means using the Waitress wsgi server.
I find that it is a good practice to build and use a Python virtual environment. You use the virtualenv command for that. At deployment time, Google builds a docker container for the app in the cloud. To reconstruct all the necessary Python packages, you have to supply a requirements.txt file. If you have a virtual environment, then the requirements file is easily produced using pip freeze.
| 5,645