| qid (int64) | question (string) | date (string) | metadata (list) | response_j (string) | response_k (string) | `__index_level_0__` (int64) |
|---|---|---|---|---|---|---|
13,993,617
|
I am currently using Python v2.6 and trying to merge words into a line. My code is supposed to read data from a text file, in which I have two rows of data, both of which are strings. It then takes the second-row data every time, which are the words of sentences; those are separated by delimiter strings, like this:
Inside the .txt:
```
"delimiter_string"
"row_1_data" "row_2_data"
"row_1_data" "row_2_data"
"row_1_data" "row_2_data"
"row_1_data" "row_2_data"
"row_1_data" "row_2_data"
"delimiter_string"
"row_1_data" "row_2_data"
"row_1_data" "row_2_data"
...
```
Those "row\_2\_data" will add-up to a sentence later. Sorry for the long introduction btw.
Here is my code:
```
import sys
import re

newLine = ''
for line in sys.stdin:
    word = line.split(' ')[1]
    if word == '<S>+BSTag':
        continue
    elif word == '</S>+ESTag':
        print newLine
        newLine = ''
        continue
    else:
        w = re.sub('\[.*?]', '', word)
        if newLine == '':
            newLine += w
        else:
            newLine += ' ' + w
```
"BSTag" is the tag for "Sentence Begins" and "ESTag" is for "Sentence Ends": the so called "delimiters". "re.sub" is used for a special purpose and it works as far as I checked.
The problem is that when I execute this Python script from the command line on Linux with the following command: `cat file.txt | script.py | less`, I cannot see any output, just a blank file.
For those who are not familiar with Linux, I guess the problem has nothing to do with terminal execution, so you can neglect that part. Simply put, the code does not work as intended, and I cannot find a single mistake.
Any help will be appreciated, and thanks for reading the long post :)
---
OK, the problem is solved; it was actually a corpus error rather than a coding one. A very odd entry was detected in the text file, which was causing problems. Removing it solved the issue. You can use either of these approaches, mine or the one presented by "snurre", if you want to do similar text processing.
Cheers.
|
2012/12/21
|
[
"https://Stackoverflow.com/questions/13993617",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1839494/"
] |
```
import sys

def foo(lines):
    output = []
    for line in lines:
        words = line.split()
        if len(words) < 2:
            word = words[0]
        else:
            word = words[1]
        if word == '</S>+ESTag':
            yield ' '.join(output)
            output = []
        elif word != '<S>+BSTag':
            output.append(word)

for sentence in foo(sys.stdin):
    print sentence
```
Your regex is a little funky. From what I can tell, it's replacing anything between (and including) a pair of `[` and `]` with `''`, so it ends up printing empty strings.
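For illustration, here is a minimal REPL sketch of that behavior (hypothetical inputs):
```
>>> import re
>>> re.sub('\[.*?]', '', 'word[Tag]')
'word'
>>> re.sub('\[.*?]', '', '[whole_word]')
''
```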
|
I think the problem is that the script isn't being executed (unless you just excluded the [shebang](http://docs.python.org/2/using/unix.html#miscellaneous) from the code you posted).
Try this:
```
cat file.txt | python script.py | less
```
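As a side note (assuming the usual setup, since the script's first line wasn't shown): if the script keeps a shebang such as `#!/usr/bin/env python` at the top, you can also make it executable and pipe into it directly:
```
chmod +x script.py
cat file.txt | ./script.py | less
```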
| 14,788
|
47,177,112
|
I manually create a PySpark DataFrame as follows:
```
from pyspark.sql.types import StructType, StructField, LongType, StringType

acdata = sc.parallelize([
    [('timestamp', 1506340019), ('pk', 111), ('product_pk', 123), ('country_id', 'FR'), ('channel', 'web')]
])
# Convert to tuple
acdata_converted = acdata.map(lambda x: (x[0][1], x[1][1], x[2][1]))
# Define schema
acschema = StructType([
    StructField("timestamp", LongType(), True),
    StructField("pk", LongType(), True),
    StructField("product_pk", LongType(), True),
    StructField("country_id", StringType(), True),
    StructField("channel", StringType(), True)
])
df = sqlContext.createDataFrame(acdata_converted, acschema)
```
But when I write `df.head()` and do `spark-submit`, I get the following error:
```
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/mnt/yarn/usercache/hdfs/appcache/application_1510134261242_0002/container_1510134261242_0002_01_000003/pyspark.zip/pyspark/worker.py", line 177, in main
process()
File "/mnt/yarn/usercache/hdfs/appcache/application_1510134261242_0002/container_1510134261242_0002_01_000003/pyspark.zip/pyspark/worker.py", line 172, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/mnt/yarn/usercache/hdfs/appcache/application_1510134261242_0002/container_1510134261242_0002_01_000003/pyspark.zip/pyspark/serializers.py", line 268, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/mnt/yarn/usercache/hdfs/appcache/application_1510134261242_0002/container_1510134261242_0002_01_000001/pyspark.zip/pyspark/sql/session.py", line 520, in prepare
File "/mnt/yarn/usercache/hdfs/appcache/application_1510134261242_0002/container_1510134261242_0002_01_000003/pyspark.zip/pyspark/sql/types.py", line 1358, in _verify_type
"length of fields (%d)" % (len(obj), len(dataType.fields)))
ValueError: Length of object (3) does not match with length of fields (12)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:193)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:234)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:152)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
What does this mean, and how can I solve it?
|
2017/11/08
|
[
"https://Stackoverflow.com/questions/47177112",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7316807/"
] |
You need to map all 5 fields to match the schema you defined.
```
acdata_converted = acdata.map(lambda x: (x[0][1], x[1][1], x[2][1], x[3][1], x[4][1]))
```
|
I'd do it this way:
```
acdata = sc.parallelize([{'timestamp': 1506340019, 'pk': 111, 'product_pk': 123, 'country_id': 'FR', 'channel': 'web'}, {...}])
# Define schema
acschema = StructType([
    StructField("timestamp", LongType(), True),
    StructField("pk", LongType(), True),
    StructField("product_pk", LongType(), True),
    StructField("country_id", StringType(), True),
    StructField("channel", StringType(), True)
])
df = sqlContext.createDataFrame(acdata, acschema)
```
Also consider whether you really need to parallelize the data; it's also possible to create a DataFrame directly from a list of dictionaries.
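For illustration, a minimal sketch that skips the intermediate RDD entirely and passes plain tuples matching the schema (reusing the `acschema` and `sqlContext` names from above; the values are the ones from the question):
```
data = [(1506340019, 111, 123, 'FR', 'web')]
df = sqlContext.createDataFrame(data, acschema)
df.head()
```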
| 14,789
|
70,828,210
|
I have a `Django` project and want to look at another DB (not the default DB, which was created by `PHP Symfony`).
Django can be set up with two DBs in `settings.py`:
```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        "NAME": config("DB_NAME"),
        "USER": config("DB_USER"),
        "PASSWORD": config("DB_PASSWORD"),
        "HOST": config("DB_HOST"),
        "PORT": config("DB_PORT"),
        'OPTIONS': {
            'charset': 'utf8mb4',
            'init_command': "SET sql_mode='STRICT_TRANS_TABLES'"
        },
    },
    'extern': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': config("DB_EXTERN_NAME"),
        'USER': config("DB_EXTERN_USER"),
        'PASSWORD': config("DB_EXTERN_PASSWORD"),
        'HOST': config("DB_EXTERN_HOST"),
        'PORT': config("DB_EXTERN_PORT"),
    }
}
```
and set up `models.py`, for example:
```
class Area(models.Model):
    class Meta:
        db_table = 'my_area'
```
However, `python manage.py makemigrations` and `python manage.py migrate` then try to change the extern database.
I just want to reference the extern DB, not alter it.
Is there any good practice for this?
|
2022/01/24
|
[
"https://Stackoverflow.com/questions/70828210",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1942868/"
] |
You may be looking for the [`managed = False`](https://docs.djangoproject.com/en/4.0/ref/models/options/#django.db.models.Options.managed) meta setting on your models. That will cause Django not to try to manage those models (such as creating migrations for them). It's commonly used when working with externally managed databases.
EG:
```
class Area(models.Model):
    class Meta:
        db_table = 'my_area'
        managed = False
```
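With `managed = False` in place, you can still read from the second connection explicitly. A minimal sketch, assuming the `extern` alias from the settings above:
```
# Query the external table through the 'extern' connection
areas = Area.objects.using('extern').all()
```
For fully transparent routing you could also define a database router, but for simple read-only access `using()` is often enough.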
|
I think what you need to do is run
```
python manage.py migrate --database extern
```
| 14,790
|
43,433,406
|
I am trying to run an app that was not written by me.
When I write
`python manage.py makemigrations`
I got:
```
Traceback (most recent call last):
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\backends\utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\backends\sqlite3\base.py", line 337, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.OperationalError: no such table: django_content_type
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\core\management\__init__.py", line 367, in execute_from_command_line
utility.execute()
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\core\management\__init__.py", line 341, in execute
django.setup()
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\__init__.py", line 27, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\apps\registry.py", line 115, in populate
app_config.ready()
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\apps.py", line 23, in ready
self.module.autodiscover()
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\__init__.py", line 26, in autodiscover
autodiscover_modules('admin', register_to=site)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\utils\module_loading.py", line 50, in autodiscover_modules
import_module('%s.%s' % (app_config.name, module_to_search))
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 978, in _gcd_import
File "<frozen importlib._bootstrap>", line 961, in _find_and_load
File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "C:\Users\direwolf\Documents\web\python\alexbog80-motivity-3e5c21f03b3e\app\motivity\admin.py", line 23, in <module>
admin.site.register(Offer, OfferAdmin)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\sites.py", line 110, in register
system_check_errors.extend(admin_obj.check())
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\options.py", line 117, in check
return self.checks_class().check(self, **kwargs)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\checks.py", line 520, in check
errors.extend(self._check_list_display(admin_obj))
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\checks.py", line 596, in _check_list_display
for index, item in enumerate(obj.list_display)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\checks.py", line 596, in <listcomp>
for index, item in enumerate(obj.list_display)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\checks.py", line 604, in _check_list_display_item
elif hasattr(model, item):
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\tagging\fields.py", line 55, in __get__
return edit_string_for_tags(Tag.objects.usage_for_model(owner))
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\tagging\models.py", line 157, in usage_for_model
usage = self.usage_for_queryset(queryset, counts, min_count)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\tagging\models.py", line 183, in usage_for_queryset
extra_joins, extra_criteria, params)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\tagging\models.py", line 113, in _get_usage
'content_type_id': ContentType.objects.get_for_model(model).pk,
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\contenttypes\models.py", line 52, in get_for_model
ct = self.get(app_label=opts.app_label, model=opts.model_name)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\models\manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\models\query.py", line 379, in get
num = len(clone)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\models\query.py", line 238, in __len__
self._fetch_all()
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\models\query.py", line 1087, in _fetch_all
self._result_cache = list(self.iterator())
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\models\query.py", line 54, in __iter__
results = compiler.execute_sql()
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\models\sql\compiler.py", line 835, in execute_sql
cursor.execute(sql, params)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\backends\utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\backends\utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\utils\six.py", line 685, in reraise
raise value.with_traceback(tb)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\backends\utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\backends\sqlite3\base.py", line 337, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such table: django_content_type
```
What do I do?
Update 1:
`python manage.py migrate` traceback:
```
Traceback (most recent call last):
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\backends\utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\backends\sqlite3\base.py", line 337, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.OperationalError: no such table: django_content_type
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\core\management\__init__.py", line 367, in execute_from_command_line
utility.execute()
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\core\management\__init__.py", line 341, in execute
django.setup()
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\__init__.py", line 27, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\apps\registry.py", line 115, in populate
app_config.ready()
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\apps.py", line 23, in ready
self.module.autodiscover()
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\__init__.py", line 26, in autodiscover
autodiscover_modules('admin', register_to=site)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\utils\module_loading.py", line 50, in autodiscover_modules
import_module('%s.%s' % (app_config.name, module_to_search))
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 978, in _gcd_import
File "<frozen importlib._bootstrap>", line 961, in _find_and_load
File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "C:\Users\direwolf\Documents\web\python\alexbog80-motivity-3e5c21f03b3e\app\motivity\admin.py", line 23, in <module>
admin.site.register(Offer, OfferAdmin)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\sites.py", line 110, in register
system_check_errors.extend(admin_obj.check())
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\options.py", line 117, in check
return self.checks_class().check(self, **kwargs)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\checks.py", line 520, in check
errors.extend(self._check_list_display(admin_obj))
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\checks.py", line 596, in _check_list_display
for index, item in enumerate(obj.list_display)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\checks.py", line 596, in <listcomp>
for index, item in enumerate(obj.list_display)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\admin\checks.py", line 604, in _check_list_display_item
elif hasattr(model, item):
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\tagging\fields.py", line 55, in __get__
return edit_string_for_tags(Tag.objects.usage_for_model(owner))
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\tagging\models.py", line 157, in usage_for_model
usage = self.usage_for_queryset(queryset, counts, min_count)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\tagging\models.py", line 183, in usage_for_queryset
extra_joins, extra_criteria, params)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\tagging\models.py", line 113, in _get_usage
'content_type_id': ContentType.objects.get_for_model(model).pk,
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\contrib\contenttypes\models.py", line 52, in get_for_model
ct = self.get(app_label=opts.app_label, model=opts.model_name)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\models\manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\models\query.py", line 379, in get
num = len(clone)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\models\query.py", line 238, in __len__
self._fetch_all()
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\models\query.py", line 1087, in _fetch_all
self._result_cache = list(self.iterator())
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\models\query.py", line 54, in __iter__
results = compiler.execute_sql()
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\models\sql\compiler.py", line 835, in execute_sql
cursor.execute(sql, params)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\backends\utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\backends\utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\utils\six.py", line 685, in reraise
raise value.with_traceback(tb)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\backends\utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "C:\Users\direwolf\AppData\Local\Programs\Python\Python36-32\lib\site-packages\django\db\backends\sqlite3\base.py", line 337, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such table: django_content_type
```
Update 2:
Offer class in `models.py`
```
class Offer(TimeStampMixin, SEOFieldsMixin):
    title = models.CharField(u'Название предложения', max_length=255, blank=False)
    discription = models.TextField(u'Красивое описание оффера', null=False, blank=False)
    cover = models.ImageField(u'Обложка', blank=False, null=False)
    tags = TagAutocompleteField(blank=False, verbose_name='Теги')
    active = models.BooleanField(u'Активный?', default=True)
    publish = models.BooleanField(u'Показывать на сайте?', default=True)

    def preview_image(self):
        # try:
        thumbnail = get_thumbnail(self.cover, 'x50', crop='center')
        # except TypeError:
        #     return u'Нет картинки'
        # except:
        #     return u'Нет картинки'  # if original img not exist
        return '<a href="%s/"><img src="%s"/></a>' % (self.id, thumbnail.url)
    preview_image.short_description = u'Обложка'
    preview_image.allow_tags = True

    class Meta:
        verbose_name = u'Предложение'
        verbose_name_plural = u'Предложения'
        ordering = ('-modified_value',)
        get_latest_by = 'created_value'

    def __unicode__(self):
        return u'%s' % self.title
```
I didn't find an admin class, only this:
```
from django.contrib import admin
from app.motivity.models import TaskOffer, Offer, UserOffers

class TaskOfferAdmin(admin.TabularInline):
    model = TaskOffer
    exclude = ['meta_title', 'meta_description', 'meta_keywords']

class OfferAdmin(admin.ModelAdmin):
    list_display = ['preview_image', 'title', 'tags', 'publish']
    list_display_links = ['title']
    list_filter = ['tags']
    inlines = [TaskOfferAdmin, ]
    fields = ['title', 'discription', 'cover', 'tags']

class UserOfferAdmin(admin.ModelAdmin):
    pass

admin.site.register(UserOffers, UserOfferAdmin)
admin.site.register(Offer, OfferAdmin)
```
|
2017/04/16
|
[
"https://Stackoverflow.com/questions/43433406",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2950593/"
] |
I had the same issue while using **Python 3.6.5** and **Django==2.1.7** on **Mac OS 10.15.2**. I fixed it by manually creating the table `django_content_type` with the columns `id`, `app_label`, and `model`.
When you run `python manage.py migrate` and the error appears, there should be a `*.sqlite3` file created at the path specified under the `DATABASES` key in your `settings.py` file.
**Example:**
```
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'base_site.sqlite3'),
    }
}
```
My Sqlite DB filename is `base_site.sqlite3` and is created at the `BASE_DIR` of the project.
**Steps to follow:**
* Connect to the created SQLite DB using:
```
sqlite3 <DB filename>
```
* Create the table using the following command:
```
CREATE TABLE IF NOT EXISTS "django_content_type" (
    "id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
    "app_label" varchar(100) NOT NULL,
    "model" varchar(100) NOT NULL
);
```
* Finally run the migrations using the command below:
```
python manage.py migrate --fake-initial
```
This should run the migrations as expected.
|
It seems I was using Python 3.x, and this app was written using Python 2.x.
| 14,793
|
20,492,625
|
I've written tests for my Python code and want to check what percentage is covered by tests, so I decided to use Python coverage. But I have a problem launching it. I launch my tests with this bash command:
```
export PYTHONPATH=. && python files/test/tests.py
```
My Python program is in the "files" directory and the tests are in "test", so I can't launch it any other way.
Using
```
export PYTHONPATH=. && python coverage files/test/tests.py
```
raises an error. How do I correctly use coverage in my situation?
|
2013/12/10
|
[
"https://Stackoverflow.com/questions/20492625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2876296/"
] |
The correct way to do this is to use an appropriate **coverage** plugin for the unit testing framework/runner you are using:
Here are some combinations:
* [pytest](http://pytest.org/latest/) + [pytest-cov](https://pypi.python.org/pypi/pytest-cov)
* [nose](https://pypi.python.org/pypi/nose/1.3.0) + [nose-cov](https://pypi.python.org/pypi/nose-cov)
There are probably other tools and combinations you can use, but these two are probably the most common (*no reference*).
|
```
coverage run files/test/tests.py
```
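After `coverage run` completes, you can inspect the collected data with coverage's reporting command (the `-m` flag lists the line numbers that were missed):
```
coverage report -m
```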
| 14,796
|
53,129,263
|
Following this tutorial on [gRPC basics](https://grpc.io/docs/tutorials/basic/python.html),
I cloned `https://github.com/grpc/grpc` locally,
did `cd examples/python/helloworld`,
started the server with `python greeter_server.py`,
then started the client with `python greeter_client.py`,
but got this error:
```
Traceback (most recent call last):
File "greeter_client.py", line 35, in <module>
run()
File "greeter_client.py", line 30, in run
response = stub.SayHello(helloworld_pb2.HelloRequest(name='you'))
File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 533, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline) grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Socket closed"
debug_error_string = "{"created":"@1541228979.471085000","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"Socket closed","grpc_status":14}"
```
Then I executed `sudo python greeter_client.py` and got the correct result.
Why do I need to add sudo to get the correct result?
|
2018/11/03
|
[
"https://Stackoverflow.com/questions/53129263",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6449456/"
] |
1. I found I had set a global HTTP proxy (`export http_proxy=http://127.0.0.1:1087`). Once I disabled the proxy, it worked fine.
2. Updating `greeter_client.py` to change `localhost` to `127.0.0.1` also fixed it for me.
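If you need to keep the global proxy for other tools, another possible workaround (a sketch based on the proxy diagnosis above, not verified against this exact setup) is to exempt local addresses from proxying before starting the client:
```
export no_proxy=localhost,127.0.0.1
python greeter_client.py
```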
|
Could you try a few options and share your feedback?
**Option 1**
Use another port (other than 50051) in the [client](https://github.com/grpc/grpc/blob/master/examples/python/helloworld/greeter_client.py) and [server](https://github.com/grpc/grpc/blob/master/examples/python/helloworld/greeter_server.py#L36).
**Option 2**
Try with 0.0.0.0 in the [client](https://github.com/grpc/grpc/blob/master/examples/python/helloworld/greeter_client.py).
Thanks,
Dheeraj
| 14,797
|
41,434,350
|
I'm working on a project and want to download a CSV file from a URL. I did some research on the site, but none of the solutions presented worked for me.
The URL directly offers to download or open the file, so I don't know how to tell Python to save the file (it would be nice if I could also rename it).
But when I open the URL with this code, nothing happens:
```
import urllib.request
url='https://data.toulouse-metropole.fr/api/records/1.0/download/?dataset=dechets-menagers-et-assimiles-collectes'
testfile = urllib.request.urlopen(url)
```
Any ideas?
|
2017/01/02
|
[
"https://Stackoverflow.com/questions/41434350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6435119/"
] |
Try this. Change "folder" to a folder on your machine
```
import os
import requests
url='https://data.toulouse-metropole.fr/api/records/1.0/download/?dataset=dechets-menagers-et-assimiles-collectes'
response = requests.get(url)
with open(os.path.join("folder", "file"), 'wb') as f:
f.write(response.content)
```
|
You can adapt an example from [the docs](https://docs.python.org/3/howto/urllib2.html)
```
import urllib.request
url='https://data.toulouse-metropole.fr/api/records/1.0/download/?dataset=dechets-menagers-et-assimiles-collectes'
with urllib.request.urlopen(url) as testfile, open('dataset.csv', 'w') as f:
f.write(testfile.read().decode())
```
| 14,798
|
35,780,768
|
I am getting this message when I try to install the AWS CLI. Does anyone have any idea what's going on?
```
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip/basecommand.py", line 209, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip/commands/install.py", line 317, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip/req/req_set.py", line 725, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip/req/req_install.py", line 756, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip/utils/__init__.py", line 266, in renames
shutil.move(old, new)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
copy2(src, real_dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2
copystat(src, dst)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/tmp/pip-TIuiKe-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'
```
The command I'm using is `pip install awscli`.
|
2016/03/03
|
[
"https://Stackoverflow.com/questions/35780768",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5505587/"
] |
Pure virtual functions will never cause anything to fail during linking. Instead, pure virtual functions will cause a compilation error if you try to instantiate the object of an abstract type.
Reminder - an abstract type is a type which has (directly or indirectly through inheritance) at least one pure virtual function which was not overridden.
|
This may compile or build if the solution has been previously built, which means it is using old object code. Try doing a clean build of your solution, then rebuild it. Once your solution is cleaned, try to compile the inherited class. Another thing that may be of concern is that you have declared your inherited class as:
>
> class AImpl : A { ... };
>
>
>
My question on this is: how is `AImpl` being inherited from `A`? Are you intending `public`, `protected`, or `private` inheritance?
**EDIT**
If you are linking to this as a library and your current solution does not show any compile, build, or link errors, this is because your current solution is using the old `lib` or `dll` that was already built. If you go back into your library solution and do a clean build, it should not compile, and thus you won't have a newer version of your library to link to.
| 14,799
|
33,312,175
|
I want to use [`re.MULTILINE`](https://docs.python.org/2/library/re.html#re.MULTILINE) but **NOT** [`re.DOTALL`](https://docs.python.org/2/library/re.html#re.DOTALL), so that I can have a regex that includes both an "any character" wildcard and the normal `.` wildcard that doesn't match newlines.
Is there a way to do this? What should I use to match any character in those instances that I want to include newlines?
|
2015/10/23
|
[
"https://Stackoverflow.com/questions/33312175",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/44330/"
] |
To match a newline, or "any symbol" without `re.S`/`re.DOTALL`, you may use any of the following:
1. `(?s).` - the [inline modifier group](https://www.regular-expressions.info/modifiers.html) with `s` flag on sets a scope where all `.` patterns match any char including line break chars
2. Any of the following work-arounds:
```
[\s\S]
[\w\W]
[\d\D]
```
The main idea is that the opposite shorthand classes inside a character class match any symbol there is in the input string.
Comparing it to `(.|\s)` and other variations with alternation, the character class solution is much more efficient as it involves much less backtracking (when used with a `*` or `+` quantifier). Compare the small example: it takes [`(?:.|\n)+`](https://regex101.com/r/pX7lM6/1) 45 steps to complete, and it takes [`[\s\S]+`](https://regex101.com/r/pX7lM6/2) just 2 steps.
See a [Python demo](https://ideone.com/GZEQNf) where I am matching a line starting with `123` and up to the first occurrence of `3` at the start of a line and including the rest of that line:
```py
import re
text = """abc
123
def
356
more text..."""
print( re.findall(r"^123(?s:.*?)^3.*", text, re.M) )
# => ['123\ndef\n356']
print( re.findall(r"^123[\w\W]*?^3.*", text, re.M) )
# => ['123\ndef\n356']
```
|
Match any character (including new line):
-----------------------------------------
Regular expression (note that the space ' ' is included as well):
```
[\S\n\t\v ]
```
Example:
--------
```
import re

text = 'abc def ###A quick brown fox.\nIt jumps over the lazy dog### ghi jkl'
# We want to extract "A quick brown fox.\nIt jumps over the lazy dog",
# so capture the part between the ### delimiters in a group
matches = re.findall(r'###([\S\n ]+)###', text)
print(matches[0])
```
**`matches[0]` will contain:**
'A quick brown fox.**\n**It jumps over the lazy dog'
Description of '\S' Python docs:
--------------------------------
`\S`
Matches any character which is not a whitespace character.
( See: <https://docs.python.org/3/library/re.html#regular-expression-syntax> )
| 14,801
|
66,880,698
|
I have a notebook that runs overnight, and prints out a bunch of stuff, including images and such. I want to cause this output to be saved programatically (perhaps at certain intervals). I also want to save the code that was run. In a Jupyter notebook, you could do:
```
from IPython.display import display, Javascript
display(Javascript('IPython.notebook.save_checkpoint();'))
# causes the current .ipynb file to save itself (same as hitting CTRL+s)
```
(from [Save an IPython notebook programmatically from within itself?](https://stackoverflow.com/questions/32237275/save-an-ipython-notebook-programmatically-from-within-itself))
However, I found that this JavaScript injection did not work in JupyterLab (`Jupyter` not found). My question is how to do the equivalent of the above code in JupyterLab. Upon inspecting the HTML of JupyterLab, I could not find the `Jupyter` object.
|
2021/03/31
|
[
"https://Stackoverflow.com/questions/66880698",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11632499/"
] |
You can use [ipylab](https://github.com/jtpio/ipylab) to access JupyterLab API from Python. To save the notebook just invoke the `docmanager:save` command:
```py
from ipylab import JupyterFrontEnd
app = JupyterFrontEnd()
app.commands.execute('docmanager:save')
```
You can get the full list of commands with `app.commands.list_commands()`.
|
JupyterLab has a built-in auto-save function. You can configure the time interval using the Advanced Settings Editor, in the Document Manager section (see screenshot below).
[](https://i.stack.imgur.com/01vR4.png)
However, if you *really* want a JavaScript solution you could just invoke the keyboard shortcut `Ctrl` + `s` with:
```
from IPython.display import display, Javascript
display(Javascript(
"document.body.dispatchEvent("
"new KeyboardEvent('keydown', {key:'s', keyCode: 83, ctrlKey: true}"
"))"
))
```
This will only work as long as you do not change focus to a different notebook. However, you can always use an invisible HTML node, such as an input, to reclaim the focus first:
```
from IPython.display import display, HTML
script = """
this.nextElementSibling.focus();
this.dispatchEvent(new KeyboardEvent('keydown', {key:'s', keyCode: 83, ctrlKey: true}));
"""
display(HTML((
'<img src onerror="{}" style="display:none">'
'<input style="width:0;height:0;border:0">'
).format(script)))
```
And you can always wrap the script in `window.setTimeout` or `window.setInterval`, but it should not be needed thanks to the built-in auto-save function of JupyterLab.
| 14,802
|
42,103,367
|
I am using multiprocessing.Pool.imap to run many independent jobs in parallel using Python 2.7 on Windows 7. With the default settings, my total CPU usage is pegged at 100%, as measured by Windows Task Manager. This makes it impossible to do any other work while my code runs in the background.
I've tried limiting the number of processes to be the number of CPUs minus 1, as described in [How to limit the number of processors that Python uses](https://stackoverflow.com/questions/5495203/how-to-limit-the-number-of-processors-that-python-uses):
```
pool = Pool(processes=max(multiprocessing.cpu_count()-1, 1))
for p in pool.imap(func, iterable):
...
```
This does reduce the total number of running processes. However, each process just takes up more cycles to make up for it. So my total CPU usage is still pegged at 100%.
Is there a way to directly limit the total CPU usage - NOT just the number of processes - or failing that, is there any workaround?
|
2017/02/08
|
[
"https://Stackoverflow.com/questions/42103367",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4410133/"
] |
The solution depends on what you want to do. Here are a few options:
Lower priorities of processes
-----------------------------
You can [`nice`](https://en.wikipedia.org/wiki/Nice_(Unix)) the subprocesses. This way, though they will still eat 100% of the CPU, the OS gives preference to other applications when you start them. If you want to leave a work-intensive computation running in the background on your laptop and don't care about the CPU fan running all the time, then setting the nice value with `psutil` is your solution. This test script runs on all cores for long enough that you can see how it behaves.
```
from multiprocessing import Pool, cpu_count
import math
import psutil
import os

def f(i):
    return math.sqrt(i)

def limit_cpu():
    "is called at every process start"
    p = psutil.Process(os.getpid())
    # set to lowest priority; this is Windows only, on Unix use p.nice(19)
    p.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)

if __name__ == '__main__':
    # start "number of cores" processes
    pool = Pool(None, limit_cpu)
    for p in pool.imap(f, range(10**8)):
        pass
```
The trick is that `limit_cpu` is run at the beginning of every process (see the [`initializer` argument in the doc](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool)). Whereas Unix has levels -19 (highest prio) to 19 (lowest prio), Windows has [a few distinct levels](https://msdn.microsoft.com/en-us/library/ms686219(v=vs.85).aspx) for giving priority. `BELOW_NORMAL_PRIORITY_CLASS` probably fits your requirements best; there is also `IDLE_PRIORITY_CLASS`, which tells Windows to run your process only when the system is idle.
You can view the priority if you switch to detail mode in Task Manager and right click on the process:
[](https://i.stack.imgur.com/K9QY4.png)
Lower number of processes
-------------------------
Although you have rejected this option, it still might be a good one: say you limit the number of subprocesses to half the CPU cores using `pool = Pool(max(cpu_count()//2, 1))`; then the OS initially runs those processes on half the CPU cores, while the others stay idle or just run the other applications currently running. After a short time, the OS reschedules the processes and might move them to other CPU cores, etc. Both Windows and Unix-based systems behave this way.
**Windows: Running 2 processes on 4 cores:**

**OSX: Running 4 processes on 8 cores**:
[](https://i.stack.imgur.com/K9QY4.png)
You see that both OSes balance the processes between the cores, although not evenly, so you still see a few cores with higher percentages than others.
Sleep
-----
If you absolutely want to make sure that your processes never eat 100% of a certain core (e.g. if you want to prevent the CPU fan from spinning up), then you can run sleep in your processing function:
```
from time import sleep

def f(i):
    sleep(0.01)
    return math.sqrt(i)
```
This makes the OS "schedule out" your process for `0.01` seconds for each computation and makes room for other applications. If there are no other applications, the CPU core is idle, so it will never go to 100%. You'll need to play around with different sleep durations; it will also vary from computer to computer. If you want to make it very sophisticated, you could adapt the sleep depending on what `cpu_times()` reports.
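For instance, a rough sketch of such an adaptive sleep (the tuning constants here are made up, and it uses `psutil.cpu_percent` instead of `cpu_times()` for brevity):
```
import math
import psutil
from time import sleep

def f(i):
    # sleep longer when the machine is busier (constants are arbitrary)
    load = psutil.cpu_percent(interval=None) / 100.0
    sleep(0.005 + 0.02 * load)
    return math.sqrt(i)
```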
|
On the OS level
---------------
You can use `nice` to set a priority for a single command. You could also start a Python script with nice. (Below from: <http://blog.scoutapp.com/articles/2014/11/04/restricting-process-cpu-usage-using-nice-cpulimit-and-cgroups>)
>
> **nice**
>
>
> The nice command tweaks the priority level of a process so that it runs less frequently. This is useful when you need to run a
> CPU intensive task as a background or batch job. The niceness level
> ranges from -20 (most favorable scheduling) to 19 (least favorable).
> Processes on Linux are started with a niceness of 0 by default. The
> nice command (without any additional parameters) will start a process
> with a niceness of 10. At that level the scheduler will see it as a
> lower priority task and give it less CPU resources.Start two
> matho-primes tasks, one with nice and one without:
>
>
>
```
nice matho-primes 0 9999999999 > /dev/null &
matho-primes 0 9999999999 > /dev/null &
```
>
> Now run top.
>
>
>
[](https://i.stack.imgur.com/CPuTD.jpg)
As a function in Python
-----------------------
Another approach is to use psutil to check the CPU load average for the past minute, then have your threads check that load average and spool up another thread if you are below the specified CPU load target, or sleep or kill the thread if you are above it. This will get out of your way when you are using your computer, but will maintain a constant CPU load.
```
# Import Python modules
import time
import os
import multiprocessing
import psutil
import math
from random import randint

# Main task function
def main_process(item_queue, args_array):
    # Go through each link in the array passed in.
    while not item_queue.empty():
        # Get the next item in the queue
        item = item_queue.get()
        # Create a random number to simulate threads that
        # are not all going to be the same
        randomizer = randint(100, 100000)
        for i in range(randomizer):
            algo_seed = math.sqrt(math.sqrt(i * randomizer) % randomizer)
            # Check if the thread should continue based on current load balance
            if spool_down_load_balance():
                print "Process " + str(os.getpid()) + " saying goodnight..."
                break

# This function will build a queue and
def start_thread_process(queue_pile, args_array):
    # Create a Queue to hold link pile and share between threads
    item_queue = multiprocessing.Queue()
    # Put all the initial items into the queue
    for item in queue_pile:
        item_queue.put(item)
    # Append the load balancer thread to the loop
    load_balance_process = multiprocessing.Process(target=spool_up_load_balance, args=(item_queue, args_array))
    # Loop through and start all processes
    load_balance_process.start()
    # This .join() function prevents the script from progressing further.
    load_balance_process.join()

# Spool down the thread balance when load is too high
def spool_down_load_balance():
    # Get the count of CPU cores
    core_count = psutil.cpu_count()
    # Calculate the short term load average of past minute
    one_minute_load_average = os.getloadavg()[0] / core_count
    # If load balance above the max return True to kill the process
    if one_minute_load_average > args_array['cpu_target']:
        print "-Unacceptable load balance detected. Killing process " + str(os.getpid()) + "..."
        return True

# Load balancer thread function
def spool_up_load_balance(item_queue, args_array):
    print "[Starting load balancer...]"
    # Get the count of CPU cores
    core_count = psutil.cpu_count()
    # While there is still links in queue
    while not item_queue.empty():
        print "[Calculating load balance...]"
        # Check the 1 minute average CPU load balance
        # returns 1,5,15 minute load averages
        one_minute_load_average = os.getloadavg()[0] / core_count
        # If the load average much less than target, start a group of new threads
        if one_minute_load_average < args_array['cpu_target'] / 2:
            # Print message and log that load balancer is starting another thread
            print "Starting another thread group due to low CPU load balance of: " + str(one_minute_load_average * 100) + "%"
            time.sleep(5)
            # Start another group of threads
            for i in range(3):
                start_new_thread = multiprocessing.Process(target=main_process, args=(item_queue, args_array))
                start_new_thread.start()
            # Allow the added threads to have an impact on the CPU balance
            # before checking the one minute average again
            time.sleep(20)
        # If load average less than target start single thread
        elif one_minute_load_average < args_array['cpu_target']:
            # Print message and log that load balancer is starting another thread
            print "Starting another single thread due to low CPU load balance of: " + str(one_minute_load_average * 100) + "%"
            # Start another thread
            start_new_thread = multiprocessing.Process(target=main_process, args=(item_queue, args_array))
            start_new_thread.start()
            # Allow the added threads to have an impact on the CPU balance
            # before checking the one minute average again
            time.sleep(20)
        else:
            # Print CPU load balance
            print "Reporting stable CPU load balance: " + str(one_minute_load_average * 100) + "%"
            # Sleep for another minute while
            time.sleep(20)

if __name__ == "__main__":
    # Set the queue size
    queue_size = 10000
    # Define an arguments array to pass around all the values
    args_array = {
        # Set some initial CPU load values as a CPU usage goal
        "cpu_target": 0.60,
        # When CPU load is significantly low, start this number
        # of threads
        "thread_group_size": 3
    }
    # Create an array of fixed length to act as queue
    queue_pile = list(range(queue_size))
    # Set main process start time
    start_time = time.time()
    # Start the main process
    start_thread_process(queue_pile, args_array)
    print '[Finished processing the entire queue! Time consuming:{0} Time Finished: {1}]'.format(time.time() - start_time, time.strftime("%c"))
```
| 14,805
|
16,867,347
|
From what I have read, there are two ways to debug code in Python:
* With a traditional debugger such as `pdb` or `ipdb`. This supports commands such as `c` for `continue`, `n` for `step-over`, and `s` for `step-into`, but you don't have direct access to an IPython shell, which can be extremely useful for object inspection.
* Using **IPython** by [embedding](http://ipython.org/ipython-doc/dev/interactive/reference.html#embedding-ipython) an IPython shell in your code. You can do `from IPython import embed`, and then use `embed()` in your code. When your program/script hits an `embed()` statement, you are dropped into an IPython shell. This allows the full inspection of objects and testing of Python code using all the IPython goodies. However, when using `embed()` you can't **step-by-step** through the code anymore with handy keyboard shortcuts.
Is there any way to combine the best of both worlds? I.e.
1. Be able to **step-by-step** through your code with handy pdb/ipdb keyboard shortcuts.
2. At any such step (e.g. on a given statement), have access to a full-fledged **IPython shell**.
### IPython debugging *as in* MATLAB:
An example of this type of "enhanced debugging" can be found in MATLAB, where the user **always** has full access to the MATLAB engine/shell, and she can still **step-by-step** through her code, define conditional breakpoints, etc. From what I have discussed with other users, this is the debugging feature that people miss the most when moving from MATLAB to IPython.
### IPython debugging in Emacs and other editors:
I don't want to make the question too specific, but I work mostly in Emacs, so I wonder if there is any way to bring this functionality into it. **Ideally**, Emacs (or the editor) would allow the programmer to set breakpoints anywhere on the code and communicate with the interpreter or debugger to have it stop in the location of your choice, and bring to a full IPython interpreter on that location.
|
2013/05/31
|
[
"https://Stackoverflow.com/questions/16867347",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/283296/"
] |
(Update on May 28, 2016) Using RealGUD in Emacs
===============================================
For anyone in Emacs, [this thread](https://github.com/rocky/emacs-dbgr/issues/96) shows how to accomplish everything described in the OP (and more) using
1. a new important debugger in Emacs called **RealGUD** which can operate with any debugger (including `ipdb`).
2. The Emacs package `isend-mode`.
The combination of these two packages is extremely powerful and allows one to recreate exactly the behavior described in the OP and do even more.
More info on [the wiki article](https://github.com/rocky/emacs-dbgr/wiki/ipdb-notes) of RealGUD for ipdb.
---
Original answer:
================
After having tried many different methods for debugging Python, including everything mentioned in this thread, one of my preferred ways of debugging Python with IPython is with embedded shells.
### Defining a custom embedded IPython shell:
Add the following to a script on your `PYTHONPATH`, so that the method `ipsh()` becomes available.
```
import inspect

# First import the embed function
from IPython.terminal.embed import InteractiveShellEmbed
from IPython.config.loader import Config

# Configure the prompt so that I know I am in a nested (embedded) shell
cfg = Config()
prompt_config = cfg.PromptManager
prompt_config.in_template = 'N.In <\\#>: '
prompt_config.in2_template = '   .\\D.: '
prompt_config.out_template = 'N.Out<\\#>: '

# Messages displayed when I drop into and exit the shell.
banner_msg = ("\n**Nested Interpreter:\n"
              "Hit Ctrl-D to exit interpreter and continue program.\n"
              "Note that if you use %kill_embedded, you can fully deactivate\n"
              "This embedded instance so it will never turn on again")
exit_msg = '**Leaving Nested interpreter'

# Wrap it in a function that gives me more context:
def ipsh():
    ipshell = InteractiveShellEmbed(config=cfg, banner1=banner_msg, exit_msg=exit_msg)
    frame = inspect.currentframe().f_back
    msg = 'Stopped at {0.f_code.co_filename} at line {0.f_lineno}'.format(frame)
    # Go back one level!
    # This is needed because the call to ipshell is inside the function ipsh()
    ipshell(msg, stack_depth=2)
```
Then, whenever I want to debug something in my code, I place `ipsh()` right at the location where I need to do object inspection, etc. For example, say I want to debug `my_function` below
### Using it:
```
def my_function(b):
    a = b
    ipsh()  # <- This will embed a full-fledged IPython interpreter
    a = 4
```
and then I invoke `my_function(2)` in one of the following ways:
1. Either by running a Python program that invokes this function from a Unix shell
2. Or by invoking it directly from IPython
Regardless of how I invoke it, the interpreter stops at the line that says `ipsh()`. Once you are done, you can hit `Ctrl-D` and Python will resume execution (with any variable updates that you made). Note that if you run the code from a regular IPython shell (case 2 above), the new IPython shell will be **nested** inside the one from which you invoked it, which is perfectly fine, but it's good to be aware of. Either way, once the interpreter stops at the location of `ipsh`, I can inspect the value of `a` (which will be `2`), see what functions and objects are defined, etc.
### The problem:
The solution above can be used to have Python stop anywhere you want in your code, and then drop you into a fully-fledged IPython interpreter. Unfortunately it does not let you add or remove breakpoints once you invoke the script, which is highly frustrating. In my opinion, this is the **only** thing that is preventing IPython from becoming a great debugging tool for Python.
### The best you can do for now:
A workaround is to place `ipsh()` a priori at the different locations where you want the Python interpreter to launch an IPython shell (i.e. a `breakpoint`). You can then "jump" between different pre-defined, hard-coded "breakpoints" with `Ctrl-D`, which would exit the current embedded IPython shell and stop again whenever the interpreter hits the next call to `ipsh()`.
If you go this route, one way to exit "debugging mode" and ignore all subsequent breakpoints, is to use `ipshell.dummy_mode = True` which will make Python ignore any subsequent instantiations of the `ipshell` object that we created above.
|
Running from inside Emacs' IPython shell with a breakpoint set via `pdb.set_trace()` should work.
Checked with python-mode.el, `M-x ipython RET`, etc.
| 14,808
|
43,149,637
|
I have a string in Python containing the contents of a large text file (over 1 MiB).
I need to split it into chunks.
Constraints:
* chunks can be split only at newline characters, and
* len(chunk) must be as big as possible but smaller than LIMIT (e.g. 100 KiB)
Lines longer than LIMIT can be omitted.
Any idea how to implement this nicely in Python?
Thank you in advance.
|
2017/03/31
|
[
"https://Stackoverflow.com/questions/43149637",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/705676/"
] |
Following the suggestion of Linuxios, you could use `rfind` to find the last newline within the limit and split at that point. If no newline character is found, the chunk is too large and can be dismissed.
```
chunks = []
not_chunked_text = input_text
while not_chunked_text:
    if len(not_chunked_text) <= LIMIT:
        chunks.append(not_chunked_text)
        break
    split_index = not_chunked_text.rfind("\n", 0, LIMIT)
    if split_index == -1:
        # The chunk is too big, so everything until the next newline is deleted
        try:
            not_chunked_text = not_chunked_text.split("\n", 1)[1]
        except IndexError:
            # No "\n" in not_chunked_text, i.e. the end of the input text was reached
            break
    else:
        chunks.append(not_chunked_text[:split_index+1])
        not_chunked_text = not_chunked_text[split_index+1:]
```
`rfind("\n", 0, LIMIT)` returns the highest index where a newline character was found within the bounds of your LIMIT.
`not_chunked_text[:split_index+1]` is needed so that the newline character is included in the chunk.
I interpreted the LIMIT as the biggest chunk length that is allowed. If a chunk with a length of exactly LIMIT should not be allowed, you have to add a `-1` after every `LIMIT` in this code.
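A quick usage sketch of the loop above, with made-up values:
```
LIMIT = 10
input_text = "aaa\nbbb\nccc\n"
# running the loop above yields:
# chunks == ["aaa\nbbb\n", "ccc\n"]
```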
|
Here is my not-so-pythonic solution:
```
def line_chunks(lines, chunk_limit):
chunks = []
chunk = []
chunk_len = 0
for line in lines:
if len(line) + chunk_len < chunk_limit:
chunk.append(line)
chunk_len += len(line)
else:
chunks.append(chunk)
chunk = [line]
chunk_len = len(line)
chunks.append(chunk)
return chunks
chunks = line_chunks(data.split('\n'), 150)
print '\n---new-chunk---\n'.join(['\n'.join(chunk) for chunk in chunks])
```
| 14,818
|
12,905,300
|
I am trying to develop an application that creates an image and fills it with color pixels using bilinear interpolation and then displays it. My code so far is the following:
```
#include <QtCore/QCoreApplication>
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <iostream>
#include <string>
#include <sys/stat.h>
using namespace cv;
int main()
{
Mat image;
image.create( 500, 500, CV_8UC3);
//upper left corner
Vec3b ul( 255, 0, 0 );
//upper right corner
Vec3b ur( 0, 255, 0 );
//bottom left corner
Vec3b bl( 0, 0, 255 );
//bottom right corner
Vec3b br( 255, 0, 255 );
//for(int y=0;y<image.rows; y++)
//for(int x=0;x<image.cols;x++)
// call function to add noise
namedWindow("Colored Pixels");
imshow("Colored Pixels", image);
// shows image for 5 seconds
waitKey(10000);
return 0;
}
```
When I run my program in debug mode I get the following two pop up windows:
```
Unexpected GDB Exit
The gdb process exited unexpectedly (code 0)
```
and
```
Executable Failed
During startup program exited with code 0xc0000138
```
my debugger log screen shows the following:
```
sStarting debugger 'GdbEngine' for ABI 'x86-windows-msys-pe-32bit'...
dStart parameters: 'pixelGradient' mode: 1
dABI: x86-windows-msys-pe-32bit
dExecutable: C:\Users\encore\Desktop\Lectures\Year 3\QtCreator\pixelGradient-build-desktop-Qt_4_8_1_for_Desktop_-_MinGW__Qt_SDK__Debug\debug\pixelGradient.exe [terminal]
dDirectory: C:\Users\encore\Desktop\Lectures\Year 3\QtCreator\pixelGradient-build-desktop-Qt_4_8_1_for_Desktop_-_MinGW__Qt_SDK__Debug
dDebugger: C:\QtSDK\pythongdb\python_2.7based\gdb-i686-pc-mingw32.exe
dProject: C:\Users\encore\Desktop\Lectures\Year 3\QtCreator\pixelGradient (built: C:\Users\encore\Desktop\Lectures\Year 3\QtCreator\pixelGradient-build-desktop-Qt_4_8_1_for_Desktop_-_MinGW__Qt_SDK__Debug)
dQt: C:\QtSDK\Desktop\Qt\4.8.1\mingw
dQML server: 127.0.0.1:3768
dSysroot:
dDebug Source Loaction:
dSymbol file:
dDumper libraries: C:\QtSDK\Desktop\Qt\4.8.1\mingw\\qtc-debugging-helper\ C:\QtSDK\QtCreator\qtc-debugging-helper\168937759\ C:\Users\encore\AppData\Local\Nokia\QtCreator\qtc-debugging-helper\168937759\
d
dDebugger settings:
dUseAlternatingRowColours: false (default: false)
dFontSizeFollowsEditor: false (default: false)
dUseMessageBoxForSignals: true (default: true)
dAutoQuit: false (default: false)
dLogTimeStamps: false (default: false)
dVerboseLog: false (default: false)
dCloseBuffersOnExit: false (default: false)
dSwitchModeOnExit: false (default: false)
dUseDebuggingHelper: true (default: true)
dUseCodeModel: true (default: true)
dShowThreadNames: false (default: false)
dUseToolTips: false (default: false)
dUseToolTipsInLocalsView: false (default: false)
dUseToolTipsInBreakpointsView: false (default: false)
dUseAddressInBreakpointsView: false (default: false)
dUseAddressInStackView: false (default: false)
dRegisterForPostMortem: false (default: false)
dLoadGdbInit: true (default: true)
dScriptFile: (default: )
dWatchdogTimeout: 20 (default: 20)
dAutoEnrichParameters: false (default: false)
dTargetAsync: false (default: false)
dMaximalStackDepth: 20 (default: 20)
dAlwaysAdjustStackColumnWidths: false (default: false)
dShowStandardNamespace: true (default: true)
dShowQtNamespace: true (default: true)
dSortStructMembers: true (default: true)
dAutoDerefPointers: true (default: true)
dAlwaysAdjustLocalsColumnWidths: false (default: false)
dListSourceFiles: false (default: false)
dSkipKnownFrames: false (default: false)
dEnableReverseDebugging: false (default: false)
dAllPluginBreakpoints: true (default: true)
dSelectedPluginBreakpoints: false (default: false)
dAdjustBreakpointLocations: true (default: true)
dAlwaysAdjustBreakpointsColumnWidths: false (default: false)
dNoPluginBreakpoints: false (default: false)
dSelectedPluginBreakpointsPattern: .* (default: .*)
dBreakOnThrow: false (default: false)
dBreakOnCatch: false (default: false)
dBreakOnWarning: false (default: false)
dBreakOnFatal: false (default: false)
dAlwaysAdjustRegistersColumnWidths: false (default: false)
dAlwaysAdjustSnapshotsColumnWidths: false (default: false)
dAlwaysAdjustThreadsColumnWidths: false (default: false)
dAlwaysAdjustModulesColumnWidths: false (default: false)
dState changed from DebuggerNotReady(0) to EngineSetupRequested(1).
dQUEUE: SETUP ENGINE
dCALL: SETUP ENGINE
dTRYING TO START ADAPTER
dENABLING TEST CASE: 0
dSTARTING C:/QtSDK/pythongdb/python_2.7based/gdb-i686-pc-mingw32.exe -i mi
dGDB STARTED, INITIALIZING IT
<1show version
<2-list-features
<3set print object on
<4set breakpoint pending on
<5set print elements 10000
<6set overload-resolution off
<7handle SIGSEGV nopass stop print
<8set unwindonsignal on
<9pwd
<10set width 0
<11set height 0
<12set auto-solib-add on
<13-interpreter-exec console "maintenance set internal-warning quit no"
<14-interpreter-exec console "maintenance set internal-error quit no"
<15-interpreter-exec console "disassemble 0 0"
<16-interpreter-exec console "python execfile('C:/QtSDK/QtCreator/share/qtcreator/dumper/bridge.py')"
<17-interpreter-exec console "python execfile('C:/QtSDK/QtCreator/share/qtcreator/dumper/dumper.py')"
<18-interpreter-exec console "python execfile('C:/QtSDK/QtCreator/share/qtcreator/dumper/qttypes.py')"
<19-interpreter-exec console "bbsetup"
dADAPTER SUCCESSFULLY STARTED
dNOTE: ENGINE SETUP OK
dState changed from EngineSetupRequested(1) to EngineSetupOk(3).
dQUEUE: SETUP INFERIOR
dState changed from EngineSetupOk(3) to InferiorSetupRequested(4).
dQUEUE: SETUP INFERIOR
dCALL: SETUP INFERIOR
sSetting up inferior...
<20set substitute-path C:/iwmake/build_mingw_opensource C:/QtSDK/Desktop/Qt/4.8.1/mingw
<21set substitute-path C:/ndk_buildrepos/qt-desktop/src C:/QtSDK/Desktop/Qt/4.8.1/mingw
<22set substitute-path C:/qt-greenhouse/Trolltech/Code_less_create_more/Trolltech/Code_less_create_more/Troll/4.6/qt C:/QtSDK/Desktop/Qt/4.8.1/mingw
Attaching to 1260 (6412)
dTaking notice of pid 1260
<23attach 1260
>=thread-group-added,id="i1"
>~"GNU gdb (GDB) 7.2\n"
>~"Copyright (C) 2010 Free Software Foundation, Inc.\n"
>~"License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law. Type \"show copying\"\nand \"show warranty\" for details.\n"
>~"This GDB was configured as \"i686-pc-mingw32\".\nFor bug reporting instructions, please see:\n"
>~"<http://www.gnu.org/software/gdb/bugs/>.\n"
>&"show version\n"
>~"GNU gdb (GDB) 7.2\n"
>~"Copyright (C) 2010 Free Software Foundation, Inc.\n"
>~"License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>\nThis is free software: you are free to change and redistribute it.\nThere is NO WARRANTY, to the extent permitted by law. Type \"show copying\"\nand \"show warranty\" for details.\n"
>~"This GDB was configured as \"i686-pc-mingw32\".\nFor bug reporting instructions, please see:\n"
>~"<http://www.gnu.org/software/gdb/bugs/>.\n"
>1^done
dPARSING VERSION: 1^done
d
dSUPPORTED GDB VERSION GNU gdb (GDB) 7.2
dCopyright (C) 2010 Free Software Foundation, Inc.
dLicense GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
dThis is free software: you are free to change and redistribute it.
dThere is NO WARRANTY, to the extent permitted by law. Type "show copying"
dand "show warranty" for details.
dThis GDB was configured as "i686-pc-mingw32".
dFor bug reporting instructions, please see:
d<http://www.gnu.org/software/gdb/bugs/>.
dGNU gdb (GDB) 7.2
dCopyright (C) 2010 Free Software Foundation, Inc.
dLicense GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
dThis is free software: you are free to change and redistribute it.
dThere is NO WARRANTY, to the extent permitted by law. Type "show copying"
dand "show warranty" for details.
dThis GDB was configured as "i686-pc-mingw32".
dFor bug reporting instructions, please see:
d<http://www.gnu.org/software/gdb/bugs/>.
d
dUSING GDB VERSION: 70200, BUILD: 2010
>2^done,features=["frozen-varobjs","pending-breakpoints","thread-info","python"]
dFEATURES: 2^done,data={features=["frozen-varobjs","pending-breakpoints","thread-info","python"]}
d
>&"set print object on\n"
>3^done
>&"set breakpoint pending on\n"
>4^done
>&"set print elements 10000\n"
>5^done
>&"set overload-resolution off\n"
>6^done
>&"handle SIGSEGV nopass stop print\n"
>~"Signal Stop\tPrint\tPass to program\tDescription\n"
>~"SIGSEGV Yes\tYes\tNo\t\tSegmentation fault\n"
>7^done
>&"set unwindonsignal on\n"
>8^done
>&"pwd\n"
>~"Working directory C:\\Users\\encore.\n"
>9^done
>&"set width 0\n"
>10^done
>&"set height 0\n"
>11^done
>&"set auto-solib-add on\n"
>12^done
>13^done
>14^done
>&"A syntax error in expression, near `0'.\n"
>15^error,msg="A syntax error in expression, near `0'."
>16^done
>17^done
>18^done
>~"dumpers=[{type=\"QLinkedList\",formats=\"\"},{type=\"QSize\",formats=\"\"},{type=\"QFileInfo\",formats=\"\"},{type=\"QAbstractItemModel\",formats=\"\"},{type=\"std__stack\",formats=\"\"},{type=\"QTextDocument\",formats=\"\"},{type=\"QTJSC__JSValue\",formats=\"\"},{type=\"__gnu_cxx__hash_set\",formats=\"\"},{type=\"QStringList\",formats=\"\"},{type=\"QRegion\",formats=\"\"},{type=\"std__wstring\",formats=\"\"},{type=\"QString\",formats=\"Inline,Separate Window\",editable=\"true\"},{type=\"QTextCodec\",formats=\"\"},{type=\"QBasicAtomicInt\",formats=\"\"},{type=\"QScriptValue\",formats=\"\"},{type=\"QTime\",formats=\"\"},{type=\"QSharedData\",formats=\"\"},{type=\"std__vector\",formats=\"\",editable=\"true\"},{type=\"QRegExp\",formats=\"\"},{type=\"QTextCursor\",formats=\"\"},{type=\"QxXmlAttributes\",formats=\"\"},{type=\"QDateTime\",formats=\"\"},{type=\"QList\",formats=\"\"},{type=\"QStandardItem\",formats=\"\"},{type=\"std__deque\",formats=\"\"},{type=\"QFixed\",formats=\"\"},{type=\"QHash\",formats=\"\"},{type=\"QSharedPointer\",formats=\"\"},{type=\"QUrl\",formats=\"\"},{type=\"std__set\",formats=\"\"},{type=\"std__list\",formats=\"\"},{type=\"std__basic_string\",formats=\"\"},{type=\"QPoint\",formats=\"\"},{type=\"QHostAddress\",formats=\"\"},{type=\"QStack\",formats=\"\"},{type=\"QScopedPointer\",formats=\"\"},{type=\"QRectF\",formats=\"\"},{type=\"QMultiMap\",formats=\"\"},{type=\"QMapNode\",formats=\"\"},{type=\"QModelIndex\",formats=\"Normal,Enhanced\"},{type=\"QLocale\",formats=\"\"},{type=\"QSharedDataPointer\",formats=\"\"},{type=\"QVariant\",formats=\"\"},{type=\"string\",formats=\"\",editable=\"true\"},{type=\"QBasicAtomicPointer\",formats=\"\"},{type=\"QVector\",formats=\"\",editable=\"true\"},{type=\"QDate\",formats=\"\"},{type=\"QFile\",formats=\"\"},{type=\"QAtomicInt\",formats=\"\"},{type=\"TBuf\",formats=\"\"},{type=\"QWeakPointer\",formats=\"\"},{type=\"QSizeF\",formats=\"\"},{type=\"__m128\",formats=\"As Floats,As Doubles\"},{type=\"boost__optional\",formats=\"\"},{type=\"wstring\",formats=\"\"},{type=\"QPointF\",formats=\"\"},{type=\"TLitC\",formats=\"\"},{type=\"QRect\",formats=\"\"},{type=\"QByteArray\",formats=\"\"},{type=\"QMap\",formats=\"\"},{type=\"boost__shared_ptr\",formats=\"\"},{type=\"QChar\",formats=\"\"},{type=\"QDir\",formats=\"\"},{type=\"QPixmap\",formats=\"\"},{type=\"QFlags\",formats=\"\"},{type=\"std__map\",formats=\"\"},{type=\"QHashNode\",formats=\"\"},{type=\"QTemporaryFile\",formats=\"\"},{type=\"QObject\",formats=\"\"},{type=\"Eigen__Matrix\",formats=\"\"},{type=\"std__string\",formats=\"\",editable=\"true\"},{type=\"QImage\",formats=\"Normal,Displayed\"},{type=\"QSet\",formats=\"\"},],hasInferiorThreadList=\"1\"\n"
>19^done
>&"set substitute-path C:/iwmake/build_mingw_opensource C:/QtSDK/Desktop/Qt/4.8.1/mingw\n"
>20^done
>&"set substitute-path C:/ndk_buildrepos/qt-desktop/src C:/QtSDK/Desktop/Qt/4.8.1/mingw\n"
>21^done
>&"set substitute-path C:/qt-greenhouse/Trolltech/Code_less_create_more/Trolltech/Code_less_create_more/Troll/4.6/qt C:/QtSDK/Desktop/Qt/4.8.1/mingw\n"
>22^done
>&"attach 1260\n"
>~"Attaching to process 1260\n"
>=thread-group-started,id="i1",pid="1260"
sThread group i1 created
>=thread-created,id="1",group-id="i1"
sThread 1 created
>~"[New Thread 1260.0x190c]\n"
s[New Thread 1260.0x190c]
>23^running
Inferior attached, thread 6412 resumed
sSetting breakpoints...
dSetting breakpoints...
<24maint print msymbols C:/Users/encore/AppData/Local/Temp/gdb_ns_.Dq6408
>*running,thread-id="all"
>=thread-created,id="2",group-id="i1"
sThread 2 created
>~"[New Thread 1260.0xf9c]\n"
s[New Thread 1260.0xf9c]
>*running,thread-id="all"
>=thread-exited,id="2",group-id="i1"
sThread 2 in group i1 exited
>=thread-exited,id="1",group-id="i1"
sThread 1 in group i1 exited
>=thread-group-exited,id="i1"
sThread group i1 exited
>&"During startup program exited with code 0xc0000138.\n"
>23^error,msg="During startup program exited with code 0xc0000138."
dCOOKIE FOR TOKEN 23 ALREADY EATEN (InferiorSetupRequested). TWO RESPONSES FOR ONE COMMAND?
dNOTE: INFERIOR EXITED
dState changed from InferiorSetupRequested(4) to InferiorExitOk(16).
dState changed from InferiorExitOk(16) to InferiorShutdownOk(19).
dState changed from InferiorShutdownOk(19) to EngineShutdownRequested(20).
dQUEUE: SHUTDOWN ENGINE
sExecutable failed: During startup program exited with code 0xc0000138.
dCALL: SHUTDOWN ENGINE
dINITIATE GDBENGINE SHUTDOWN IN STATE 0, PROC: 2
<25-gdb-exit
>&"maint print msymbols C:/Users/encore/AppData/Local/Temp/gdb_ns_.Dq6408\n"
>24^done
dFOUND NON-NAMESPACED QT
dNOTE: INFERIOR SETUP OK
dState changed from EngineShutdownRequested(20) to InferiorSetupOk(6).
dState changed from InferiorSetupOk(6) to EngineRunRequested(7).
dQUEUE: RUN ENGINE
dCALL: RUN ENGINE
dNOTE: ENGINE RUN AND INFERIOR STOP OK
dState changed from EngineRunRequested(7) to InferiorStopOk(14).
dNOTE: INFERIOR RUN REQUESTED
dState changed from InferiorStopOk(14) to InferiorRunRequested(10).
sRunning requested...
<26-exec-continue
>25^exit
dGDB CLAIMS EXIT; WAITING
dGDB PROCESS FINISHED, status 0, code 0
dNOTE: ENGINE ILL ******
dState changed BY FORCE from InferiorRunRequested(10) to InferiorStopRequested(13).
dATTEMPT TO INTERRUPT INFERIOR
sStop requested...
dTRYING TO INTERRUPT INFERIOR
dCANNOT INTERRUPT 1260
```
Do you have any idea how I can handle this?
|
2012/10/15
|
[
"https://Stackoverflow.com/questions/12905300",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1178770/"
] |
The following code is almost the same as William answered, but without using 'for' loop statement.
```
subdirs := A B C
.PHONY: all $(subdirs)
all: $(subdirs)
$(subdirs):
$(MAKE) -C $@
```
|
I'm rusty on makefiles and know for sure the following is not the best answer. But it might help for now...
```
TARGETS = A B C
.PHONY: all
all:
@for subdir in $(TARGETS); do \
$(MAKE) -C $$subdir all || exit 1; \
done
```
Note that the indents must use a TAB, not spaces
| 14,819
|
23,332,259
|
I am trying to copy a sheet, `default_sheet`, into a new sheet `new_sheet` in the same workbook.
I did manage to create a new sheet and to copy the values from the default sheet. How can I also copy the style of each cell into the `new_sheet` cells?
```python
new_sheet = workbook.create_sheet()
new_sheet.title = sheetName
default_sheet = workbook.get_sheet_by_name('default')
new_sheet = workbook.get_sheet_by_name(sheetName)
for row in default_sheet.rows:
col_idx = float(default_sheet.get_highest_column())
starting_col = chr(65 + int(col_idx))
for row in default_sheet.rows:
for cell in row:
new_sheet[cell.get_coordinate()] = cell.value
<copy also style of each cell>
```
I am at the moment using openpyxl 1.8.2, but I have in mind to switch to 1.8.5.
One solution is with copy:
```python
from copy import copy, deepcopy
new_sheet._styles[cell.get_coordinate()] = copy(
default_sheet._styles[cell.get_coordinate()])
```
|
2014/04/28
|
[
"https://Stackoverflow.com/questions/23332259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3458191/"
] |
As of openpyxl 2.5.4, python 3.4: (subtle changes over the older version below)
```python
new_sheet = workbook.create_sheet(sheetName)
default_sheet = workbook['default']
from copy import copy
for row in default_sheet.rows:
for cell in row:
new_cell = new_sheet.cell(row=cell.row, column=cell.col_idx,
value= cell.value)
if cell.has_style:
new_cell.font = copy(cell.font)
new_cell.border = copy(cell.border)
new_cell.fill = copy(cell.fill)
new_cell.number_format = copy(cell.number_format)
new_cell.protection = copy(cell.protection)
new_cell.alignment = copy(cell.alignment)
```
For openpyxl 2.1
```python
new_sheet = workbook.create_sheet(sheetName)
default_sheet = workbook['default']
for row in default_sheet.rows:
for cell in row:
        new_cell = new_sheet.cell(row=cell.row,
                column=cell.col_idx, value=cell.value)
if cell.has_style:
new_cell.font = cell.font
new_cell.border = cell.border
new_cell.fill = cell.fill
new_cell.number_format = cell.number_format
new_cell.protection = cell.protection
new_cell.alignment = cell.alignment
```
|
Maybe this is the most convenient way for most people.
```
from openpyxl import load_workbook
from openpyxl import Workbook
read_from = load_workbook('path/to/file.xlsx')
read_sheet = read_from.active
write_to = Workbook()
write_sheet = write_to.active
write_sheet['A1'] = read_sheet['A1'].value
write_sheet['A1'].style = read_sheet['A1'].style
write_to.save('save/to/file.xlsx')
```
| 14,820
|
19,322,350
|
I am trying to use f2py to interface my python programs with my Fortran modules.
I am on a Win7 platform.
I use latest Anaconda 64 (1.7) as a Python+NumPy stack.
My Fortran compiler is the latest Intel Fortran compiler 64 (version 14.0.0.103 Build 20130728).
I have been experiencing a number of issues when executing `f2py -c -m PyModule FortranModule.f90 --fcompiler=intelvem`
The last one, which I can't seem to sort out, is that it looks like the sequence of flags f2py/distutils passes to the compiler does not match what ifort expects.
I get a series of warning messages regarding unknown options when ifort is invoked.
```
ifort: command line warning #10006: ignoring unknown option '/LC:\Anaconda\libs'
ifort: command line warning #10006: ignoring unknown option'/LC:\Anaconda\PCbuild\amd64'
ifort: command line warning #10006: ignoring unknown option '/lpython27'
```
I suspect this is related to the errors I get from the linker at the end
```
error LNK2019: unresolved external symbol __imp_PyImport_ImportModule referenced in function _import_array
error LNK2019... and so forth (there are about 30-40 lines like that, with different python modules missing)
```
and it concludes with a plain
```
fatal error LNK1120: 42 unresolved externals
```
My guess is that this is because the /link flag is missing in the sequence of options. Because of this, the /l /L options are not passed to the linker and the compiler believes these are addressed to it.
The ifort command generated by f2py looks like this:
```
ifort.exe -dll -dll Pymodule.o fortranobject.o FortranModule.o module-f2pywrappers2.o -LC:\Anaconda\libs -LC:\Anaconda\PCbuild\amd64 -lPython27
```
I have no idea why the "-dll" is repeated twice (I had to change that flag from an original "-shared").
Now, I have tried to look into the f2py and distutils codes but haven't figured out how to bodge an additional /link in the command output. I haven't even been able to locate where this output is generated.
If anyone has encountered this problem in the past and/or may have some suggestions, I would very much appreciate it.
Thank you for your time
|
2013/10/11
|
[
"https://Stackoverflow.com/questions/19322350",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2866568/"
] |
I encountered similar problems with my own code some time ago. If I understand the comments correctly you already used the approach that worked for me, so this is just meant as clarification and summary for all those that struggle with f2py and dependencies:
f2py seems to have problems resolving dependencies on external source files. If the external dependencies get passed to f2py as already compiled object files though, the linking works fine and the python library gets built without problems.
The easiest solution therefore seems to be:
1. compile all dependencies to object files (\*.o) using your preferred compiler and compiler settings
2. pass all object files to f2py, together with the **source file** of your main subroutine/ function/ module/ ...
3. use generated python library as expected
A simple python script could look like this (pycompile.py):
```
#!python.exe
# -*- coding: UTF-8 -*-
import os
import platform
'''Uses f2py to compile needed library'''
# build command-strings
# command for compiling *.o and *.mod files
fortran_exe = "gfortran "
# fortran compiler settings
fortran_flags = "<some_gfortran_flags> "
# add path to source code
fortran_source = ("./relative/path/to/source_1.f90 "
"C:/absolut/path/to/source_2.f90 "
"...")
# assemble fortran command
fortran_cmd = fortran_exe + fortran_flags + fortran_source
# command for compiling main source file using f2py
f2py_exe = "f2py -c "
# special compiler-options for Linux/ Windows
if (platform.system() == 'Linux'):
f2py_flags = "--compiler=unix --fcompiler=gnu95 "
elif (platform.system() == 'Windows'):
f2py_flags = "--compiler=mingw32 --fcompiler=gnu95 "
# add path to source code/ dependencies
f2py_source = ("-m for_to_py_lib "
"./path/to/main_source.f90 "
"source_1.o "
"source_2.o "
"... "
)
# assemble f2py command
f2py_cmd = f2py_exe + f2py_flags + f2py_source
# compile .o and .mod files
print "compiling object- and module-files..."
print
print fortran_cmd
os.system(fortran_cmd)
# compile main_source.f90 with f2py
print "================================================================"
print "start f2py..."
print
print f2py_cmd
os.system(f2py_cmd)
```
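Once the build succeeds, using the generated library is plain Python; a hypothetical sketch (the module name comes from the `-m for_to_py_lib` flag above, the routine name is a placeholder):
```
import for_to_py_lib                 # name passed to f2py via -m
print(for_to_py_lib.__doc__)         # f2py-generated modules list their wrappers here
# result = for_to_py_lib.some_routine(x, y)   # placeholder routine name
```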
---
A more flexible solution for large projects could be provided via Makefile, as discussed by [@bdforbes](https://stackoverflow.com/users/336001/bdforbes) in the comments ([for reference](http://pastebin.com/ChSxLzSb)) or a custom CMake user command in combination with the above script:
```
###############################################################################
# General project properties
################################################################################
# Set Project Name
project (for_to_py_lib)
# Set Version Number
set (for_to_py_lib_VERSION_MAJOR 1)
set (for_to_py_lib_VERSION_MINOR 0)
# save folder locations for later use/ scripting (see pycompile.py)
# relative to SOURCE folder
set(source_root ${CMAKE_CURRENT_LIST_DIR}/SOURCE) # save top level source dir for later use
set(lib_root ${CMAKE_CURRENT_LIST_DIR}/LIBRARIES) # save top level lib dir for later use
# relative to BUILD folder
set(build_root ${CMAKE_CURRENT_BINARY_DIR}) # save top level build dir for later use
###
### Fortran to Python library
###
find_package(PythonInterp)
if (PYTHONINTERP_FOUND)
# copy python compile skript file to build folder and substitute CMake variables
configure_file(${source_root}/pycompile.py ${build_root}/pycompile.py @ONLY)
# define for_to_py library ending
if (UNIX)
set(CMAKE_PYTHON_LIBRARY_SUFFIX .so)
elseif (WIN32)
set(CMAKE_PYTHON_LIBRARY_SUFFIX .pyd)
endif()
# add custom target to ALL, building the for_to_py python library (using f2py)
add_custom_target(for_to_py ALL
DEPENDS ${build_root}/for_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX})
# build command for python library (execute python script pycompile.py containing the actual build commands)
add_custom_command(OUTPUT ${build_root}/for_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX}
COMMAND ${PYTHON_EXECUTABLE} ${build_root}/pycompile.py
WORKING_DIRECTORY ${build_root}
DEPENDS ${build_root}/pycompile.py
${source_root}/path/to/source_1.f90
${source_root}/path/to/source_2.f90
${source_root}/INOUT/s4binout.f90
COMMENT "Generating fortran to python library")
# post build command for python library (copying of generated files)
add_custom_command(TARGET for_to_py
POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different
${build_root}/s4_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX}
${lib_root}/for_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX}
COMMENT "\
***************************************************************************************************\n\
copy of python library for_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX} placed in ${lib_root}/for_to_py${CMAKE_PYTHON_LIBRARY_SUFFIX} \n\
***************************************************************************************************"
)
endif (PYTHONINTERP_FOUND)
```
with modified pycompile:
```
#!python.exe
# -*- coding: UTF-8 -*-
...
fortran_source = ("@source_root@/source_1.f90 "
"@source_root@/source_2.f90 "
"...")
...
# add path to source code/ dependencies
f2py_source = ("-m for_to_py_lib "
"@build_root@/for_to_py.f90 "
"source_1.o "
"source_2.o "
"... "
)
...
# compile .o and .mod files
...
```
|
The library path is specified using /LIBPATH not /L
| 14,823
|
49,739,245
|
I have spent a good amount of time trying to determine what is going wrong exactly, with the code I am using to convert pdf to docx (and doc to docx) using LibreOffice.
I have used both the windows run interface to test-run some of the code I have found to be relevant, and have tried it from python as well, neither of which works.
I have LibreOffice v6.0.2 installed on windows.
I have been using variations of this code to attempt to convert some pdf files to docx of which the specific pdf file is not really relevant:
```
import subprocess
lowriter='C://Program Files/LibreOffice/program/swriter.exe'
subprocess.run('{} --invisible --convert-to docx --outdir "{}" "{}"'
.format(lowriter,'dir',
'filepath.pdf',),shell=True)
```
I have tried code, again, in both the run interface on the windows os, and through python using the above code, with no luck. I have tried without the outdir as well, just in case I was writing that incorrectly, but always get a return code of 1:
```
CompletedProcess(args='C://Program Files/LibreOffice/program/swriter.exe
--invisible --convert-to docx --outdir "{dir}"
{filepath.pdf}"', returncode=1)
```
The dir and filepath.pdf are placeholders I have put in.
I have a similar problem with the doc to docx conversion.
|
2018/04/09
|
[
"https://Stackoverflow.com/questions/49739245",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8492478/"
] |
There are a number of problems here. You should first get the `--convert-to` call to work from the command line as @CristiFati commented, and then implement in python.
Here is the code that works on my system. No `//` in the path, and quotes are needed. Also, the folder is `LibreOffice 5` on my system.
```
import subprocess
lowriter = 'C:/Program Files (x86)/LibreOffice 5/program/swriter.exe'
subprocess.run(
'"{}" --convert-to docx --outdir "{}" "{}"'
.format(lowriter,'dir', 'filepath.doc',), shell=True)
```
Finally, it looks like converting from PDF to DOCX is not supported. LibreOffice Draw can open a PDF file and save as ODG format.
**EDIT**:
Here is working code to convert from PDF. I upgraded to LO 6, so the version number ("LibreOffice 5") is no longer required in the path.
```
import subprocess
loffice = 'C:/Program Files/LibreOffice/program/soffice.exe'
subprocess.run(
'"{}" --convert-to odg --outdir "{}" "{}"'
.format(loffice,'dir', 'filepath.pdf',), shell=True)
```
[](https://i.stack.imgur.com/nlIhm.jpg)
|
Install the pdf2docx package in python:
```
from pdf2docx import Converter

source = r'C:\Users\sd\Desktop\New Project/Document2.pdf'
destination = r'C:\Users\sd\Desktop\New Project/sample_6.docx'

def Converter_pdf2docx(source, destination):
    pdf_file = source
    docx_file = destination
    cv = Converter(pdf_file)
    cv.convert(docx_file, start=0, end=None)  # convert all pages
    cv.close()

Converter_pdf2docx(source, destination)
```
| 14,824
|
63,648,764
|
I have a program folder for which paths are required:
```
export RBT_ROOT=/path/to/installation/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$RBT_ROOT/lib
export PATH=$PATH:$RBT_ROOT/bin
```
Then the command is run:
```
rbcavity -was -d -r <PRMFILE>
```
rbcavity - is an exe program contained in the program's bin folder
PRMFILE - is the parameter file contained in the current path (the working folder, which is not inside the program folder)
This works from the command line, but not from python. How can I run this from a python script (3.5)? I tried subprocess.run but it doesn't find the command rbcavity... I'm new to linux and don't quite know how it works.
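For context, here is a sketch of the kind of call I am attempting, replicating the exports above via the `env` argument (the paths and the `.prm` filename are placeholders):
```
import os
import subprocess

env = os.environ.copy()
rbt_root = "/path/to/installation"   # placeholder install path
env["RBT_ROOT"] = rbt_root
env["LD_LIBRARY_PATH"] = env.get("LD_LIBRARY_PATH", "") + ":" + rbt_root + "/lib"
env["PATH"] = env["PATH"] + ":" + rbt_root + "/bin"

# rbcavity should now be found via the extended PATH
subprocess.run(["rbcavity", "-was", "-d", "-r", "file.prm"], env=env, check=True)
```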
|
2020/08/29
|
[
"https://Stackoverflow.com/questions/63648764",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12399859/"
] |
The issue is that the website filters out requests without a proper `User-Agent`, so just use a random one from MDN:
```py
requests.get("https://apis.digital.gob.cl/fl/feriados/2020", headers={
"User-Agent" : "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36"
})
```
|
It might be due to idle timeout. Overriding default socket options can help
```
import socket
from urllib3.connection import HTTPConnection
HTTPConnection.default_socket_options = (
HTTPConnection.default_socket_options + [
(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),
(socket.SOL_TCP, socket.TCP_KEEPIDLE, 45),
(socket.SOL_TCP, socket.TCP_KEEPINTVL, 10),
(socket.SOL_TCP, socket.TCP_KEEPCNT, 6)
]
)
```
| 14,825
|
28,999,913
|
Is there a 'correct' or preferred manner for sending data over a web socket connection?
In my case, I am sending the information from a C# application to a python (tornado) web server, and I am simply sending a string consisting of several elements separated by commas. In python, I use rudimentary techniques to split the string and then structure the elements into an object.
e.g:
```
'foo,0,bar,1'
```
becomes:
```
object = {
'foo': 0,
'bar': 1
}
```
In the other direction, I am sending the information as a JSON string which I then deserialise using Json.NET
I imagine there is no strictly right or wrong way of doing this, but are there significant advantages and disadvantages that I should be thinking of? And, somewhat related, is there a consensus for using string vs. binary formats?
|
2015/03/12
|
[
"https://Stackoverflow.com/questions/28999913",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1190200/"
] |
Writing a *custom* encoding (e.g., as "k,v,..") is *different* from 'using binary'.
*It is still text*, just a rigid under-defined one-off hand-rolled format that must be manually replicated. (What happens if a key or value contains a comma? What happens if the data needs to contain nested objects? How can null be interpreted differently than '' or 'null'?)
While JSON is definitely the most *ubiquitous* format for WebSockets one shouldn't (for interchange purposes) write JSON by hand - *one uses an existing serialization library on both ends*. (There are many reasons why JSON is ubiquitous which are covered in other answers - this doesn't mean it is always the 'best' format, however.)
To this end a *binary* serializer can also be used (BSON being a trivial example as it is effectively JSON-like in structure and operation). Just replace `JSON.parse` with `FORMATX.parse` as appropriate.
The only requirements are then:
* There is a suitable serializer/deserializer for *all* the clients and servers. JSON works well here because it is so popular and there is no shortage of implementations.
There *are* various binary serialization libraries with *both* Python and C# libraries, but it will require finding a 'happy intersection'.
* The serialization format can represent the data. JSON usually works sufficiently and it has a *very nice 1-1 correspondence with basic object graphs* and simple values. It is also inherently schema-less.
Some formats are better at certain tasks and have different characteristics, features, or tool-chains. However, most concepts (and arguably most DTOs) can be mapped onto JSON easily, which makes it a good 'default' choice.
The other differences between different *kinds* of binary and text serializations are mostly dressing - but if you'd like to start talking about schema vs. schema-less, extensibility, external tooling, metadata, non-compressed encoded sizes (or size after transport compression), compliance with a specific existing protocol, etc..
.. but the point to take away is **don't create a 'new' one-off format**. Unless of course, you just like making wheels or there is a *very specific* use-case to fit.
|
First advice would be to use the same format for both ways, not plain text in one direction and JSON in the other.
I personally think `{'foo':0,'bar':1}` is better than `foo,0,bar,1` because everybody understands JSON but for your custom format they might not without some explanations. The idea is you are inventing a data interchange format when JSON is already one and @jfriend00 is right, pretty much every language now understands JSON, [Python included](https://docs.python.org/2/library/json.html).
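For illustration, a minimal round-trip sketch on the Python side (Json.NET does the same on the C# side):
```
import json

# sending: serialize the object to a JSON string
payload = json.dumps({'foo': 0, 'bar': 1})

# receiving: parse the JSON string back into an object
data = json.loads(payload)   # {'foo': 0, 'bar': 1}
```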
Regarding text vs binary, there isn't any consensus. As @user2864740 mentions in the comments to my answer, as long as the two sides understand each other, it doesn't really matter. This only becomes relevant if one of the sides has a preference for a format (consider for example opening the connection from the browser, using JavaScript - for that people might prefer JSON instead of binary).
My advice is to go with something simple as JSON and design your app so that you can change the wire format by swapping in another implementation without affecting the logic of your application.
| 14,826
|
33,006,474
|
In the cloud, I have multiple instances, each running a container with a different random name, e.g.:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5dc97950d924 aws_beanstalk/my-app:latest "/bin/sh -c 'python 3 hours ago Up 3 hours 80/tcp, 5000/tcp, 8080/tcp jolly_galileo
```
To enter them, I type:
```
sudo docker exec -it jolly_galileo /bin/bash
```
Is there a command or can you write a bash script to automatically execute the exec to enter the correct container?
|
2015/10/08
|
[
"https://Stackoverflow.com/questions/33006474",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/478354/"
] |
"the correct container"?
To determine what is the "correct" container, your bash script would still need either the id or the name of that container.
For example, I [have a function in my `.bashrc`](https://github.com/VonC/b2d/blob/f9890cb6e1ee14842b8be2dd66a754550db793a9/.bash_aliases#L54):
```
deb() { docker exec -u git -it $1 bash; }
```
That way, I would type:
```
deb jolly_galileo
```
(it uses the account git, but you don't have to)
|
Here's my final solution. It edits the instance's .bashrc if it hasn't been edited yet, prints out docker ps, defines the dock function, and enters the container. A user can then type "exit" if they want to access the raw instances, and "exit" again to quit ssh.
```
commands:
bashrc:
command: if ! grep -Fxq "sudo docker ps" /home/ec2-user/.bashrc; then echo -e "dock() { sudo docker exec -it $(sudo docker ps -lq) bash; } \nsudo docker ps\ndock" >> /home/ec2-user/.bashrc; fi
```
| 14,827
|
32,488,029
|
I'm using django-twilio to try and respond to text messages coming from a Twilio account. As it recommends using the `twilio_view` decorator to improve upon `@csrf_exempt`, I'm using it. Problem is, it doesn't work. No matter what I try, I always get 403.
Things I've done:
1. Twilio test account. Added TWILIO\_ACCOUNT\_SID and TWILIO\_AUTH\_TOKEN in settings, which match the test account values.
2. Double-checked that the url set in twilio exists and is not HTTPS. It is set in the properties of the twilio SMS phone number settings.
3. Upgraded to South 1.0, run migrations.
4. Turn off DJANGO\_TWILIO\_FORGERY\_PROTECTION, and it works.
5. Number the text is coming from is verified in twilio.
6. Production settings, so debug=False
Running Django 1.4.20, Twilio 4.5, django-twilio 0.8, python 2.7.
View is stupid simple:
```
from django_twilio.decorators import twilio_view
from twilio.twiml import Response
@twilio_view
def say_hello(request):
r = Response()
r.message('hello there')
return r
```
If you replace the twilio decorator with the @csrf\_exempt decorator, it all works fine. I get the response back to my phone.
In the decorator it looks for the `HTTP_X_TWILIO_SIGNATURE` value in the request. Looking in my twilio alerts I can't see that value in the request. I don't know if it will show up there or not.
|
2015/09/09
|
[
"https://Stackoverflow.com/questions/32488029",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1907157/"
] |
Twilio team member here.
Are you importing twiml? Try making the first line of your `say_hello` function:
```
r = twiml.Response()
```
|
I don't see anything obviously wrong in what you are doing, so I would suggest removing the `@twilio_view` decorator and logging the X-Twilio-Signature header in your view to see what it is and manually checking to see if it's correct. (Basically, redoing the logic of the `@twilio_view` decorator in your view, just to see what's going on).
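A minimal sketch of that logging step, assuming standard Django request handling (the header, when present, appears in `request.META`):
```
import logging
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def say_hello(request):
    # log the raw header before re-doing the decorator's validation
    logging.warning("X-Twilio-Signature: %r",
                    request.META.get('HTTP_X_TWILIO_SIGNATURE'))
    # ... then build and return the TwiML response as before ...
```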
This [link](https://www.twilio.com/docs/security#validating-requests) kinda suggests that you should use a SSL URL before Twilio will add the X-Twilio-Signature to its request, but I'm not really sure if that's true.
| 14,829
|
60,286,928
|
I am learning Python (python3) and am working with a text file containing semi-JSON format. It is not full JSON because the "keys" are not surrounded by quotes. I am looking to programmatically add quotes around all of these key names. My plan was to "open" this file and parse each "line" as an individual string.
**From:**
>
> key\_name: { another\_key: "somevalue", second\_key: "anotherval" }
>
>
>
**Into:**
>
> "key\_name": { "another\_key": "somevalue", "second\_key": "anotherval" }
>
>
>
I'm sure regex would be the ideal way to do this - for the sake of learning I have been using arrays...
I have some code that works partially, but not all of the keys get quotes placed around them.
```
str = "this is: a string: testing testing: blah blah more: test: hereis: test:"
cp_str = list(str[::-1])
skip = False
find_end = False
for step in range(len(cp_str) - 1):
if skip:
skip = False
continue
if cp_str[step] == ':':
cp_str.insert(step + 1, '"')
skip = True
find_end = True
if not skip and find_end and not(ord(cp_str[step].lower()) > 95 and ord(cp_str[step].lower()) < 95+26):
cp_str.insert(step, '"')
skip = True
find_end = False
print(''.join(cp_str[::-1]))
```
**Outputs:**
>
> this is: a string": testing "testing": blah blah "more":
> "test": "hereis": "test":
>
>
>
Any tips or help on the best ways to tackle this would be appreciated.
|
2020/02/18
|
[
"https://Stackoverflow.com/questions/60286928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8729789/"
] |
Avoid using regex to handle structured formats. It will almost always mis-handle certain corner cases.
Since your input is valid YAML, you can install [PyYAML](https://pypi.org/project/PyYAML/), load the input as YAML, and dump the data structure as JSON instead:
```
import yaml
import json
s = 'key_name: { another_key: "somevalue", second_key: "anotherval" }'
print(json.dumps(yaml.safe_load(s)))
```
This outputs:
```
{"key_name": {"another_key": "somevalue", "second_key": "anotherval"}}
```
|
While the pattern `([{,]\s*)([^"]*?)(\s*:\s*)` isn't going to cover all corner cases, it should work fine for basic JSON content.
Example usage:
```
>>> import re
>>> data = '{ another_key: "somevalue", second_key: "anotherval" }'
>>> repl_fn = lambda x: f'{x.group(1)}"{x.group(2)}"{x.group(3)}'
>>> re.sub(r'([{,]\s*)([^"]*?)(\s*:\s*)', repl_fn, data)
'{ "another_key": "somevalue", "second_key": "anotherval" }'
```
| 14,830
|
57,701,538
|
I have a `Jupyter` notebook and I'd like to convert it into a `Python` script using the `nbconvert` command from *within* the `Jupyter` notebook.
I have included the following line at the end of the notebook:
```
!jupyter nbconvert --to script <filename>.ipynb
```
This creates a `Python` script. However, I'd like the resulting `.py` file to have the following properties:
1. No input statements, such as:
>
> # In[27]:
>
>
>
2. No markdown, including statements such as:
>
> # coding: utf-8
>
>
>
3. Ignore `%magic` commands such as:
1. `%matplotlib inline`
2. `!jupyter nbconvert --to script <filename>.ipynb`, i.e. the command within the notebook that executes the `Python` conversion

Currently, the `%magic` commands get translated to the form: `get_ipython().magic(...)`, but these are not necessarily recognized in `Python`.
|
2019/08/29
|
[
"https://Stackoverflow.com/questions/57701538",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2957960/"
] |
One way to get control of what appears in the output is to tag the cells that you don't want in the output and then use the TagRemovePreprocessor to remove the cells.
[](https://i.stack.imgur.com/gvqcA.png)
The code below also uses the `exclude_markdown` option of the TemplateExporter to remove markdown.
```
!jupyter nbconvert \
--TagRemovePreprocessor.enabled=True \
--TagRemovePreprocessor.remove_cell_tags="['parameters']" \
--TemplateExporter.exclude_markdown=True \
--to python "notebook_with_parameters_removed.ipynb"
```
To remove the commented lines and the input statement markers (like `# In[1]`), I believe you'll need to post-process the Python file with something like the following in the cell after the one you call !jupyter nbconvert from (note that this is Python 3 code):
```
import re
from pathlib import Path
filename = Path.cwd() / 'notebook_with_parameters_removed.py'
code_text = filename.read_text().split('\n')
lines = [line for line in code_text if len(line) == 0 or
(line[0] != '#' and 'get_ipython()' not in line)]
clean_code = '\n'.join(lines)
clean_code = re.sub(r'\n{2,}', '\n\n', clean_code)
filename.write_text(clean_code.strip())
```
|
Jupyter nbconvert has made this a little bit easier with a new [template structure](https://nbconvert.readthedocs.io/en/latest/customizing.html).
Templates should be placed in the template path. This can be found by running `jupyter --paths`
Each template should be placed in its own directory within the template directory and must contain a conf.json and index.py.j2 file.
This [solution](https://stackoverflow.com/a/64144224/5530152) covers all the details for adding a template.
This template will remove all of the markdown, magic, and cell numbers, leaving a "runnable" .py file. Run this template from within a notebook with `!jupyter nbconvert --to python --template my_clean_python_template my_notebook.ipynb`
**`index.py.j2`**
```
{%- extends 'null.j2' -%}
## set to python3
{%- block header -%}
#!/usr/bin/env python3
# coding: utf-8
{% endblock header %}
## remove cell counts entirely
{% block in_prompt %}
{% if resources.global_content_filter.include_input_prompt -%}
{% endif %}
{% endblock in_prompt %}
## remove markdown cells entirely
{% block markdowncell %}
{% endblock markdowncell %}
{% block input %}
{{ cell.source | ipython2python }}
{% endblock input %}
## remove magic statement completely
{% block codecell %}
{{'' if "get_ipython" in super() else super() }}
{% endblock codecell%}
```
| 14,831
|
29,578,217
|
I made sure to try installing PyQt4 on mac in many different ways, but I always get the error above.
My attempts have in common installing Python 3.4 from the official website installer, then installing Qt4 from [here](https://download.qt.io/archive/qt/4.8/4.8.6/) and finally installing SIP from the package available in the Riverbanks website.
I've already tried to install PyQt4 by running the configure-ng.py, the configure.py without options and configure.py with a reasonable number of different combinations (by empiric/hopelessness purposes), but I know the "pattern" for the options would be "-q" option to indicate the qmake path, "-d" option to the python path and "--use-arch x86\_64" to indicate, I guess, the machine architecture (I made sure to use "uname -a", something like that, to check if I really should use "x86\_64"). Simply nothing worked!
After all that trouble, I have tried to install SIP and PyQt4 on Python 2.7, and lastly I've tried to use Homebrew to install all that stuff. Again it didn't work.
Does someone have an idea what could fix the problem? (Unfortunately I only have the possibility to work with a mac once a week, so I cannot test your solutions immediately.)
|
2015/04/11
|
[
"https://Stackoverflow.com/questions/29578217",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2157820/"
] |
If you look a little on what Google has to say, there are several references to that problem. I see you are from Brasil so maybe this is your problem:
<https://github.com/thoughtbot/capybara-webkit/issues/291>
(which refers to: <https://github.com/thoughtbot/capybara-webkit/issues/224>
Also:
* <https://github.com/thoughtbot/capybara-webkit/issues/682>
* <https://github.com/thoughtbot/capybara-webkit/issues/291>
* <https://trac.macports.org/ticket/22924>
|
I had the same problem using GCC installed via MacPorts (tested several versions up to gcc5). The solution for me was using g++ supplied with the XCode command line tools. I uninstalled all MacPorts GCC versions. Below version details of the g++ command that worked.
```
$ g++ --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 6.0 (clang-600.0.51) (based on LLVM 3.5svn)
Target: x86_64-apple-darwin13.4.0
Thread model: posix
```
A similar question was asked here: [QT Creator adds -Xarch](https://stackoverflow.com/q/16701355/3922554)
| 14,833
|
56,387,349
|
I'm using Django Rest Framework and want to be able to delete a Content instance via `DELETE` to `/api/content/<int:pk>/`. I *don't* want to implement any method to respond to `GET` requests.
When I include a `.retrieve()` method as follows, the `DELETE` request **works**:
```
class ContentViewSet(GenericViewSet):
def get_queryset(self):
return Content.objects.filter(user=self.request.user)
def retrieve(self, request, pk=None):
pass #this works, but I don't want .retrieve() at all
def delete(self, request, pk=None):
content = self.get_object()
#look up some info info here
content.delete()
return Response('return some info')
```
If I replace `.retrieve()` with `RetrieveModelMixin` it also works. However, if I remove both of these, which is what I want to do, I get the following error.
>
> django.urls.exceptions.NoReverseMatch: Reverse for 'content-detail' not found. 'content-detail' is not a valid view function or pattern name.
>
>
>
I haven't tested, but I assume the same thing would happen with `PUT` and `PATCH`.
My questions are:
1. How can I allow `DELETE` without implementing a `.retrieve()` method, and
2. Why can't DRF create the urlconf without `.retrieve()` implemented?
**UPDATE: Failing test and complete error traceback caused by removing `.retrieve()` method**
```
from rest_framework.test import APITestCase, APIClient
from rest_framework.reverse import reverse
from myapp.models import Content
class ContentTestCase(APITestCase):
def setUp(self):
self.content = Content.objects.create(title='New content')
self.client = APIClient()
def test_DELETE_content(self):
url = reverse('content-detail', kwargs={'pk':self.content.pk})
response = self.client.delete(url)
self.assertEqual(response.status_code, 200)
```
Results in:
```
Traceback (most recent call last):
File "myproject/myapp/tests.py", line 548, in test_DELETE_content
url = reverse('content-detail', kwargs={'pk':self.content})
File "python3.6/site-packages/rest_framework/reverse.py", line 50, in reverse
url = _reverse(viewname, args, kwargs, request, format, **extra)
File "python3.6/site-packages/rest_framework/reverse.py", line 63, in _reverse
url = django_reverse(viewname, args=args, kwargs=kwargs, **extra)
File "python3.6/site-packages/django/urls/base.py", line 90, in reverse
return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
File "python3.6/site-packages/django/urls/resolvers.py", line 636, in _reverse_with_prefix
raise NoReverseMatch(msg)
django.urls.exceptions.NoReverseMatch: Reverse for 'content-detail' not found. 'content-detail' is not a valid view function or pattern name.
```
|
2019/05/31
|
[
"https://Stackoverflow.com/questions/56387349",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4400877/"
] |
>
> 1. How can I allow DELETE without implementing a `.retrieve()` method?
>
>
>
Just remove the **`retrieve()`** method from the view class. Which means, the [**`GenericViewSet`**](https://www.django-rest-framework.org/api-guide/viewsets/#genericviewset) doesn't provide any ***HTTP Actions*** unless it's defined in your class.
So, the following will be your code snippet,
```
class ContentViewSet(GenericViewSet):
def get_queryset(self):
return Content.objects.filter(user=self.request.user)
def delete(self, request, pk=None):
content = self.get_object()
# look up some info info here
content.delete()
return Response('return some info')
```
or you could use **`mixin classes`** here,
```
from rest_framework.mixins import DestroyModelMixin
class ContentViewSet(DestroyModelMixin, GenericViewSet):
def get_queryset(self):
return Content.objects.filter(user=self.request.user)
```
---
>
> 2. Why can't DRF create the urlconf without `.retrieve()` implemented?
>
>
>
I'm not sure how you've defined your URLs. When I tried with ***DRF Router***, it only creates the URL conf for the defined actions.
You've got **`GET`** and **`DELETE`** actions on your end-point because you'd defined the **`retrieve()`** method in your view class.
Hope this helps :)
|
My solution for part 1. is to include the mixin but restrict the `http_method_names`:
```
class ContentViewSet(RetrieveModelMixin, GenericViewSet):
http_method_names = ['delete']
...
```
However, I still don't know why I have to include `RetrieveModelMixin` at all.
| 14,834
|
64,784,079
|
In an attempt to create a JWT in python I have written the following code.
```
import base64
import time

#Header
header = str({"alg": "RS256"})
header_binary = header.encode()
header_base64 = base64.urlsafe_b64encode(header_binary)
print(header_base64)
#Claims set (Pay Load)
client_id=""
username=""
URL=""
exp_time=str(round(time.time())+300)
claims = str({"iss": client_id,
"sub": username,
"aud": URL,
"exp": exp_time})
claims_binary = claims.encode()
claims_base64 = base64.urlsafe_b64encode(claims_binary)
print(claims_base64)
```
I understand there is still more to do but I have the following problem. If I concatenate the two strings created above with a "." and put the resulting string in a JWT debugger it seems the claims set works perfectly but the same cannot be said for the header.
Please advise if this is the correct way to go about doing this and what my errors are.
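(For reference, here is the direction I suspect the fix goes - using `json.dumps` so the segment is real JSON with double quotes, and stripping the base64 padding - though I am not sure this is correct:)
```
import base64
import json

header = {"alg": "RS256"}
header_base64 = base64.urlsafe_b64encode(
    json.dumps(header, separators=(",", ":")).encode()
).rstrip(b"=")
print(header_base64)
```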
|
2020/11/11
|
[
"https://Stackoverflow.com/questions/64784079",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14233404/"
] |
Directly using LTPA in Tomcat is not possible unless you use 3rd party token services. The better way to have an SSO experience between WebSphere and Tomcat is to use Windows ADFS as the SSO server instead of LDAP. You can set up ADFS as either a SAML identity provider or an OpenID Connect provider, and set up WebSphere and Tomcat as SAML or OIDC relying parties to ADFS. The ADFS server will be your SSO server; users only need to log in to ADFS once and will be authenticated to both WebSphere and Tomcat automatically.
If you cannot use ADFS, you could set up one dedicated WebSphere Liberty server as an OpenID Connect server (which can be configured to use Windows LDAP as the user registry), and use the OpenID Connect server as the SSO server for both WebSphere and Tomcat. Similar to the ADFS case, the user is only required to log in to Liberty once and will be logged in to both WebSphere and Tomcat automatically. Note that here the Liberty OIDC server plays the role of translating the LTPA token to an OIDC token which can be consumed by Tomcat.
|
The other, simpler approach would be just to use Open Liberty instead of Tomcat, which I suggested in the other thread. There is usually no benefit to using Tomcat over Open Liberty, and the LTPA token will work just via configuration in Liberty and can integrate with any older WebSpheres you have in your environment.
| 14,836
|
58,843,848
|
What I need is simple: a piece of code that will receive a GET request, process some data, and then generate a response. I'm completely new to python web development, so I've decided to use DRF for this purpose because it seemed like the most robust solution, but every example I found online consisted of CRUDs with models and views, and I figure what I need is something simpler (since the front-end is already created).
Could anyone provide an example on how to do this on DRF? (or even some other viable solution, having in mind that it needs to be robust enough to take on multiple requests at the same time in production)
|
2019/11/13
|
[
"https://Stackoverflow.com/questions/58843848",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7742448/"
] |
A simple way to do the thing you want is by using Django REST Framework's `APIView` (or the `@api_view` decorator).
Here is an example of it in the docs: <https://www.django-rest-framework.org/api-guide/views/>.
Besides the code on that page, you would need to register your view on an appropriate route, which can be found here: <https://www.django-rest-framework.org/api-guide/routers/>
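For illustration, a minimal sketch of such a function-based view (the processing and names are made up):
```
from rest_framework.decorators import api_view
from rest_framework.response import Response

@api_view(['GET'])
def process(request):
    # read a query parameter, do some work, return JSON
    value = request.query_params.get('q', '')
    return Response({'result': value.upper()})
```
It would then be registered in urls.py with e.g. `path('process/', process)`.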
|
Django and Django REST Framework are pretty heavy products out-of-the-box.
If you want something more lightweight that can handle many incoming requests, you could create a simple Express server using Node.js. This would result in very few lines of code on your end.
Sample Node server:
```
var express = require('express')
var app = express()
app.get('/', (req, res) => {
res.send('hello world')
});
app.listen(8000);
```
| 14,837
|
28,778,843
|
I apologize for my noobiness in java, but I am trying to make a very basic app for someone for their birthday and have only really done any programming in python. I have been trying to implement the code found in [android - how to make a button click play a sound file every time it been pressed?](https://stackoverflow.com/questions/19464782/android-how-to-make-a-button-click-play-a-sound-file-every-time-it-been-presse) and am having trouble. I have placed the assets folder in the main directory, the src directory, and the app directory to see if any of them helped, and am still getting the error
Every time I attempt to run the program I get the following error
```
02-27 23:06:48.896 25643-25643/com.app.bdking.mineturtle W/System.err﹕ java.io.FileNotFoundException: hello.mp3
```
Here is my main activity
```
package com.app.bdking.mineturtle;
import android.content.res.AssetFileDescriptor;
import android.media.MediaPlayer;
import android.os.Bundle;
import android.app.Activity;
import android.view.Menu;
import android.view.View;
import android.widget.Button;
import java.io.IOException;
public class MainActivity extends Activity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
final MediaPlayer mp = new MediaPlayer();
Button b = (Button) findViewById(R.id.button1);
b.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
if(mp.isPlaying())
{
mp.stop();
}
try {
mp.reset();
AssetFileDescriptor afd;
afd = getAssets().openFd("hello.mp3");
mp.setDataSource(afd.getFileDescriptor(),afd.getStartOffset(),afd.getLength());
mp.prepare();
mp.start();
} catch (IllegalStateException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
});
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.main, menu);
return true;
}
}
```
main xml is the same as on the aforementioned post.
Thanks in advance.
|
2015/02/28
|
[
"https://Stackoverflow.com/questions/28778843",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4616988/"
] |
You just need to put your file in the res/raw folder and use:
```
public void onClick(View v) {
    if (mp.isPlaying()) {
        mp.stop();
    } else {
        try {
            // MediaPlayer.create() already prepares the player,
            // so no separate prepare() call is needed
            mp = MediaPlayer.create(this, R.raw.hello);
            mp.start();
        } catch (IllegalStateException e) {
            e.printStackTrace();
        }
    }
}
```
|
You can do it in another way. Put the .mp3 files under res/raw folder and use the following code:
```
MediaPlayer mediaPlayer = MediaPlayer.create(getApplicationContext(), R.raw.android);
mediaPlayer.start();
```
Refer to this [link](http://www.tutorialspoint.com/android/android_mediaplayer.htm) for a better example.
| 14,839
|
51,115,825
|
I have a numpy array and two python lists of indexes with positions at which to increase the array's elements by one. Does numpy have a method to vectorize this operation (without the use of `for` loops)?
My current slow implementation:
```
a = np.zeros([4,5])
xs = [1,1,1,3]
ys = [2,2,3,0]
for x,y in zip(xs,ys): # how to do it in numpy way (efficiently)?
a[x,y] += 1
print(a)
```
Output:
```
[[0. 0. 0. 0. 0.]
[0. 0. 2. 1. 0.]
[0. 0. 0. 0. 0.]
[1. 0. 0. 0. 0.]]
```
|
2018/06/30
|
[
"https://Stackoverflow.com/questions/51115825",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
`np.add.at` will do just that, just pass both indexes as a single 2D array/list:
```
a = np.zeros([4,5])
xs = [1, 1, 1, 3]
ys = [2, 2, 3, 0]
np.add.at(a, [xs, ys], 1) # in-place
print(a)
array([[0., 0., 0., 0., 0.],
[0., 0., 2., 1., 0.],
[0., 0., 0., 0., 0.],
[1., 0., 0., 0., 0.]])
```
|
```
>>> a = np.zeros([4,5])
>>> xs = [1, 1, 1, 3]
>>> ys = [2, 2, 3, 0]
>>> a[[xs,ys]] += 1
>>> a
array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 1., 1., 0.],
[ 0., 0., 0., 0., 0.],
[ 1., 0., 0., 0., 0.]])
```
| 14,840
|
74,033,201
|
I have a python dictionary whose keys and values are actually image indexes. For example, the dictionary I have looks like the one given below:
```
{
1: [1, 2, 3],
2: [1, 2, 3],
3: [1, 2, 3],
4: [4, 5],
5: [4, 5],
6: [6]
}
```
This means that 1 is related to 1, 2 & 3. Similarly, 2 is related to 1, 2 & 3. 6 has only 6 in it, so it's related to none of the other elements. This is leading me to perform extra operations in my programs. I want the dictionary to be filtered to look like:
```
# Preferred output
{
1: [2, 3],
4: [5],
6: []
}
```
So far I have tried
```
new_dict = dict()
# source is the original dictionary
for k,v in source.items():
ok = True
for k1,v1 in new_dict.items():
if k in v1: ok = False
if ok: new_dict[k] = v
```
This modifies the dictionary to
```
{1: [1, 2, 3], 4: [4, 5], 6: [6]}
```
I am looking for a more efficient and pythonic way to solve this problem.
|
2022/10/11
|
[
"https://Stackoverflow.com/questions/74033201",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7483151/"
] |
How about this: use my modification below, which removes the first item of each list during the loop. I commented the line which does it.
```
new_dict = dict()
# source is the original dictionary
for k,v in source.items():
ok = True
for k1,v1 in new_dict.items():
if k in v1: ok = False
if ok: new_dict[k] = v[1:] #this line removes the first element.
```
My output is,
```
{1: [2, 3], 4: [5], 6: []}
```
|
your modified code:
**ver 1:**
```
new_dict = dict()
for k, v in source.items():
if not any(k in v1 for v1 in new_dict.values()):
new_dict[k] = v[1:]
```
**ver 2:**
```
tmp = dict()
for k, v in source.items():
tmp[tuple(v)] = tmp.get(tuple(v), []) + [k]
res = dict()
for k, v in tmp.items():
res[v[0]] = v[1:]
```
**ver 3:**
```
new_dict = {v[0]: list(v[1:]) for v in set(map(tuple, source.values()))}
```
| 14,841
|
37,584,629
|
I am trying to connect to an AWS Redshift server via SSL. I am using the psycopg2 library in Python to establish the connection and used `sslmode='require'` as a parameter in the connect line. Unfortunately I got this error:
```
sslmode value "require" invalid when SSL support is not compiled in
```
I read many other similar cases for PostgreSQL which mention the problem exists with the PostgreSQL version, but I didn't find any solutions for Redshift using psycopg2. Do I need to install any specific SSL certificate for Redshift? If yes, how do I do it with psycopg2? Any help will be appreciated.
|
2016/06/02
|
[
"https://Stackoverflow.com/questions/37584629",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5610841/"
] |
This worked for me: <https://stackoverflow.com/a/36489939/101266>
>
> had this same error, which turned out to be because I was using the Anaconda version of psycopg2. To fix it, I had adapt VictorF's solution from here and run:
>
>
>
```
conda uninstall psycopg2
sudo ln -s /Users/YOURUSERNAME/anaconda/lib/libssl.1.0.0.dylib /usr/local/lib
sudo ln -s /Users/YOURUSERNAME/anaconda/lib/libcrypto.1.0.0.dylib /usr/local/lib
pip install psycopg2
```
|
Is your cluster enabled for SSL connections? The connection URL itself will contain the SSL info, and you can use the same in your code.
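For reference, once your psycopg2 build has SSL support compiled in, the connection itself is straightforward; a sketch with placeholder host and credentials:
```
import psycopg2

# sslmode='require' forces an encrypted connection; 'verify-ca' or
# 'verify-full' (plus the sslrootcert parameter) also validate the
# server certificate against the Redshift CA bundle
conn = psycopg2.connect(
    host="examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com",
    port=5439,
    dbname="dev",
    user="awsuser",
    password="my_password",
    sslmode="require",
)
```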
| 14,843
|
16,946,051
|
I have a log file with an arbitrary number of lines. All I need to extract is the one line of data from the log file which starts with the string “Total”. I do not want any other lines from the file.
How do I write a simple python program for this?
This is how my input file looks
```
TestName id eno TPS GRE FNP
Test 1205 1 0 78.00 0.00 0.02
Test 1206 1 0 45.00 0.00 0.02
Test 1207 1 0 73400 0.00 0.02
Test 1208 1 0 34.00 0.00 0.02
Totals 64 0 129.61 145.64 1.12
```
I am trying to get an output file which looks like
```
TestName id TPS GRE
Totals 64 129.61 145.64
```
Ok, so I want only the 1st, 2nd, 4th and 5th columns from the input file, but not the others. I am trying list[index] to achieve this but getting an IndexError (list index out of range). Also, the spacing between columns is not the same, so I am not sure how to split the columns and select the ones that I want. Can somebody please help me with this? Below is the program I used:
```
newFile = open('sana.log','r')
for line in newFile.readlines():
if ('TestName' in line) or ('Totals' in line):
data = line.split('\t')
print data[0]+data[1]
```
|
2013/06/05
|
[
"https://Stackoverflow.com/questions/16946051",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2180817/"
] |
```
theFile = open('thefile.txt','r')
FILE = theFile.readlines()
theFile.close()
printList = []
for line in FILE:
if ('TestName' in line) or ('Totals' in line):
# here you may want to do some splitting/concatenation/formatting to your string
printList.append(line)
for item in printList:
print item # or write it to another file... or whatever
```
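If you also want to keep only the 1st, 2nd, 4th and 5th columns, one possible formatting step inside that `if` (a sketch; `split()` with no argument is used because the column spacing varies):
```
fields = line.split()  # split on any run of whitespace
# keep TestName, id, TPS and GRE (indexes 0, 1, 3 and 4)
printList.append('\t'.join([fields[0], fields[1], fields[3], fields[4]]))
```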
|
```
for line in open('filename.txt', 'r'):
    if line.startswith('TestName') or line.startswith('Totals'):
        fields = line.rsplit(None, 5)
        # keep TestName, id, TPS and GRE (drop the eno and FNP columns)
        print '\t'.join(fields[:2] + fields[3:5])
```
| 14,844
|
34,254,594
|
I am trying to emulate a piano in python using mingus as suggested in [this question](https://stackoverflow.com/questions/6487180/synthesize-musical-notes-with-piano-sounds-in-python/ "this question"). I am running Ubuntu 14.04, and have already created an audio group and added myself to it.
I am using alsa.
I ran the code given in one of the answers to the aforementioned question and it ran fine in shell mode. However, when I wrote a python script and tried to run it, I did not get any sound whatsoever. Here is my code:
```
#!/usr/bin/env python
from mingus.midi import fluidsynth
DEF_FONT_PATH = '/usr/share/sounds/sf2/FluidR3_GM.sf2'
def main():
fluidsynth.init(DEF_FONT_PATH, 'alsa')
fluidsynth.play_Note(80, 0, 80)
if __name__ == '__main__':
main()
```
I have checked many other answers, and I cannot seem to find a solution.
|
2015/12/13
|
[
"https://Stackoverflow.com/questions/34254594",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2951705/"
] |
The data should be passed like this:
```
$('#event_delete').on('click', function () {
$('#calendar').fullCalendar('removeEvents', calEvent.id);
$.ajax({
data: {id: calEvent.id},
type: "POST",
url: "http://localhost/book/js/delete_events.php"
});
alert (calEvent.id);
CloseModalBox();
});
```
|
Instead of reloading the entire page you can call $('#calendar').fullCalendar( 'refetchEvents' ). It will get all the events from your database and rerender them.
| 14,845
|
63,687,990
|
I have a `setup.py` which contains the following:
```
from pip._internal.req import parse_requirements
def load_requirements(fname):
"""Turn requirements.txt into a list"""
reqs = parse_requirements(fname, session="test")
return [str(ir.requirement) for ir in reqs]
setup(
name="Projectname",
[...]
python_requires='>=3.6',
extras_require={
'dev': load_requirements('./requirements/dev.txt')
},
install_requires=load_requirements('./requirements/prod.txt')
)
```
My `./requirements/prod.txt` looks like this:
```
-r common.txt
```
and my `./requirements/dev.txt` is similar but with some development specific packages. My `./requirements/common.txt` contains a line to pip-install a package from a github link, like:
```
-e git://github.com/BioGeek/tta_wrapper.git@master#egg=tta_wrapper
```
However, since I added that line, the command `python setup.py build` fails with:
```
error in Projectname setup command: 'extras_require' must be a dictionary whose values are strings or lists of strings containing valid project/version requirement specifiers.
```
Versions of relevant packages:
```
pip 20.2.2
setuptools 50.0.0
```
How do I modify my `setup.py` or my requirements files to fix this?
**Edit**
After modifying my `setup.py` as shown in [the answer](https://stackoverflow.com/a/63688209/50065) of [Martijn Pieters](https://stackoverflow.com/users/100297/martijn-pieters), I can confirm that `load_requirements` now turns my requirements files into a list with the `name @ url` direct reference syntax where needed.
```
>>> load_requirements('./requirements/prod.txt')
['absl-py==0.8.1', 'GitPython==3.1.0', 'numpy==1.18.4', 'pip==20.2.2', 'protobuf==3.12.0', 'setuptools==41.0.0', 'scikit_learn==0.22', 'tensorflow_hub==0.8.0', 'importlib-metadata==1.6.1', 'keras-tuner==1.0.1', 'apache-beam==2.23.0', 'ml-metadata==0.23.0', 'pyarrow==0.17.0', 'tensorflow==2.3.0', 'tensorflow-data-validation==0.23.0', 'tensorflow-metadata==0.23.0', 'tensorflow-model-analysis==0.23.0', 'tensorflow-transform==0.23.0', 'tta_wrapper @ git://github.com/BioGeek/tta_wrapper.git@master']
```
However, now I get the following error when I run `python setup.py build`:
```
$ python setup.py build
/home/biogeek/code/programname/env/lib/python3.6/site-packages/_distutils_hack/__init__.py:30: UserWarning: Setuptools is replacing distutils.
warnings.warn("Setuptools is replacing distutils.")
running build
Traceback (most recent call last):
File "setup.py", line 91, in <module>
install_requires=load_requirements('./requirements/prod.txt')
File "/home/biogeek/code/programname/env/lib/python3.6/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/home/biogeek/code/programname/env/lib/python3.6/site-packages/setuptools/_distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/biogeek/code/programname/env/lib/python3.6/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/home/biogeek/code/programname/env/lib/python3.6/site-packages/setuptools/_distutils/dist.py", line 984, in run_command
cmd_obj = self.get_command_obj(command)
File "/home/biogeek/code/programname/env/lib/python3.6/site-packages/setuptools/_distutils/dist.py", line 859, in get_command_obj
cmd_obj = self.command_obj[command] = klass(self)
File "/usr/lib/python3.6/distutils/cmd.py", line 57, in __init__
raise TypeError("dist must be a Distribution instance")
TypeError: dist must be a Distribution instance
```
**Edit 2**
I finally made my installation succeed. I tried a few things, so I am not entirely sure what resolved the issue in the end, but I:
* downgraded `setuptools` from `50.0.0` to `41.0.0`
* put `setuptools` as the first line in my requirements file (see [here](https://github.com/pypa/setuptools/pull/784#issuecomment-248008457))
* added a crude, hacky one-off function to point to the zip archive with the name @ url syntax.
```py
def _format_requirement(req):
if str(req.requirement) == 'git://github.com/BioGeek/tta_wrapper.git@master#egg=tta_wrapper':
return 'tta_wrapper @ https://github.com/BioGeek/tta_wrapper/archive/v0.0.1.zip'
return str(req.requirement)
```
|
2020/09/01
|
[
"https://Stackoverflow.com/questions/63687990",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/50065/"
] |
You can only use [PEP 508 - *Dependency specification for Python Software Packages*](https://www.python.org/dev/peps/pep-0508/) requirements. `git://github.com/BioGeek/tta_wrapper.git@master#egg=tta_wrapper` is not valid syntax according to that standard.
`setuptools` does accept the [`name@ url` direct reference syntax](https://www.python.org/dev/peps/pep-0440/#direct-references):
```none
tta_wrapper @ git://github.com/BioGeek/tta_wrapper.git
```
You can't put that in a requirements.txt file however, not *and* use the `-e` switch. The latter can only take a VCS URL or a local file path, **not** a requirement specification; see the [*Requirements File Format* section](https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format).
So you have to translate between formats here. I'd check for the `is_editable` flag on the `ParsedRequirement` objects that `parse_requirements()` produces, and alter behaviour accordingly. You'd have to parse the requirement string as a URL, pull out the `#egg=` fragment and put that at the front:
```
from urllib.parse import urlparse
def _format_requirement(req):
if req.is_editable:
# parse out egg=... fragment from VCS URL
parsed = urlparse(req.requirement)
egg_name = parsed.fragment.partition("egg=")[-1]
without_fragment = parsed._replace(fragment="").geturl()
return f"{egg_name} @ {without_fragment}"
return req.requirement
def load_requirements(fname):
"""Turn requirements.txt into a list"""
reqs = parse_requirements(fname, session="test")
return [_format_requirement(ir) for ir in reqs]
```
The above then turns `-e git:...#egg=tta_wrapper` into `tta_wrapper @ git:...`:
```
>>> load_requirements('./requirements/dev.txt')
['tta_wrapper @ git://github.com/BioGeek/tta_wrapper.git@master', 'black==20.08b1']
```
|
In my case there is no github link in my requirements, but the line
```
-r common.txt
```
in `./requirements/prod.txt` caused the same error.
I've added a stupid condition and now it works for me:
```py
def load_requirements(filename) -> list:
requirements = []
try:
with open(filename) as req:
requirements = [line for line in req.readlines() if line.strip() != "-r common.txt"]
except Exception as e:
print(e)
return requirements
```
| 14,846
|
29,192,068
|
On a Windows 7 machine, PyCharm (Community or Professional) and Python 3.4 (I tried Anaconda 3 as well) were freshly installed. There were no problems running Python scripts interactively in the main editor. However, when I tried to select *View > Tool Windows > Python Console*, it generated the following error messages (and more). Basically, I couldn't bring up a console window in PyCharm.
```
C:\Users\user\Anaconda3\python.exe -u C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevconsole.py 56743 56744
Traceback (most recent call last):
File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydev_imports.py", line 21, in <module>
from SimpleXMLRPCServer import SimpleXMLRPCServer
ImportError: No module named 'SimpleXMLRPCServer'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevconsole.py", line 20, in <module>
import pydevd_vars
File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd_vars.py", line 9, in <module>
from pydevd_xml import *
File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydevd_xml.py", line 7, in <module>
from pydev_imports import quote
File "C:\Program Files (x86)\JetBrains\PyCharm 4.0.5\helpers\pydev\pydev_imports.py", line 23, in <module>
from xmlrpc.server import SimpleXMLRPCServer
File "C:\Users\user\Anaconda3\lib\xmlrpc\server.py", line 108, in <module>
from http.server import BaseHTTPRequestHandler
File "C:\Users\user\Anaconda3\lib\http\server.py", line 660, in <module>
class SimpleHTTPRequestHandler(BaseHTTPRequestHandler):
File "C:\Users\user\Anaconda3\lib\http\server.py", line 851, in SimpleHTTPRequestHandler
mimetypes.init() # try to read system mime.types
File "C:\Users\user\Anaconda3\lib\mimetypes.py", line 348, in init
db.read_windows_registry()
File "C:\Users\user\Anaconda3\lib\mimetypes.py", line 255, in read_windows_registry
with _winreg.OpenKey(hkcr, subkeyname) as subkey:
TypeError: OpenKey() argument 2 must be str without null characters or None, not str
Process finished with exit code 1
Couldn't connect to console process.
```
(these messages showed up in the "Python Console")
|
2015/03/22
|
[
"https://Stackoverflow.com/questions/29192068",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4018111/"
] |
You need to change your working directory. Go to `File->Settings->Build, Execution, Deployment->Console->Python Console` and then change or provide a directory where you have read and write access in the `Working directory` box.
|
The configuring of pycharm in the presence of various development configurations is a bit of a black art IMHO.
The most effective mechanism I've found for pinning this down is to put random strings into the various settings dialogs (interpreters, consoles, tests, servers) and observe the command lines submitted to the interpreter VERY carefully.
Hardly a satisfactory approach, but it will sort out what is going where and, to a certain degree, what affects what.
The other thing that helps me is screenshots of the settings and testing dialogs of working installations.
Again, a bit rough and ready but it has got me up and running again after a long period of successful debugging followed by pycharm amnesia.
| 14,847
|
11,782,147
|
I am trying to implement the algorithm found [here](http://www.m.cs.osakafu-u.ac.jp/cbdar2007/proceedings/papers/O1-1.pdf) in python with OpenCV.
I am trying to implement the part of the algorithm that remove irrelevant edge boundaries based on the number of interior boundaries that they have.
* If the current edge boundary has exactly one or two interior edge boundaries, the internal boundaries can be ignored
* If the current edge boundary has more than two interior edge boundaries, it can be ignored
I am having trouble determining the tree structure of the contours I have extracted from the image.
My current source:
```
import cv2
# Load the image
img = cv2.imread('test.png')
cv2.copyMakeBorder(img, 50,50,50,50,cv2.BORDER_CONSTANT, img, (255,255,255))
# Split out each channel
blue = cv2.split(img)[0]
green = cv2.split(img)[1]
red = cv2.split(img)[2]
# Run canny edge detection on each channel
blue_edges = cv2.Canny(blue, 1, 255)
green_edges = cv2.Canny(green, 1, 255)
red_edges = cv2.Canny(red, 1, 255)
# Join edges back into image
edges = blue_edges | green_edges | red_edges
# Find the contours
contours,hierarchy = cv2.findContours(edges.copy(),cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
# For each contour, find the bounding rectangle and draw it
for cnt in contours:
x,y,w,h = cv2.boundingRect(cnt)
cv2.rectangle(edges,(x,y),(x+w,y+h),(200,200,200),2)
# Finally show the image
cv2.imshow('img',edges)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
I assumed that using `cv2.RETR_TREE` would give me a nice nested array of the contours but that doesn't seem to be the case. How do I retrieve the tree structure of my contours?
|
2012/08/02
|
[
"https://Stackoverflow.com/questions/11782147",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1128407/"
] |
The main confusion here is probably the fact that the hierarchy returned is a numpy array with more dimensions than necessary. On top of that, it looks like the Python FindContours function returns a tuple that is a LIST of contours, and an NDARRAY of the hierarchy...
You can get a sensible array of hierarchy information that is more in line with the C docs by just taking hierarchy[0]. It would then be an appropriate shape to zip, for example, with the contours.
Below is an example that, will draw the outermost rectangles in green and the innermost rectangles in red on this image:

Output:

Note, by the way, that the wording in the OpenCV docs is a little ambiguous, but `hierarchyDataOfAContour[2]` describes the children of that contour (if it is negative then that is an inner contour), and `hierarchyDataOfAContour[3]` describes the parents of that contour (if it is negative then that is an exterior contour).
Also note: I looked into implementing the algorithm that you referred to in the OCR paper, and I saw that FindContours was giving me a lot of repeats of near-identical contours. This would complicate the finding of "Edge Boxes" as the paper describes. This may be because the Canny thresholds were too low (note that I was playing around with them as described in the paper), but there may be some way to reduce that effect or just look at the average deviation of the four corners of all the boxes and eliminate duplicates...
```
import cv2
import numpy
# Load the image
img = cv2.imread("/ContourTest.PNG")
# Split out each channel
blue, green, red = cv2.split(img)
def medianCanny(img, thresh1, thresh2):
median = numpy.median(img)
img = cv2.Canny(img, int(thresh1 * median), int(thresh2 * median))
return img
# Run canny edge detection on each channel
blue_edges = medianCanny(blue, 0.2, 0.3)
green_edges = medianCanny(green, 0.2, 0.3)
red_edges = medianCanny(red, 0.2, 0.3)
# Join edges back into image
edges = blue_edges | green_edges | red_edges
# Find the contours
contours,hierarchy = cv2.findContours(edges, cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
hierarchy = hierarchy[0] # get the actual inner list of hierarchy descriptions
# For each contour, find the bounding rectangle and draw it
for component in zip(contours, hierarchy):
currentContour = component[0]
currentHierarchy = component[1]
x,y,w,h = cv2.boundingRect(currentContour)
if currentHierarchy[2] < 0:
# these are the innermost child components
cv2.rectangle(img,(x,y),(x+w,y+h),(0,0,255),3)
elif currentHierarchy[3] < 0:
# these are the outermost parent components
cv2.rectangle(img,(x,y),(x+w,y+h),(0,255,0),3)
# Finally show the image
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
|
**Understanding Contour Hierarchies**
When finding contours in a binary image using [`cv2.findContours()`](https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html#findcontours), you can use contour hierarchy to select and extract specific contours within the image. Specifically, you can choose a [contour retrieval mode](https://docs.opencv.org/4.2.0/d3/dc0/group__imgproc__shape.html#ga819779b9857cc2f8601e6526a3a5bc71) to optionally return an output vector containing information about the image topology. There are the four possible modes:
* `cv2.RETR_EXTERNAL` - retrieves only the extreme outer contours (no hierarchy)
* `cv2.RETR_LIST` - retrieves all of the contours without establishing any hierarchical relationships
* `cv2.RETR_CCOMP` - retrieves all of the contours and organizes them into a two-level hierarchy. At the top level, there are external boundaries of the components. At the second level, there are boundaries of the holes. If there is another contour inside a hole of a connected component, it is still put at the top level
* `cv2.RETR_TREE` - retrieves all of the contours and reconstructs a full hierarchy of nested contours
**Contour Tree Structure**
So with this information, we can use `cv2.RETR_CCOMP` or `cv2.RETR_TREE` to return a hierarchy list. Take for example this image:
[](https://i.stack.imgur.com/JcY09.jpg)
When we use the `cv2.RETR_TREE` parameter, the contours are arranged in a hierarchy, with the outermost contours for each object at the top. Moving down the hierarchy, each new level of contours represents the next innermost contour for each object. In the image above, the contours in the image are colored to represent the hierarchical structure of the returned contours data. The outermost contours are red, and they are at the top of the hierarchy. The next innermost contours -- the dice pips, in this case -- are green.
We can get that information about the contour hierarchies via the hierarchy array from the `cv2.findContours` function call. Suppose we call the function like this:
```
(_, contours, hierarchy) = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
```
The third return value, saved in the `hierarchy` variable in this code, is a three-dimensional NumPy array, with one row, `X` columns, and a "depth" of 4. The `X` columns correspond to the number of contours found by the function. The `cv2.RETR_TREE` parameter causes the function to find the internal contours as well as the outermost contours for each object. Column zero corresponds to the first contour, column one the second, and so on.
Each of the columns has a four-element array of integers, representing indices of other contours, according to this scheme:
```
[next, previous, first child, parent]
```
The *next* index refers to the next contour in this contour's hierarchy level, while the *previous* index refers to the previous contour in this contour's hierarchy level. The *first child* index refers to the first contour that is contained inside this contour. The *parent* index refers to the contour containing this contour. In all cases, a value of `-1` indicates that there is no *next*, *previous*, *first child*, or *parent* contour, as appropriate. For a more concrete example, here are some example `hierarchy` values. The values are in square brackets, and the indices of the contours precede each entry.
If you print out the hierarchy array, you will get something like this:
```
0: [ 6 -1 1 -1] 18: [19 -1 -1 17]
1: [ 2 -1 -1 0] 19: [20 18 -1 17]
2: [ 3 1 -1 0] 20: [21 19 -1 17]
3: [ 4 2 -1 0] 21: [22 20 -1 17]
4: [ 5 3 -1 0] 22: [-1 21 -1 17]
5: [-1 4 -1 0] 23: [27 17 24 -1]
6: [11 0 7 -1] 24: [25 -1 -1 23]
7: [ 8 -1 -1 6] 25: [26 24 -1 23]
8: [ 9 7 -1 6] 26: [-1 25 -1 23]
9: [10 8 -1 6] 27: [32 23 28 -1]
10: [-1 9 -1 6] 28: [29 -1 -1 27]
11: [17 6 12 -1] 29: [30 28 -1 27]
12: [15 -1 13 11] 30: [31 29 -1 27]
13: [14 -1 -1 12] 31: [-1 30 -1 27]
14: [-1 13 -1 12] 32: [-1 27 33 -1]
15: [16 12 -1 11] 33: [34 -1 -1 32]
16: [-1 15 -1 11] 34: [35 33 -1 32]
17: [23 11 18 -1] 35: [-1 34 -1 32]
```
The entry for the first contour is `[6, -1, 1, -1]`. This represents the first of the outermost contours; note that there is no particular order for the contours, e.g., they are not stored left to right by default. The entry tells us that the next dice outline is the contour with index six, that there is no previous contour in the list, that the first contour inside this one has index one, and that there is no parent for this contour (no contour containing this one). We can visualize the information in the `hierarchy` array as seven trees, one for each of the dice in the image.
[](https://i.stack.imgur.com/dC4Bn.png)
The seven outermost contours are all those that have no parent, i.e., those with a value of `-1` in the fourth field of their `hierarchy` entry. Each of the child nodes beneath one of the "roots" represents a contour inside the outermost contour. Note how contours 13 and 14 are beneath contour 12 in the diagram. These two contours represent the innermost contours, perhaps noise or some lost paint in one of the pips.
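As a small illustration of walking that array, the outermost contours can be picked out by checking the *parent* field (a sketch assuming `contours` and `hierarchy` come from the `cv2.findContours` call above):
```
# hierarchy has shape (1, N, 4); each entry is [next, previous, first child, parent]
outer_contours = [cnt for cnt, info in zip(contours, hierarchy[0])
                  if info[3] < 0]  # parent == -1 means no enclosing contour
```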
---
Once we understand how contours are arranged into a hierarchy, we can perform more sophisticated tasks, such as counting the number of contours within a shape in addition to the number of objects in an image. Depending on the retrieval mode you select, you will have full access to the topology of an image and be able to control the tree structure of the contours.
| 14,852
|
37,811,767
|
I like to use the python interpreter, as it shows the result instantly. But I sometimes make mistakes, like misspelling something or typing 'enter' twice while writing a class or function. It's really annoying to rewrite the code.
Is it possible to add some code to a predefined class or function in the interpreter?
|
2016/06/14
|
[
"https://Stackoverflow.com/questions/37811767",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5894129/"
] |
When you want to declare an id in XML you do it as `android:id="@+id/myId"`
R is a Java class. When you include the above line for an XML view, a `public static final int myId` field gets included in the R class. You can reference this from your own classes.
`findViewById(int)` accepts an integer as a parameter. The R class contains Integers and not the strings you entered as the XML id.
Here is a sample from an R class.
```
public final class R {
public static final class id {
public static final int ReflectionsLevelText=0x7f0d00af;
public static final int about=0x7f0d01b3;
public static final int action0=0x7f0d014d;
public static final int action_bar=0x7f0d005f;
public static final int action_bar_activity_content=0x7f0d0000;
public static final int action_bar_container=0x7f0d005e;
}
}
```
So if you want to access the view with the id `action_bar` you have to call `findViewById(R.id.action_bar)`
In the same way R class also includes drawables, dimensions and basically all the resources. They are exactly inner static classes inside the R class.
For example, when you add a drawable `ic_my_pic.png` to `res/drawable`, a field gets generated in the R class. It would look like:
```
public final class R{
public static final class drawable{
public static final int ic_my_pic=0x7f020000;
}
}
```
Now you can access this image from your classes by,
```
imageView.setImageResource(R.drawable.ic_my_pic);
```
You can find more info [here](https://stackoverflow.com/questions/6804053/understand-the-r-class-in-android) and [here](http://knowledgefolders.com/akc/display?url=DisplayNoteIMPURL&reportId=2883&ownerUserId=satya).
|
When gradle builds your app, it generates an `R` class from your resources. Say you declare a view with an id in XML:
```
<EditText
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:id="@+id/myEditText" />
```
You would then reference it in your code using the generated R class:
```
EditText myEditText = (EditText) findViewById(R.id.myEditText);
```
| 14,853
|
48,250,092
|
How do I upload an Excel `.xlsx` file to Python Flask from Angular 2?
I upload something, but it can't be read when I open the Excel file.
html for upload dialog:
```
<mat-form-field>
<input matInput placeholder="Filename" [(ngModel)]="filename">
</mat-form-field>
<button type="button" mat-raised-button (click)="imgFileInput.click()" [disabled]="!is_file">Upload file</button>
<input hidden type="file" #imgFileInput (change)="fileChange($event)" accept=".xlsx"/>
<button type="button" mat-raised-button [disabled]="is_file">Submit</button>
```
ts code for sending post to the api:
```
fileChange(event){
let reader = new FileReader();
let file = event.target.files[0];
this.filename = file.name;
reader.readAsDataURL(file);
reader.onload = () => {
let data = atob(reader.result.toString().split(',')[1]);
this.is_file=false;
var token = localStorage.getItem('token')
let headers = new Headers({
'Content-Type': 'application/json',
'Accept': 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet',
'Access-Control-Allow-Origin':'*',
'Access-Control-Allow-Headers': 'Content-Type',
'Access-Control-Allow-Methods': 'POST',
'Authorization': 'bearer ' + token
});
let options = new RequestOptions({ headers: headers });
options.responseType = ResponseContentType.Blob;
let ran = Math.floor(Math.random() * 10000) + 1;
this.http.post(environment.API_URL + '/test/model/expsoure/excel/' + 'model_id' + '?cache=' + ran.toString(),data,options)
.subscribe(service_data => {
});
}
}// end fileChange
```
Python code for saving data:
```
class exposureExcelId(Resource):
method_decorators = [jwt_required()]
def post(self,mid):
user_id = str(current_identity.id)
f = open('/Users/data/wtf.xlsx','wb')
f.write(request.data)
f.close()
```
The error I get when I try to open the uploaded file in Excel is:
>
> excel could not open wtf.xlsx because some of the content is unreadable
>
>
>
|
2018/01/14
|
[
"https://Stackoverflow.com/questions/48250092",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1203556/"
] |
```
temp = (ds_list)(malloc(sizeof(ds_list)));
```
will be
```
temp = malloc(sizeof(*temp));
```
You want to allocate memory for `struct ds_list_element`, not `struct ds_list_element*`. Don't hide pointers behind a typedef name. It rarely helps.
Also you should check the return value of `malloc` and the casting is not needed.
|
Use `ds_list` as a structure, not a pointer:
```
typedef struct ds_list_element {
char value[MAX];
struct ds_list_element *next;
}ds_list;
```
and allocate memory for the structure not a pointer.
Working program:
```
#include<stdio.h>
#include<stdlib.h>
#include<string.h>
#define MAX 100
typedef struct ds_list_element {
char value[MAX];
struct ds_list_element *next;
}ds_list;
int ds_list_empty(ds_list *id) { // id listy
if (id == NULL) return 1;
else return 0;
}
ds_list * ds_list_add(ds_list *id, char add[MAX]) {
ds_list *temp;
    temp = malloc(sizeof(ds_list));
strcpy(temp->value,add);
temp->next = id;
return temp;
}
void ds_list_print(ds_list *id) {
if (ds_list_empty(id) == 0) {
printf("%s\n",id->value);
ds_list_print(id->next);
}
}
int main () {
ds_list *my_list = NULL;
my_list = ds_list_add(my_list,"one");
my_list = ds_list_add(my_list,"two");
my_list = ds_list_add(my_list,"three");
my_list = ds_list_add(my_list,"four");
ds_list_print(my_list);
return 0;
}
```
Output:
```
four
three
two
one
```
| 14,855
|
2,211,706
|
I've always preferred these:
```
not 'x' in 'abc'
not 'x' is 'a'
```
(assuming, of course that everyone knows `in` and `is` out-prioritize `not` -- I probably should use parentheses) over the more (English) grammatical:
```
'x' not in 'abc'
'x' is not 'a'
```
but didn't bother to think why until I realized they do not make syntactical sense
```
'x' == not 'a'
'x' not == 'a'
```
both of course throw a syntax error.
so I figured they were both two-word operators. However, the [documentation](http://docs.python.org/library/operator.html) only references `is not` and makes no mention of `not in` as an operator. Am I perhaps misinterpreting the syntax?
If they both are operators, then are they at all different (even subtlety) from their non-grammatical counterparts?
If they are the same, then why do they exist? It seems to be impious to the [Zen of Python](http://www.python.org/dev/peps/pep-0020/) (.."one -- and preferably only one -- obvious way"..)
I apologize if this has been discussed to death already, I just had little luck finding it with search terms like "is not".
|
2010/02/06
|
[
"https://Stackoverflow.com/questions/2211706",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/31676/"
] |
From the python 2.6.4 docs at: <http://docs.python.org/reference/expressions.html>
>
>
> The operator not in is defined to have
> the inverse true value of in.
>
>
> The operators is and is not test for
> object identity: x is y is true if and
> only if x and y are the same object. x
> is not y yields the inverse truth
> value.
>
>
>
e.g. "x not in y" is exactly the same as "not x in y", and "x is not y" is the same as "not x is y".
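For illustration, a quick interpreter session with the question's own examples:
```
>>> 'x' not in 'abc'
True
>>> not 'x' in 'abc'
True
>>> 'x' is not 'a'
True
>>> not 'x' is 'a'
True
```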
"x not == y" doesn't parse, but "x != y" does, so there's an equivalence there too ...
HTH.
|
The remainder of your question has been answered above, but I'll address the last question: the Zen of Python bit.
"There should only be one way to do it" isn't mean't in a mathematical sense. If it were, there'd be no `!=` operator, since that's just the inversion of `==`. Similarly, no `and` and `or` --- you can, after all, just use a single `nand` command. There is a limit to the "one way" mantra: there should only be one high-level way of doing it. Of course, that high-level way can be decomposed --- you can write your own `math.tan`, and you never need to use `urllib` --- `socket` is always there for you. But just as `urllib.open` is a higher-level encapsulation of raw `socket` operations, so `not in` is a higher-level encapsulation of `not` and `in`. That's a bit banal, you might say. But you use `x != y` instead of `not (x == y)`.
| 14,856
|
12,633,618
|
I am new to Python. I am working on someone else's project, but when I tried to run the code it gave me the error above. All my pages are working properly except those which have images. Is there any library required for this?
Any help will be appreciated.
Thanks
|
2012/09/28
|
[
"https://Stackoverflow.com/questions/12633618",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1696228/"
] |
You need to have the `cropresize` package (<http://pypi.python.org/pypi/cropresize/>) installed on your device.
If it is not there, install it from the link:
do `easy_install cropresize` or `pip install cropresize`.
|
Just do [`easy_install cropresize`](http://pypi.python.org/pypi/cropresize/).
| 14,864
|
33,168,308
|
I have the following directory structure
```
-----root
|___docpd
|__docpd (contains settings.py, wsgi.py , uwsgi.ini)
|__static
```
During my vanilla django setup in the dev environment, everything was fine (all static files used to load). But now, after setting up uwsgi, I found that none of my static files are being loaded (I get a 404 error).
**What have I tried?**
1. Read a lot of Stack Overflow questions about this error; none could solve my problem.
2. Adjusted the Python path with paths to my project, and they seem to get added (I printed them out in the settings.py file), but still no luck.
**Some code**
```
uwsgi.ini
[uwsgi]
chdir=/home/ubuntu/docpad/Docpad/
wsgi-file=/home/ubuntu/docpad/Docpad/Docpad/wsgi.py
master=True
pidfile=/tmp/project-master.pid
vacuum=True
max-requests=5000
daemonize=/var/log/uwsgi/docpad.log
```
**wsgi.py**
```
import os,sys
sys.path.append('/home/ubuntu/docpad/Docpad/')
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "Docpad.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
```
**settings.py**
```
from sys import path
from os.path import abspath, dirname, join, normpath
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
print "Base dir = " , BASE_DIR
DJANGO_ROOT = dirname(abspath(__file__))
print "Root =",DJANGO_ROOT
SITE_ROOT = dirname(DJANGO_ROOT)
print "SITE =", SITE_ROOT
path.append(DJANGO_ROOT)
print "Path = ",path
print BASE_DIR
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.6/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'SECRET'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
TEMPLATE_DEBUG = True
PROJECT_PATH = os.path.join(BASE_DIR, os.pardir)
PROJECT_PATH = os.path.abspath(PROJECT_PATH)
TEMPLATE_PATH = os.path.join(PROJECT_PATH, 'Docpad/templates/')
print "TEMPLATES = ",TEMPLATE_PATH
# TODO make separate template_paths
TEMPLATE_DIRS = (
# Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
TEMPLATE_PATH,
)
...
```
After I run my application, I get 404 errors for all static resources like CSS/JS files.
**UPDATE**
When I do
```
python manage.py runserver 0.0.0.0:8000
```
the server starts serving the static files
But this command
```
uwsgi --ini uwsgi.ini --http :8000
```
gives me the problem (the static files are not loaded).
I am totally clueless right now; I have been trying various alternatives but no luck.
If anybody could help, it would be great.
|
2015/10/16
|
[
"https://Stackoverflow.com/questions/33168308",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1270865/"
] |
In `settings.py`:
```
STATIC_URL = '/static/'
STATICFILES_DIRS = (
os.path.normpath(os.path.join(BASE_DIR, "static")),
)
```
In `urls.py` (note that `static` and `settings` need to be imported):
```
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = [
    # ... your url patterns ...
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
```
|
You could also do it like so:
```
uwsgi --ini uwsgi.ini --http :8000 --static-map /static=/path/to/staticfiles/
```
or just add this to your .ini file:
```
static-map = /static=/path/to/staticfiles
```
| 14,869
|
58,471,984
|
Could anyone here tell me how to properly append a series of missing values onto a python list?
for example,
```
> ls=[1,2,3]
> ls += []*2
> ls
[1,2,3]
```
but this is not the outcome I want. I want:
```
[1,2,3, , ]
```
where the blanks denote the missing values.
(note: also what I DON'T want is:
```
> ls
[1,2,3,'','']
```
)
Thanks,
|
2019/10/20
|
[
"https://Stackoverflow.com/questions/58471984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12205961/"
] |
Where do you set the dataSource and delegate of the table view?
Use this code:
```
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
self.tableView.dataSource = self
self.tableView.delegate = self
self.tableView.register(UITableViewCell.self, forCellReuseIdentifier: "cell")
}
```
|
2 possible reasons:
1. If the cell is designed as prototype cell you **must not** register the cell.
2. `dataSource` and `delegate` of the table view must be connected to the controller in Interface Builder or set in code.
| 14,874
|
59,622,544
|
My data frame is below.
I am trying to find the age.
```
customer_Id DOB
0 268408 1970-02-01
1 268408 1970-02-01
2 268408 1970-02-01
3 268408 1970-02-01
4 268408 1970-02-01
```
The shape is `207518` rows.
While converting the data I got `ValueError: unconverted data remains: 5`.
The code to convert to age is below:
```
def cal_age(dob=str):
x = dt.datetime.strptime(dob, "%Y-%d-%m")
y = dt.date.today()
age = y.year - x.year - ((y.month, x.day) < (y.month, x.day))
return age
```
`df_n_4['DOB'] = pd.to_datetime(df_n_4['DOB'],errors='coerce')`
`df_n_4['DOB'] = df_n_4['DOB'].astype('str')`
`df_n_4['Age'] = df_n_4.DOB.apply(lambda z: cal_age(z))`
Error is below
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-113-90ef9d4c7002> in <module>
----> 1 df_n_4['Age'] = df_n_4.DOB.apply(lambda z: cal_age(z))
~\Anaconda3\lib\site-packages\pandas\core\series.py in apply(self, func, convert_dtype, args, **kwds)
4040 else:
4041 values = self.astype(object).values
-> 4042 mapped = lib.map_infer(values, f, convert=convert_dtype)
4043
4044 if len(mapped) and isinstance(mapped[0], Series):
pandas\_libs\lib.pyx in pandas._libs.lib.map_infer()
<ipython-input-113-90ef9d4c7002> in <lambda>(z)
----> 1 df_n_4['Age'] = df_n_4.DOB.apply(lambda z: cal_age(z))
<ipython-input-111-98546f386b50> in cal_age(dob)
1 def cal_age(dob=str):
----> 2 x = dt.datetime.strptime(dob, "%Y-%d-%m")
3 y = dt.date.today()
4 age = y.year - x.year - ((y.month, x.day) < (y.month, x.day))
5 return age
~\Anaconda3\lib\_strptime.py in _strptime_datetime(cls, data_string, format)
575 """Return a class cls instance based on the input string and the
576 format string."""
--> 577 tt, fraction, gmtoff_fraction = _strptime(data_string, format)
578 tzname, gmtoff = tt[-2:]
579 args = tt[:6] + (fraction,)
~\Anaconda3\lib\_strptime.py in _strptime(data_string, format)
360 if len(data_string) != found.end():
361 raise ValueError("unconverted data remains: %s" %
--> 362 data_string[found.end():])
363
364 iso_year = year = None
ValueError: unconverted data remains: 5
```
|
2020/01/07
|
[
"https://Stackoverflow.com/questions/59622544",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
You can change your function to work with datetimes; converting to strings is also not necessary:
```
def cal_age(x):
y = dt.date.today()
    age = y.year - x.year - ((y.month, y.day) < (x.month, x.day))  # subtract 1 if the birthday hasn't occurred yet this year
return age
df_n_4['DOB'] = pd.to_datetime(df_n_4['DOB'],errors='coerce')
df_n_4['Age'] = df_n_4.DOB.apply(cal_age)
```
|
Try this code:
```
df['current_date']=pd.datetime.now()
df['age']=(df.current_date-pd.to_datetime(df.DOB)).dt.days/365
```
| 14,875
|
66,484,870
|
I was developing a Python program for my school project that asks a customer for their details, such as their first name, last name, age, etc. So I made a function called `customerdetails`.
```
def customerdetails():
Firstname = input("Enter your First name:")
Lastname = input("Enter your last name:")
Age = input("Age:")
Address =input("Enter your address:")
Postcode = input("Enter your Postcode:")
Email = input("Email:")
Phone = int(input("Phone Number:"))
customerdetails()
```
Now how can I print those variables (Firstname, Lastname, Age, Address, etc.)? I tried using the same logic we use to print normal variables, but it didn’t work. This is the code:
```
print("Please check your details")
print("***********")
print(Firstname)
print("***********")
```
It shows me an error that says “NameError: name ‘Firstname’ is not defined.”
What do I do? Any help will be appreciated.
|
2021/03/05
|
[
"https://Stackoverflow.com/questions/66484870",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13290732/"
] |
You need to return the information from your `customerdetails` function. Since you have a bunch of different variables, you'll be returning a collection -- a tuple, a list, a dict, etc.
Using a tuple or a list means you need to keep track of the order of all the elements, and using a dict means you need to keep track of the exact string literals that correspond to each variable. Personally, I prefer using `NamedTuple` for simple collections like this; it makes for code that's easy to read and type-check.
```
from typing import NamedTuple
class CustomerDetails(NamedTuple):
first_name: str
last_name: str
age: str
address: str
postcode: str
email: str
phone: int
def get_customer_details() -> CustomerDetails:
"""Get customer details from the user."""
return CustomerDetails(
input("Enter your First name:"),
input("Enter your last name:"),
input("Age:"),
input("Enter your address:"),
input("Enter your Postcode:"),
input("Email:"),
int(input("Phone Number:")),
)
details = get_customer_details()
print("Please check your details")
print("***********")
print(details.first_name)
print("***********")
```
|
By default, variables declared within a function are local only to that function.
If you want a function to give back multiple values, you can `return` a tuple of values:
```py
def customerdetails():
Firstname = input("Enter your First name:")
Lastname = input("Enter your last name:")
Age = input("Age:")
Address =input("Enter your address:")
Postcode = input("Enter your Postcode:")
Email = input("Email:")
Phone = int(input("Phone Number:"))
return (Firstname, Lastname, Age, Address, Postcode, Email, Phone)
(fn, ln, age, addr, postcode, email, phone) = customerdetails()
```
| 14,876
|
41,994,645
|
Is there a more functional way to create a 2d array in Javascript than what I have here? Perhaps using `.apply`?
```
generatePuzzle(size) {
let puzzle = [];
for (let i = 0; i < size; i++) {
puzzle[i] = [];
for (let j = 0; j < size; j++) {
puzzle[i][j] = Math.floor((Math.random() * 200) + 1);
}
}
return puzzle;
}
```
For instance, in python, you can do something like `[[0]*4]*4` to create a 4x4 list
|
2017/02/02
|
[
"https://Stackoverflow.com/questions/41994645",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1718122/"
] |
```
const repeat = (fn, n) => Array(n).fill(0).map(fn);
const rand = () => Math.floor((Math.random() * 200) + 1);
const puzzle = n => repeat(() => repeat(rand, n), n);
```
And then `puzzle(3)`, eg, will return a 3x3 matrix filled with random numbers.
|
With lodash just like below:
```
const _ = require('lodash');
function generatePuzzle(size) {
return _.times(size, () => _.times(size, () => (Math.random() * 200) + 1));
}
```
| 14,879
|
64,169,510
|
I am using the XEN hypervisor. For managing virtual machines I am using **virt-manager**. Whenever everything is ready and I click the Create button to start a virtual machine, I get the following error:
```
Unable to complete install: 'An error occurred, but the cause is unknown'
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/createvm.py", line 2089, in _do_async_install
guest.installer_instance.start_install(guest, meter=meter)
File "/usr/share/virt-manager/virtinst/install/installer.py", line 542, in start_install
domain = self._create_guest(
File "/usr/share/virt-manager/virtinst/install/installer.py", line 491, in _create_guest
domain = self.conn.createXML(install_xml or final_xml, 0)
File "/usr/lib/python3/dist-packages/libvirt.py", line 4034, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirt.libvirtError: An error occurred, but the cause is unknow
```
|
2020/10/02
|
[
"https://Stackoverflow.com/questions/64169510",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12966156/"
] |
You can iterate over the result of the dynamic query directly:
```
create or replace function gapandoverlapdetection ( table_name text, entity_ids bigint[])
returns table (entity_id bigint, valid tsrange, causes_overlap boolean, causes_gap boolean)
as $$
declare
var_r record;
begin
for var_r in EXECUTE format('select entity_id, valid
from %I
where entity_id = any($1)
and registration > now()::timestamp
order by valid ASC', table_name)
using entity_ids
loop
... do something with var_r
-- return a row for the result
-- this does not end the function
-- it just appends this row to the result
return query
select entity_id, true, false;
end loop;
end
$$ language plpgsql;
```
The `%I` injects an identifier into a string and the `$1` inside the dynamic SQL is then populated through passing the argument with the `using` keyword
|
Firstly, decide whether you want to pass the table's name or oid. If you want to identify the table by name, then the parameter should be of `text` type and not `regclass`.
Secondly, if you want the table name to change between executions then you need to [execute the SQL statement dynamically](https://www.postgresql.org/docs/13/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN) with the `EXECUTE` statement.
| 14,880
|
60,404,756
|
```
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing, housing["income_cat"]):
strat_train_set = housing.loc[train_index]
strat_test_set = housing.loc[test_index]
```
I am currently reading the book Hands-On ML and I am having some issues with this code. I don't have much experience with Python, so that might be a reason, but let me make my confusion clearer. In the book, the housing problem requires us to create strata so the dataset has sufficient instances of each; we do this with code that I didn't copy here. The code I am showing is used to create the test and train sets using the specific income categories. The 1st and 2nd lines of code are clear; the 3rd is where I get lost. We create a split of test 0.2 / train 0.8, but what exactly is happening from then on? What is the for loop used for?
I have looked in a couple of pages for info but haven't really found anything that made the situation clear, so I would really appreciate the help.
Thank you in advance for your answers.
|
2020/02/25
|
[
"https://Stackoverflow.com/questions/60404756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12946446/"
] |
That for loop is just taking the indices produced for the split and selecting those rows of the original data (via `.loc`) to form the training and test sets.
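To see this concretely, you can pull one split out of the generator yourself (a small sketch assuming `housing` is the book's DataFrame):
```
# split.split(...) yields one (train_index, test_index) pair per requested split;
# with n_splits=1 the loop body runs exactly once
train_index, test_index = next(split.split(housing, housing["income_cat"]))
print(len(train_index), len(test_index))  # roughly 80% / 20% of the rows
```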
|
StratifiedShuffleSplit is better if you are using K-fold cross-validation, where you divide the training and testing data in different ways and then calculate the mean of a result over the K iterations.
`n_splits` must equal the *K* value, and in your case *K* is one, which makes no sense for cross-validation. I think you'd be better off using sklearn.model_selection.train_test_split, which makes more sense here.
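For a single stratified split, that would look something like this (a sketch assuming the same `housing` DataFrame and `income_cat` column as the book):
```
from sklearn.model_selection import train_test_split

strat_train_set, strat_test_set = train_test_split(
    housing, test_size=0.2, stratify=housing["income_cat"], random_state=42)
```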
| 14,881
|
70,310,388
|
I have a list of nested dictionaries (python 3.9) that looks something like this:
```
records = [
{'Total:': {'Owner:': {'Available:': {'15 to 34 years': 1242}}}},
{'Total:': {'Owner:': {'Available:': {'35 to 64 years': 5699}}}},
{'Total:': {'Owner:': {'Available:': {'65 years and over': 2098}}}},
{'Total:': {'Owner:': {'No Service:': {'15 to 34 years': 43}}}},
{'Total:': {'Owner:': {'No Service:': {'35 to 64 years': 64}}}},
{'Total:': {'Owner:': {'No Service:': {'65 years and over': 5}}}},
{'Total:': {'Renter:': {'Available:': {'15 to 34 years': 1403}}}},
{'Total:': {'Renter:': {'Available:': {'35 to 64 years': 2059}}}},
{'Total:': {'Renter:': {'Available:': {'65 years and over': 395}}}},
{'Total:': {'Renter:': {'No Service:': {'15 to 34 years': 16}}}},
{'Total:': {'Renter:': {'No Service:': {'35 to 64 years': 24}}}},
{'Total:': {'Renter:': {'No Service:': {'65 years and over': 0}}}},
]
```
The level of nesting is not always consistent. The example above has 4 levels (total, owner/renter, available/no service, age group), but there are some examples with a single level and others with as many as 5.
I would like to merge the data in a way that doesn't replace the final dictionary like `update()` or `{**dict_a, **dict_b}` does.
The final output should look something like this:
```
combined = {
'Total': {
'Owner': {
'Available': {
'15 to 34 years': 1242,
'35 to 64 years': 5699,
'65 years and over': 2098
},
'No Service:': {
'15 to 34 years': 43,
'35 to 64 years': 64,
'65 years and over': 5
}
},
'Renter': {
'Available': {
'15 to 34 years': 1403,
'35 to 64 years': 2059,
'65 years and over': 395
},
'No Service:': {
'15 to 34 years': 16,
'35 to 64 years': 24,
'65 years and over': 0
}
},
}
}
```
|
2021/12/10
|
[
"https://Stackoverflow.com/questions/70310388",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12413845/"
] |
Recursion is an easy way to navigate and operate on arbitrarily nested structures:
```
def combine_into(d: dict, combined: dict) -> None:
for k, v in d.items():
if isinstance(v, dict):
combine_into(v, combined.setdefault(k, {}))
else:
combined[k] = v
combined = {}
for record in records:
combine_into(record, combined)
print(combined)
```
```
{'Total:': {'Owner:': {'Available:': {'15 to 34 years': 1242, '35 to 64 years': 5699, '65 years and over': 2098}, 'No Service:': {'15 to 34 years': 43, '35 to 64 years': 64, '65 years and over': 5}}, 'Renter:': {'Available:': {'15 to 34 years': 1403, '35 to 64 years': 2059, '65 years and over': 395}, 'No Service:': {'15 to 34 years': 16, '35 to 64 years': 24, '65 years and over': 0}}}}
```
The general idea here is that each call to `combine_into` takes one dict and combines it into the `combined` dict -- each value that is itself a dict results in another recursive call, while other values just get copied into `combined` as-is.
Note that this will raise an exception (or clobber some data) if some of the `records` have disagreements about whether a particular node is a leaf or not!
|
Quick solution: `res = NDict.from_list_of_dict(records).raw_dict`
Test:
```py
>>> from naapc import NDict
>>> records = [
... {'Total:': {'Owner:': {'Available:': {'15 to 34 years': 1242}}}},
... {'Total:': {'Owner:': {'Available:': {'35 to 64 years': 5699}}}},
... {'Total:': {'Owner:': {'Available:': {'65 years and over': 2098}}}},
... {'Total:': {'Owner:': {'No Service:': {'15 to 34 years': 43}}}},
... {'Total:': {'Owner:': {'No Service:': {'35 to 64 years': 64}}}},
... {'Total:': {'Owner:': {'No Service:': {'65 years and over': 5}}}},
... {'Total:': {'Renter:': {'Available:': {'15 to 34 years': 1403}}}},
... {'Total:': {'Renter:': {'Available:': {'35 to 64 years': 2059}}}},
... {'Total:': {'Renter:': {'Available:': {'65 years and over': 395}}}},
... {'Total:': {'Renter:': {'No Service:': {'15 to 34 years': 16}}}},
... {'Total:': {'Renter:': {'No Service:': {'35 to 64 years': 24}}}},
... {'Total:': {'Renter:': {'No Service:': {'65 years and over': 0}}}},
... ]
>>> a = NDict.from_list_of_dict(records)
>>> print(a)
{
"Total:": {
"Owner:": {
"Available:": {
"15 to 34 years": 1242,
"35 to 64 years": 5699,
"65 years and over": 2098
},
"No Service:": {
"15 to 34 years": 43,
"35 to 64 years": 64,
"65 years and over": 5
}
},
"Renter:": {
"Available:": {
"15 to 34 years": 1403,
"35 to 64 years": 2059,
"65 years and over": 395
},
"No Service:": {
"15 to 34 years": 16,
"35 to 64 years": 24,
"65 years and over": 0
}
}
}
}
>>> type(a)
<class 'naapc.ndict.NDict'>
>>> normal_dict = a.raw_dict
>>> type(normal_dict)
<class 'dict'>
>>> normal_dict is not a.raw_dict
True
>>>
```
Doc: <https://github.com/eiphy/naapc>
| 14,882
|
9,638,826
|
I'm using python to simulate some automation models, and with the help of matplotlib I'm producing plots like the one shown below.

I'm currently plotting with the following command:
```
ax.imshow(self.g, cmap=map, interpolation='nearest')
```
where `self.g` is the binary map (`0` -> blue, `1` -> red in my current plots).
However, to include this in my report I would like the plot to be with black dots on white background instead of red on blue. How do I accomplish that?
|
2012/03/09
|
[
"https://Stackoverflow.com/questions/9638826",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/38055/"
] |
You can change the color map you are using via the `cmap` keyword. The color map `'Greys'` provides the effect you want. You can find a list of [available maps on the scipy website](http://scipy-cookbook.readthedocs.io/items/Matplotlib_Show_colormaps.html).
```
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(101)
g = np.floor(np.random.random((100, 100)) + .5)
plt.subplot(211)
plt.imshow(g)
plt.subplot(212)
plt.imshow(g, cmap='Greys', interpolation='nearest')
plt.savefig('blkwht.png')
plt.show()
```
which results in:

|
There is an alternative method to Yann's answer that gives you finer control. Matplotlib's [imshow](http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.imshow) can take a `MxNx3` matrix where each entry is the RGB color value - just set them to white `[1,1,1]` or black `[0,0,0]` accordingly. If you want three colors it's easy to expand this method.
```
import matplotlib.pyplot as plt
import numpy as np
# Z is your data set
N = 100
Z = np.random.random((N,N))
# G is a NxNx3 matrix
G = np.zeros((N,N,3))
# Where we set the RGB for each pixel
G[Z>0.5] = [1,1,1]
G[Z<0.5] = [0,0,0]
plt.imshow(G,interpolation='nearest')
plt.show()
```

| 14,883
|
18,080,556
|
I have written a project in python which I am now in the process of moving to google app engine. The problem that occurs is when I run this code on GAE:
```
import requests
from google.appengine.api import urlfetch
def retrievePage(url, id):
response = 'http://online2.citybreak.com/Book/Package/Result.aspx?onlineid=%s' % id
# Set the timeout to 60 seconds
urlfetch.set_default_fetch_deadline(60)
# Send the first request
r1 = requests.get(url)
cookies = r1.cookies
print 'Cookies: %s' % r1.cookies
# Retrieve the content
r2 = requests.get(response, cookies=cookies)
return r2.text
```
When running the code on GAE the cookies from the first request are missing. That is to say, `r1.cookies` is just an empty cookie jar. The same code works just fine on my django server where the cookies should contain a asp.net session id.
The reason I have two requests is that the first one redirects the user and will only retrieve the correct page if the session cookie is the same.
print output on GAE:
```
Cookies: <<class 'requests.cookies.RequestsCookieJar'>[]>
```
print output on Django:
```
Cookies: <<class 'requests.cookies.RequestsCookieJar'>[<Cookie ASP.NET_SessionId=dhmk1vt3ujgmhhhmbwsclukb for online2.citybreak.com/>]>
```
Anyone know what the problem might be? Is GAE stripping away the cookie information? I am also open to any suggestions on another way to retrieve the page; I just found the requests module easier than the alternatives.
|
2013/08/06
|
[
"https://Stackoverflow.com/questions/18080556",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2656606/"
] |
I think this will do it:
```
var text = "aaa bbb eee ccc <br>ddd eee fff ggg hhh iii jjj kkk";
var search = /eee [^e<>]*ggg/g;
console.log(text.search(search)); // a)
console.log(text.replace(search, "newinsertedstring $&")); // b)
```
|
```
var text = "whatever".Replace("eee fff ggg", "newinsertedstring eee fff ggg");
```
| 14,884
|
12,558,878
|
>
> **Possible Duplicate:**
>
> [Python 3.2.3 programming…Almost had it working](https://stackoverflow.com/questions/12558717/python-3-2-3-programming-almost-had-it-working)
>
>
>
```
x = float(input("What is/was the cost of the meal?"))
y = float(input("What is/was the sales tax?"))
z = float(input("What percentage tip would you like to leave?"))
print ("Original Food Charge: ${}"
.format(x)))
print ("Sales Tax: ${}"
.format((y/100)*x)))
print ("Tip: ${}"
.format(x*(z/100)))
print ("Total Charge For Food: ${}"
.format(x+((y/100)*x)+((z/100)*x)))
```
output error:
line 10, in
Syntax Error: .format(x))):line 1015
Someone reported that this worked in their earlier version of Python (v2.6, I think). I'm using the later 3.2.3 and racking my darn brain as to why it's not working in this version. It makes sense to me; someone please ENLIGHTEN me.
|
2012/09/24
|
[
"https://Stackoverflow.com/questions/12558878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1693293/"
] |
There is a parenthesis problem after `.format(x*(z/100))` that belongs to the preceding `print`.
It should be:
```
print ("Tip: ${}".format(x*(z/100)))
```
**UPDATE:** Not sure if you got it working or not yet, here is the complete code after fixing your unbalanced parentheses...
```
x = float(input("What is/was the cost of the meal?"))
y = float(input("What is/was the sales tax?"))
z = float(input("What percentage tip would you like to leave?"))
print ("Original Food Charge: ${}"
.format(x))
print ("Sales Tax: ${}"
.format((y/100)*x))
print ("Tip: ${}"
.format(x*(z/100)))
print ("Total Charge For Food: ${}"
.format(x+((y/100)*x)+((z/100)*x)))
```
|
You missed an end bracket after the third print: `.format(x*(z/100)))`
Here is what appears to be a working version after I fixed the brackets:
```
x = float(input("What is/was the cost of the meal?"))
y = float(input("What is/was the sales tax?"))
z = float(input("What percentage tip would you like to leave?"))
print("Original Food Charge: ${}".format(x))
print("Sales Tax: ${}".format((y/100)*x))
print("Tip: ${}".format(x*(z/100)))
print("Total Charge For Food: ${}".format(x+((y/100)*x)+((z/100)*x)))
```
Also there is no need for newlines if the line width is less than 79.
| 14,887
|
13,221,021
|
I'm trying to create a basic blogging application in Python using Web.Py. I started without a directory structure, but soon I needed one. So I created this structure:
```
Blog/
├── Application/
│ ├── App.py
│ └── __init__.py
|
├── Engine/
│ ├── Connection/
│ │ ├── __init__.py
│ │ └── MySQLConnection.py
│ ├── Errors.py
│ └── __init__.py
├── __init__.py
├── Models/
│ ├── BlogPostModel.py
│ └── __init__.py
├── start.py
└── Views/
├── Home.py
└── __init__.py
```
`start.py` imports `Application.App`, which contains Web.Py stuff and imports `Blog.Models.BlogPostModel`, which imports `Blog.Engine.Connection.MySQLConnection`.
`Application.App` also imports `Engine.Errors` and `Views.Home`. All these imports happen inside constructors, and all code in all files lives inside classes. When I run `python start.py`, which contains these three lines of code:
```
from Application import App
app = App.AppInstance()
app.run()
```
The following stack trace is printed:
```
Blog $ python start.py
Traceback (most recent call last):
File "start.py", line 2, in <module>
Blog = App.AppInstance()
File "/home/goktug/code/Blog/Application/App.py", line 4, in __init__
from Blog.Views import Home
ImportError: No module named Blog.Views
```
But according to what I understand from some research, this should run, at least until it reaches something after App.py. *Can anyone tell me where I made the mistake?* (I can provide more code on request, but for now I'm stopping here, as this one is getting messier and messier.)
|
2012/11/04
|
[
"https://Stackoverflow.com/questions/13221021",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
No, you simply can't. AutoBean requires everything to be *statically typed*: no polymorphism, and no mixed-typed lists of maps.
You might be interested in RequestFactory's built-in support for JSON-RPC, though.
|
Why do your params all need to be passed back in a list? Surely you're not going to do the same thing with a `String`, an `Integer`, and another `Object`! Just send them all back separately.
Further, you're not sending a custom `Object` over the JSON, you're sending the `objid` of that object... so just send the `Integer id` and let the server handle it.
| 14,889
|
34,759,366
|
I have a Python script that runs in a `while True` loop. Inside the loop I have several `if` statements, some of which contain a `time.sleep`. After using this for a while I noticed that all of the `if` statements below the first `time.sleep` end up waiting.
Is there a way to have the `if` statements below the first `time.sleep` run, or will they always wait?
I could put all the `if` statements containing a `time.sleep` at the bottom so that all the `if` statements without one get executed first. Unless there is another way, I will restructure the script like that if I can.
```
while True:
    temp1 = tempRead1()
    if temp1 < 65:
        GPIO.output(17, False)
    else:
        GPIO.output(17, True)
    if temp1 > 70 and GPIO.input(17):
        time.sleep(120)
        GPIO.output(27, False)
    else:
        GPIO.output(27, True)
    if temp1 > 72 and GPIO.input(17):
        time.sleep(120)
        GPIO.output(22, False)
    else:
        GPIO.output(22, True)
    if temp1 > 80 and GPIO.input(17):
        time.sleep(120)
        GPIO.output(5, False)
    else:
        GPIO.output(5, True)
    if temp1 < 55:
        GPIO.output(6, False)
    else:
        GPIO.output(6, True)
    time.sleep(60)
```
|
2016/01/13
|
[
"https://Stackoverflow.com/questions/34759366",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5729013/"
] |
I have used a Javascript timer in the past to poll the server to test if the session is still active. If I receive a 403 then redirect to the login page.
```
var AutoLogout = {};
AutoLogout.pollInterval = 60000; // Default is 1 minute.
// Force auto logout when session expires
AutoLogout.start = function (heartBeatUrl, sessionExpiredUrl, interval) {
if (interval) AutoLogout.pollInterval = interval;
var timer = $.timer(function() {
checkSession(heartBeatUrl, sessionExpiredUrl, timer);
});
timer.set({ time: AutoLogout.pollInterval, autostart: true });
};
// Check the session serverside to see if we need to auto-logout
// if the clientActivity flag is set then the session will be extended before checking.
// if the session is still alive then set the next timer interval to be the time returned from the server.
function checkSession(sessionUrl, sessionExpiredUrl, timer) {
$.ajax(sessionUrl,
{ type: "post",
contentType: "application/json",
success: function (result)
{
// update the timer poll interval based on return value.
try {
var r = ko.toJS(result);
timer.set({
time: r.TimeoutMilliseconds ? r.TimeoutMilliseconds : AutoLogout.pollInterval, autostart: true
});
}
catch(e) { }
},
error: function(e)
{
// do nothing
},
statusCode:
{
403: function ()
{
window.location.href = sessionExpiredUrl;
},
401: function ()
{
window.location.href = sessionExpiredUrl;
}
}
});
}
```
Then, when your page loads, call AutoLogout.start with the necessary URLs for your application.
Notice: I have used Knockout JS in this example to parse the data returned from the server request, but that is up to you.
|
I will try to solve your problem.
I suggest putting your logic in an action filter:
```
public class AuthorizeActionFilterAttribute : ActionFilterAttribute
{
public override void OnActionExecuting(FilterExecutingContext filterContext)
{
HttpSessionStateBase session = filterContext.HttpContext.Session;
Controller controller = filterContext.Controller as Controller;
if (controller != null)
{
if (session != null && session ["authstatus"] == null)
{
filterContext.Result =
new RedirectToRouteResult(
new RouteValueDictionary{{ "controller", "Login" },
{ "action", "Index" }
});
}
}
base.OnActionExecuting(filterContext);
}
}
```
Also, please refer to this:
[Redirect From Action Filter Attribute](https://stackoverflow.com/questions/5453338/redirect-from-action-filter-attribute/5453371#5453371)
| 14,890
|
35,475,770
|
Every forum I have looked at says that:
```
pip install pillow
```
remedies issues with installing pyautogui, however, I have installed pillow and I am still receiving:
```
python setup.py egg_info failed with error code 1
```
Any suggestions? I also tried installing PIL but that failed as well with the same error.
|
2016/02/18
|
[
"https://Stackoverflow.com/questions/35475770",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5365094/"
] |
Some possible solutions:
1. Use an Ajax method to get your resource by passing a key.
2. Use hidden input fields and load the values.
3. Use a dedicated JSP page to declare JS variables, or even a JS function to get values according to a key,
like this:
>
>
> ```
> <script type="text/javascript">
>
> var messageOne = '<%=bundle.getString("line1") %>';
> var messageTwo = '<%=bundle.getString("line2") %>';
>
> </script>
>
> ```
>
>
|
It is normally bad practice to use scriptlets `<% %>` inside your `jsp` files.
You can use the `fmt` tag from the jstl core library to fetch information from your resource bundles.
```
<fmt:bundle basename="bundle">
<fmt:message var="variableName" key="bundleKey" />
</fmt:bundle>
<input type="hidden" name="hiddenLine2" id="hiddenLine2" value="${variableName}">
```
should work.
In fact, I think you can also embed it directly into the JavaScript with EL as well:
```
var line2 = ${variableName}; //instead of getting it from document.getElement(...)
```
| 14,891
|
63,378,402
|
I am trying to stream Elasticsearch data into Snowflake. I am testing a Python script which ultimately will be deployed as a cloud function/Docker app on AWS. For the historical load I am using the `scroll` API to write X amount of objects into a string and the string to a file. I've used Snowflake's `PUT file://file.json.gz @stage`, but that implies I need to write the file temporarily to disk before storing it on a stage. I have an insanely large amount of data to pull and am trying to eliminate as many steps as possible. Is there a cheeky way I can write files to the stage directly?
|
2020/08/12
|
[
"https://Stackoverflow.com/questions/63378402",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9972301/"
] |
If you create a Snowflake stage linked to an S3 bucket, then whenever you save to that S3 location, with whatever method you decide to use, the data will automatically be on your Snowflake stage. This way, you can just send a COPY INTO command and save a step or two.
In my opinion, it's a simple and handy solution.
If you need the steps, let me know and I'll be glad to post those here.
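As a rough sketch of those steps (all bucket, stage, table, and credential names below are placeholders, and `gzipped_bytes` stands in for the compressed batch built earlier):
```
import boto3
import snowflake.connector

gzipped_bytes = b'...'  # stands in for the compressed batch built earlier

# 1) Write the batch straight to the S3 bucket behind the external stage
#    (no local temp file needed).
s3 = boto3.client('s3')
s3.put_object(Bucket='my-bucket', Key='es-dump/batch-0001.json.gz', Body=gzipped_bytes)

# 2) Load it; @es_stage is assumed to point at s3://my-bucket/es-dump/
conn = snowflake.connector.connect(user='...', password='...', account='...')
conn.cursor().execute(
    "COPY INTO raw_es_events FROM @es_stage FILE_FORMAT = (TYPE = 'JSON')"
)
```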
|
You can use Snowpipe. You need to create smaller files continuously and, using Snowpipe, keep uploading them. You can use Amazon Kinesis Firehose to manage the batches.
Refer to documentation at <https://docs.snowflake.com/en/user-guide/data-load-considerations-prepare.html#continuous-data-loads-i-e-snowpipe-and-file-sizing> and <https://docs.aws.amazon.com/firehose/latest/dev/create-configure.html>
| 14,896
|
17,160,968
|
Can inputting and checking be done on the same line in Python?
Eg) in C we have
```
if (scanf("%d",&a))
```
The above `if` block works if an integer input is given. But similarly,
```
if a=input():
```
doesn't work in Python. Is there a way to do it?
|
2013/06/18
|
[
"https://Stackoverflow.com/questions/17160968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2046858/"
] |
No, Python can't do assignment as part of the condition of an `if` statement. The only way to do it is on two lines:
```
a = input()
if a:
    # Your code here
    pass
```
This is by design, as it means that assignment is maintained as an atomic action, independent of comparison. This can help with readability of the code, which in turn limits the potential introduction of bugs.
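(A side note that postdates this answer: Python 3.8 later added the assignment expression, the "walrus" operator `:=`, which does allow this pattern.)
```
# Python 3.8+ only
if a := input():
    print(a)
```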
|
You can't do it. This was a deliberate design choice for Python, because this construct is good at causing hard-to-find bugs.
See @Jonathan's comment on the question for an example.
| 14,897
|
57,860,405
|
My understanding of Python asserts is that they are meant for debugging and that they don't get executed for "optimized" Python code (`python -O`).
For production App Engine code, is `-O` used, thus stripping asserts, or will asserts get executed?
|
2019/09/09
|
[
"https://Stackoverflow.com/questions/57860405",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/136598/"
] |
I ran a test on the platforms I use to know for sure. Asserts do get executed for:
* GAE standard first generation
* GAE flexible
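A quick way to verify this on any runtime is to let an `assert` prove whether it runs; a minimal probe:
```
def asserts_enabled():
    try:
        assert False
    except AssertionError:
        return True
    return False

# True under a normal interpreter, False under `python -O`
print('asserts enabled:', asserts_enabled())
```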
|
As far as I understand Python asserts, once you run the interpreter with `-O` they all become "null operations": they get compiled, but their conditional expressions are never evaluated or executed.
This is set at the Python interpreter level, so I don't think GAE actually affects it.
| 14,898
|
25,888,828
|
I'm trying to make a script which takes all rows starting with 'HELIX', 'SHEET' and 'DBREF' from a .txt file; from those rows it takes some specific columns and then saves the results to a new file.
```
#!/usr/bin/python
import sys
if len(sys.argv) != 3:
print("2 Parameters expected: You must introduce your pdb file and a name for output file.")`
exit()
for line in open(sys.argv[1]):
if 'HELIX' in line:
helix = line.split()
cols_h = helix[0], helix[3:6:2], helix[6:9:2]
elif 'SHEET'in line:
sheet = line.split()
cols_s = sheet[0], sheet[4:7:2], sheet[7:10:2], sheet [12:15:2], sheet[16:19:2]
elif 'DBREF' in line:
dbref = line.split()
cols_id = dbref[0], dbref[3:5], dbref[8:10]
modified_data = open(sys.argv[2],'w')
modified_data.write(cols_id)
modified_data.write(cols_h)
modified_data.write(cols_s)
```
My problem is that when I try to write my final results it gives this error:
```
Traceback (most recent call last):
File "funcional2.py", line 21, in <module>
modified_data.write(cols_id)
TypeError: expected a character buffer object
```
When I try to convert it to a string using `''.join()`, it returns another error:
```
Traceback (most recent call last):
File "funcional2.py", line 21, in <module>
modified_data.write(' '.join(cols_id))
TypeError: sequence item 1: expected string, list found
```
What am I doing wrong?
Also, if there is some easy way to simplify my code, it'll be great.
PS: I'm no programmer so I'll probably need some explanation if you do something...
Thank you very much.
|
2014/09/17
|
[
"https://Stackoverflow.com/questions/25888828",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4027271/"
] |
cols\_id, cols\_h and cols\_s seem to be lists, not strings.
You can only write a string to your file, so you have to convert each list to a string.
```
modified_data.write(' '.join(cols_id))
```
and similar.
`'!'.join(a_list_of_things)` converts the list into a string, separating each element with an exclamation mark.
EDIT:
```
#!/usr/bin/python
import sys

if len(sys.argv) != 3:
    print("2 Parameters expected: You must introduce your pdb file and a name for output file.")
    exit()

cols_h, cols_s, cols_id = [], [], []

for line in open(sys.argv[1]):
    if 'HELIX' in line:
        helix = line.split()
        cols_h.append(''.join([helix[0]] + helix[3:6:2] + helix[6:9:2]))
    elif 'SHEET' in line:
        sheet = line.split()
        cols_s.append(''.join([sheet[0]] + sheet[4:7:2] + sheet[7:10:2] + sheet[12:15:2] + sheet[16:19:2]))
    elif 'DBREF' in line:
        dbref = line.split()
        cols_id.append(''.join([dbref[0]] + dbref[3:5] + dbref[8:10]))

modified_data = open(sys.argv[2], 'w')
cols = [cols_id, cols_h, cols_s]
for col in cols:
    modified_data.write(''.join(col))
```
|
You need to convert the tuple created on the RHS of your assignment to a string.
```
# Replace this with the statement given below
cols_id = dbref[0], dbref[3:5], dbref[8:10]

# Create a string out of the pieces (the slices are lists, so flatten them first)
cols_id = ''.join([dbref[0]] + dbref[3:5] + dbref[8:10])
```
| 14,899
|
7,234,518
|
What is the best way to check whether two words appear in order in a sentence, and how many times this occurs, in Python?
For example: I like to eat maki sushi and the best sushi is in Japan.
words are: [maki, sushi]
Thanks.
The code
```
import re

x = "I like to eat maki sushi and the best sushi is in Japan"
x1 = re.split('\W+', x)
l1 = [i for i, m in enumerate(x1) if m == "maki"]
l2 = [i for i, m in enumerate(x1) if m == "sushi"]

ordered = []
for i in l1:
    for j in l2:
        if j == i + 1:
            ordered.append((i, j))

print ordered
```
|
2011/08/29
|
[
"https://Stackoverflow.com/questions/7234518",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/461736/"
] |
According to the added code, you mean that the words are adjacent?
Why not just put them together:
```
print len(re.findall(r'\bmaki sushi\b', sent))
```
|
```
def ordered(string, words):
    pos = [string.index(word) for word in words]
    return pos == sorted(pos)

s = "I like to eat maki sushi and the best sushi is in Japan"
w = ["maki", "sushi"]
ordered(s, w)  # Returns True.
```
Not exactly the most efficient way of doing it but simpler to understand.
| 14,902
|
64,348,889
|
There's some code I found on the internet that says it gives my machine's local network IP address:
```
hostname = socket.gethostname()
local_ip = socket.gethostbyname(hostname)
```
but the IP it returns is 192.168.94.2, while my IP address on the Wi-Fi network is actually 192.168.1.107.
How can I get only the Wi-Fi network's local IP address, using only Python?
I want it to work on Windows, Linux and macOS.
|
2020/10/14
|
[
"https://Stackoverflow.com/questions/64348889",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3848316/"
] |
You can use this code:
```
import socket
hostname = socket.getfqdn()
print("IP Address:",socket.gethostbyname_ex(hostname)[2][1])
```
or this to get public ip:
```
import requests
import json
print(json.loads(requests.get("https://ip.seeip.org/jsonip?").text)["ip"])
```
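Since the question asks for the machine's local (LAN) address rather than the public one, another common cross-platform trick is to open a UDP socket toward a routable address and read back which local address the OS picked; for UDP, `connect` sends no packets:
```
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    s.connect(('8.8.8.8', 80))     # any routable address works; nothing is sent
    local_ip = s.getsockname()[0]
finally:
    s.close()
print(local_ip)
```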
|
Here's code from the `whatismyip` Python module that can grab it from public websites:
```
import urllib.request
IP_WEBSITES = (
    'https://ipinfo.io/ip',
    'https://ipecho.net/plain',
    'https://api.ipify.org',
    'https://ipaddr.site',
    'https://icanhazip.com',
    'https://ident.me',
    'https://curlmyip.net',
)

def getIp():
    for ipWebsite in IP_WEBSITES:
        try:
            response = urllib.request.urlopen(ipWebsite)

            charsets = response.info().get_charsets()
            if len(charsets) == 0 or charsets[0] is None:
                charset = 'utf-8'  # Use utf-8 by default
            else:
                charset = charsets[0]

            userIp = response.read().decode(charset).strip()
            return userIp
        except:
            pass  # Network error, just continue on to next website.

    # Either all of the websites are down or returned invalid response
    # (unlikely) or you are disconnected from the internet.
    return None

print(getIp())
```
Or you can install `pip install whatismyip` and then call `whatismyip.whatismyip()`.
| 14,912
|
5,484,098
|
I'm new to Python. I'm writing a simple class but I'm getting an error.
My class:
```
import config # Configuration file
import twitter
import random
import sqlite3
import time
import bitly_api #https://github.com/bitly/bitly-api-python
class TwitterC:
    def logToDatabase(self, tweet, timestamp):
        # Will log to the database
        database = sqlite3.connect('database.db')  # Create a database file
        cursor = database.cursor()  # Create a cursor
        cursor.execute("CREATE TABLE IF NOT EXISTS twitter(id_tweet INTEGER AUTO_INCREMENT PRIMARY KEY, tweet TEXT, timestamp TEXT);")  # Make a table
        # Assign the values for the insert into
        msg_ins = tweet
        timestamp_ins = timestamp
        values = [msg_ins, timestamp_ins]
        # Insert data into the table
        cursor.execute("INSERT INTO twitter(tweet, timestamp) VALUES(?, ?)", values)
        database.commit()  # Save our changes
        database.close()  # Close the connection to the database

    def shortUrl(self, url):
        bit = bitly_api.Connection(config.bitly_username, config.bitly_key)  # Instantiate the API
        return bit.shorten(url)  # Shorten the URL

    def updateTwitterStatus(self, update):
        short = self.shortUrl(update["url"])  # Shorten the URL
        update = update["msg"] + short['url']
        # Will post to twitter and print the posted text
        twitter_api = twitter.Api(consumer_key=config.twitter_consumer_key,
                                  consumer_secret=config.twitter_consumer_secret,
                                  access_token_key=config.twitter_access_token_key,
                                  access_token_secret=config.twitter_consumer_secret)
        status = twitter_api.PostUpdate(update)  # Do the update
        msg = status.text  # Save the posted text to the variable 'msg'
        # Save it to the database
        self.logToDatabase(msg, time.time())
        print msg

x = TwitterC()
x.updateTwitterStatus([{"url": "http://xxxx.com/?cat=31", "msg": "See some strings..., "}])
```
The error is:
```
Traceback (most recent call last):
File "C:\Documents and Settings\anlopes\workspace\redes_soc\src\twitterC.py", line 42, in <module>
x.updateTwitterStatus([{"url": "http://xxxx.com/?cat=31", "msg": "See some strings..., "}])
File "C:\Documents and Settings\anlopes\workspace\redes_soc\src\twitterC.py", line 28, in updateTwitterStatus
short = self.shortUrl(update["url"]) # Vou encurtar o URL
TypeError: list indices must be integers, not str
```
Any clues on how to solve it?
Best Regards,
|
2011/03/30
|
[
"https://Stackoverflow.com/questions/5484098",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/488735/"
] |
It looks like your call to updateTwitterStatus just needs to lose the square brackets:
```
x.updateTwitterStatus({"url": "http://xxxx.com/?cat=31", "msg": "See some strings..., "})
```
You were passing a list with a single dictionary element. It looks as though the method just requires a dictionary with "url" and "msg" keys.
In Python, `{...}` creates a dictionary, and `[...]` creates a list.
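A minimal reproduction of the difference:
```
wrapped = [{"url": "http://xxxx.com/?cat=31"}]  # a list containing one dict
print(wrapped[0]["url"])   # works: index the list first, then the dict

plain = {"url": "http://xxxx.com/?cat=31"}      # just the dict
print(plain["url"])        # works directly

# wrapped["url"] would raise: TypeError: list indices must be integers, not str
```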
|
The error message tells you everything you need to know. It says "list indices must be integers, not str" and points to the code `short = self.shortUrl(update["url"])`. So obviously the python interpreter thinks `update` is a list, and `"url"` is not a valid index into the list.
Since `update` is passed in as a parameter we have to see where it came from. It looks like `[{...}]`, which means it's a list with a single dictionary inside. Presumably you intended to pass just the dictionary, so remove the square brackets when calling `x.updateTwitterStatus`
The first rule of debugging is to assume that the error message is correct, and that you should take it literally.
| 14,913
|
7,606,062
|
For example, if a Python script spits out a string giving the path of a newly written file that I'm going to edit immediately after running the script, it would be very nice to have it sent directly to the system clipboard rather than to `STDOUT`.
|
2011/09/30
|
[
"https://Stackoverflow.com/questions/7606062",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/560844/"
] |
You can use an external program, [`xsel`](http://www.vergenet.net/~conrad/software/xsel/):
```
from subprocess import Popen, PIPE
p = Popen(['xsel','-pi'], stdin=PIPE)
p.communicate(input='Hello, World')
```
With `xsel`, you can set the clipboard you want to work on.
* `-p` works with the `PRIMARY` selection. That's the middle click one.
* `-s` works with the `SECONDARY` selection. I don't know if this is used anymore.
* `-b` works with the `CLIPBOARD` selection. That's your `Ctrl + V` one.
Read more about X's clipboards [here](http://standards.freedesktop.org/clipboards-spec/clipboards-latest.txt) and [here](https://superuser.com/questions/200444/why-do-we-have-3-types-of-x-selections-in-linux).
A quick and dirty function I created to handle this:
```
def paste(str, p=True, c=True):
    from subprocess import Popen, PIPE

    if p:
        p = Popen(['xsel', '-pi'], stdin=PIPE)
        p.communicate(input=str)
    if c:
        p = Popen(['xsel', '-bi'], stdin=PIPE)
        p.communicate(input=str)

paste('Hello', False)    # pastes to CLIPBOARD only
paste('Hello', c=False)  # pastes to PRIMARY only
paste('Hello')           # pastes to both
```
---
You can also try pyGTK's [`clipboard`](http://www.pygtk.org/docs/pygtk/class-gtkclipboard.html) :
```
import pygtk
pygtk.require('2.0')
import gtk
clipboard = gtk.clipboard_get()
clipboard.set_text('Hello, World')
clipboard.store()
```
This works with the `Ctrl + V` selection for me.
|
This is not really a Python question but a shell question. You can already send the output of a Python script (or any command) to the clipboard instead of standard out by piping the output of the Python script into the `xclip` command.
```
myscript.py | xclip
```
If `xclip` is not already installed on your system (it isn't by default), this is how you get it:
```
sudo apt-get install xclip
```
If you wanted to do it directly from your Python script I guess you could shell out and run the xclip command using `os.system()` which is simple but deprecated. There are a number of ways to do this (see the `subprocess` module for the current official way). The command you'd want to execute is something like:
```
echo -n /path/goes/here | xclip
```
Bonus: Under Mac OS X, you can do the same thing by piping into `pbcopy`.
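For completeness, a sketch of the same `xclip` pipe done from inside the script with the `subprocess` module mentioned above (Python 3.5+):
```
import subprocess

path = '/path/goes/here'
subprocess.run(['xclip', '-selection', 'clipboard'], input=path.encode())
```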
| 14,914
|
46,062,117
|
Following [the plotly directions](https://plot.ly/python/distplot/), I would like to plot something similar to the following code:
```
import plotly.plotly as py
import plotly.figure_factory as ff
import numpy as np
# Add histogram data
x1 = np.random.randn(200) - 2
x2 = np.random.randn(200)
x3 = np.random.randn(200) + 2
x4 = np.random.randn(200) + 4
# Group data together
hist_data = [x1, x2, x3, x4]
group_labels = ['Group 1', 'Group 2', 'Group 3', 'Group 4']
# Create distplot with custom bin_size
fig = ff.create_distplot(hist_data, group_labels, bin_size = [.1, .25, .5, 1])
# Plot!
py.iplot(fig, filename = 'Distplot with Multiple Bin Sizes')
```
However, I have a real-world dataset that is uneven in sample size (i.e. the count of group 1 is different from the count of group 2, etc.). Furthermore, it's in name-value pair format.
Here is some dummy data to illustrate:
```
# Add histogram data
x1 = pd.DataFrame(np.random.randn(100))
x1['name'] = 'x1'
x2 = pd.DataFrame(np.random.randn(200) + 1)
x2['name'] = 'x2'
x3 = pd.DataFrame(np.random.randn(300) - 1)
x3['name'] = 'x3'
df = pd.concat([x1, x2, x3])
df = df.reset_index(drop = True)
df.columns = ['value', 'names']
df
```
As you can see, each name (x1, x2, x3) has a different count, and also the "names" column is what I would like to use as the color.
Does anyone know how I can plot this in plotly?
FYI, in R this is very simple: I would just call ggplot with `aes(fill = names)`.
Any help would be appreciated, thank you!
|
2017/09/05
|
[
"https://Stackoverflow.com/questions/46062117",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3604836/"
] |
You could try slicing your dataframe and then putting it into Plotly.
```
fig = ff.create_distplot([df[df.names == a].value for a in df.names.unique()], df.names.unique(), bin_size=[.1, .25, .5, 1])
```
---
[](https://i.stack.imgur.com/IsX83.png)
```
import plotly
import plotly.figure_factory as ff
import pandas as pd
import numpy as np

plotly.offline.init_notebook_mode()
x1 = pd.DataFrame(np.random.randn(100))
x1['name']='x1'
x2 = pd.DataFrame(np.random.randn(200)+1)
x2['name']='x2'
x3 = pd.DataFrame(np.random.randn(300)-1)
x3['name']='x3'
df=pd.concat([x1,x2,x3])
df=df.reset_index(drop=True)
df.columns = ['value','names']
fig = ff.create_distplot([df[df.names == a].value for a in df.names.unique()], df.names.unique(), bin_size=[.1, .25, .5, 1])
plotly.offline.iplot(fig, filename='Distplot with Multiple Bin Sizes')
```
|
The [example](https://plot.ly/python/distplot/) in [`plotly`](https://plot.ly/python/)'s documentation works out of the box for uneven sample sizes too:
```
#!/usr/bin/env python
import plotly
import plotly.figure_factory as ff
plotly.offline.init_notebook_mode()
import numpy as np
# data with different sizes
x1 = np.random.randn(300)-2
x2 = np.random.randn(200)
x3 = np.random.randn(4000)+2
x4 = np.random.randn(50)+4
# Group data together
hist_data = [x1, x2, x3, x4]
# use custom names
group_labels = ['x1', 'x2', 'x3', 'x4']
# Create distplot with custom bin_size
fig = ff.create_distplot(hist_data, group_labels, bin_size=.2)
# change that if you don't want to plot offline
plotly.offline.plot(fig, filename='Distplot with Multiple Datasets')
```
The above script will produce the following result:
---
[](https://i.stack.imgur.com/0b3Ar.png)
| 14,920
|
74,513,701
|
I am an R user who is trying to learn more about Python.
I found this Python library that I would like to use for address parsing: <https://github.com/zehengl/ez-address-parser>
I was able to try an example over here:
```
from ez_address_parser import AddressParser
ap = AddressParser()
result = ap.parse("290 Bremner Blvd, Toronto, ON M5V 3L9")
print(result)
[('290', 'StreetNumber'), ('Bremner', 'StreetName'), ('Blvd', 'StreetType'), ('Toronto', 'Municipality'), ('ON', 'Province'), ('M5V', 'PostalCode'), ('3L9', 'PostalCode')]
```
I have the following file that I imported:
```
df = pd.read_csv(r'C:/Users/me/OneDrive/Documents/my_file.csv', encoding='latin-1')
name address
1 name1 290 Bremner Blvd, Toronto, ON M5V 3L9
2 name2 291 Bremner Blvd, Toronto, ON M5V 3L9
3 name3 292 Bremner Blvd, Toronto, ON M5V 3L9
```
I then applied the above function and exported the file, and everything worked:
```
df['Address_Parse'] = df['ADDRESS'].apply(ap.parse)
df = pd.DataFrame(df)
df.to_csv(r'C:/Users/me/OneDrive/Documents/python_file.csv', index=False, header=True)
```
**Problem:** I now have another file (similar format) - but this time, I am getting an error:
```
df1 = pd.read_csv(r'C:/Users/me/OneDrive/Documents/my_file1.csv', encoding='latin-1')
df1['Address_Parse'] = df1['ADDRESS'].apply(ap.parse)
AttributeError: 'float' object has no attribute 'replace'
```
I am confused as to why the same code will not work for this file. As I am still learning Python, I am not sure where to begin to debug this problem. My guesses are that perhaps there are special characters in the second file, formatting issues or incorrect variable types that are preventing this `ap.parse` function from working, but I am still not sure.
Can someone please show me what to do?
Thank you!
|
2022/11/21
|
[
"https://Stackoverflow.com/questions/74513701",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13203841/"
] |
It's not possible to completely remove the authorization prompt but you could make it appear only one time for each user by publishing your script as an Editor add-on.
1. Create a Google Cloud standard project (GCSP) and add the OAuth consent screen.
2. Link the GCSP to the Google Apps Script project.
3. Deploy the script as an Editor Add-on.
4. On the GCSP add the Google Workspace Marketplace SDK, configure it and publish to the Google Workspace Marketplace.
Related
* [Deploy and use Google Sheets add-on with Google Apps Script](https://stackoverflow.com/q/22664144/1595451)
* [Is it possible to publish an Add-on for internal use without approval process?](https://stackoverflow.com/q/28990006/1595451)
* [Publish an add-on privately](https://stackoverflow.com/q/45888142/1595451)
Reference
* [Authorization for Google Services](https://developers.google.com/apps-script/guides/services/authorization)
* [OAuth Client Verification](https://developers.google.com/apps-script/guides/client-verification)
* [Publish an add-on](https://developers.google.com/apps-script/add-ons/how-tos/publish-add-on-overview)
|
It's just a routine security procedure. If they trust you, then there's no issue in them accepting it, it's just a warning if you don't know the coder
| 14,921
|
52,683,832
|
```
Revenue = [400000000, 10000000, 10000000000, 10000000]
s1 = []
for x in Revenue:
    message = (','.join(['{:,.0f}'.format(x)]).split())
    s1.append(message)
print(s1)
```
The output I am getting is something like this: `[['400,000,000'], ['10,000,000'], ['10,000,000,000'], ['10,000,000']]`, and I want it to be `[400,000,000, 10,000,000, 10,000,000,000, 10,000,000]`.
Can someone please help me with this? I am new to Python.
|
2018/10/06
|
[
"https://Stackoverflow.com/questions/52683832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10431728/"
] |
If your goal is just to add in the commas, you will be stuck with the quotes, due to the fact each item is going to be a `str`, but you can eliminate that nesting by using a simpler *list comprehension*:
```
Revenue = [400000000,10000000,10000000000,10000000]
l = ['{:,}'.format(i) for i in Revenue]
# ['400,000,000', '10,000,000', '10,000,000,000', '10,000,000']
```
You could also unpack the list into variables and then print each variable without `quotes`
```
v, w, x, y = l
print(v)
# 400,000,000
```
You can `print` the unpacked list but that will just be output
```
print(*l)
# 400,000,000 10,000,000 10,000,000,000 10,000,000
```
Expanded Loop:
```
l = []
for i in Revenue:
    l.append('{:,}'.format(i))
```
|
I'm not sure why you want the output you've shown, because it is hard to read, but here is how to make it:
```
>>> Revenue = [400000000,10000000,10000000000,10000000]
>>> def revenue_formatted(rev):
...     return "[" + ", ".join("{:,d}".format(n) for n in rev) + "]"
...
>>> print(revenue_formatted(Revenue))
[400,000,000, 10,000,000, 10,000,000,000, 10,000,000]
```
| 14,922
|
56,840,250
|
I am making an adventure game in Python 3.7.3, and I am using f-strings for some of my print statements. When running it in the terminal and in Sublime Text, the f-strings give me an error.
```
import time
from time import sleep
import sys
def printfast(str):
    for letter in str:
        sys.stdout.write(letter)
        sys.stdout.flush()
        time.sleep(0.04)
name = input("\nWhat is your name?\n\n")
printfast(f("You are the mighty hero {name}. In front of you, there is a grand palace, containing twisting marble spires and spiraling dungeons.\n")
```
|
2019/07/01
|
[
"https://Stackoverflow.com/questions/56840250",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11723707/"
] |
You're doing it wrong. `f` isn't a function, it's a syntactic prefix. Whereas regular quotes `"` indicate the beginning of a regular string (or the end of any type of string), the token `f"` indicates the beginning of a format string in particular. The same idea goes for raw strings, indicated by `r"`, or binary strings, indicated by `b"`.
Instead of
```
f("You are...")
```
do
```
f"You are..."
```
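A tiny self-contained demonstration:
```
name = "Ada"
print(f"You are the mighty hero {name}.")  # -> You are the mighty hero Ada.
```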
|
In `f("You ...)`, you're calling a function named `f` with the string as input parameter, and you don't have such a function hence the error.
You need to drop the enclosing parentheses `()` to make it a f-string:
```
f"You are the mighty hero {name}. In front of you, there is a grand palace, containing twisting marble spires and spiraling dungeons.\n"
```
| 14,923
|
36,985,391
|
I am creating an SQL query in Python of this sort:
```
select lastupdatedatetime from auth_principal_entity where lastupdateddatetime < '02-05-16 03:46:51:527000000 PM'
```
When it is executed, escape sequences get added and the query returns no results.
When we print it to stdout it looks perfect, but the string Python actually holds contains escape sequences that I don't want in the executed command:
```
'select lastupdatedatetime from auth_principal_entity where lastupdateddatetime < \\'02-05-16 03:50:14:388000000 PM\\''
```
|
2016/05/02
|
[
"https://Stackoverflow.com/questions/36985391",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2576170/"
] |
The escape sequences won't cause any problem in `cursor.execute(query)`.
The real issue is that the date being sent as a string is compared against values in the db that are date objects.
So something like this should work:
```
query = "SELECT LASTUPDATEDDATETIME FROM AUTH_PRINCIPAL_ENTITY WHERE LASTUPDATEDDATETIME < to_date('03-May-16', 'dd-mon-yy')"
```
Or
```
date_ = datetime.datetime.now().strftime('%d-%b-%y')
query = "SELECT LASTUPDATEDDATETIME FROM AUTH_PRINCIPAL_ENTITY WHERE LASTUPDATEDDATETIME < to_date('{}', 'dd-mon-yy')".format(date_)
```
Try that. Should work for you :-)
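A more robust alternative than splicing the timestamp into the SQL text is to pass it as a bind parameter and let the driver handle the quoting. A sketch, assuming an Oracle-style DB-API driver such as cx_Oracle and an existing `cursor`:
```
import datetime

query = ("SELECT lastupdateddatetime FROM auth_principal_entity "
         "WHERE lastupdateddatetime < :cutoff")
cursor.execute(query, {'cutoff': datetime.datetime.now()})
```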
|
For me, I wrap my SQL statements in triple quotes so they don't run into these issues when I execute:
```
query = """
select lastupdatedatetime from auth_principal_entity where lastupdateddatetime < '02-05-16 03:46:51:527000000 PM'
"""
```
| 14,925
|
46,247,340
|
I am currently running tests between XGBoost/lightGBM for their ability to rank items. I am reproducing the benchmarks presented here: <https://github.com/guolinke/boosting_tree_benchmarks>.
I have been able to successfully reproduce the benchmarks mentioned in their work. I want to make sure that I am correctly implementing my own version of the ndcg metric and also understanding the ranking problem correctly.
My questions are:
1. When creating the validation for the test set using ndcg - there is a test.group file that says the first X rows are group 0, etc. To get the recommendations for the group, I get the predicted values and known relevance scores and sort that list by descending predicted values for each group?
2. In order to get the final ndcg scores from the lists created above - do I get the ndcg scores and take the mean over all the scores? Is this the same evaluation methodology that XGBoost/lightGBM in the evaluation phase?
Here is my methodology for evaluating the test set after the model has finished training.
For the final tree when I run `lightGBM` I obtain these values on the validation set:
```
[500] valid_0's ndcg@1: 0.513221 valid_0's ndcg@3: 0.499337 valid_0's ndcg@5: 0.505188 valid_0's ndcg@10: 0.523407
```
My final step is to take the predicted output for the test set and calculate the ndcg values for the predictions.
Here is my python code for calculating ndcg:
```
import numpy as np

def dcg_at_k(r, k):
    r = np.asfarray(r)[:k]
    if r.size:
        return np.sum(np.subtract(np.power(2, r), 1) / np.log2(np.arange(2, r.size + 2)))
    return 0.

def ndcg_at_k(r, k):
    idcg = dcg_at_k(sorted(r, reverse=True), k)
    if not idcg:
        return 0.
    return dcg_at_k(r, k) / idcg
```
After I get the predictions for the test set for a particular group (**GROUP-0**) I have these predictions:
```
query_id predict
0 0 (2.0, -0.221681199441)
1 0 (1.0, 0.109895548348)
2 0 (1.0, 0.0262799346312)
3 0 (0.0, -0.595343431322)
4 0 (0.0, -0.52689043426)
5 0 (0.0, -0.542221350664)
6 0 (1.0, -0.448015576024)
7 0 (1.0, -0.357090949646)
8 0 (0.0, -0.279677741045)
9 0 (0.0, 0.2182200869)
```
**NOTE**
**Group-0** actually has about 112 rows.
I then sort the list of tuples in descending order which provides a list of relevance scores:
```
def get_recommendations(x):
    sorted_list = sorted(list(x), key=lambda i: i[1], reverse=True)
    return [k for k, _ in sorted_list]

relevance = evaluation.groupby('query_id').predict.apply(get_recommendations)
query_id
0 [4.0, 2.0, 2.0, 3.0, 2.0, 2.0, 2.0, 2.0, 2.0, ...
1 [4.0, 2.0, 2.0, 2.0, 1.0, 1.0, 3.0, 2.0, 1.0, ...
2 [2.0, 3.0, 2.0, 2.0, 1.0, 0.0, 2.0, 2.0, 1.0, ...
3 [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, ...
4 [1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, ...
```
Finally, for each query id I calculated the ndcg scores on the relevance list and then took the mean of all the ndcg scores calculated for each query id:
```
relevance.apply(lambda x: ndcg_at_k(x, 10)).mean()
```
The value I obtain is `~0.497193`.
|
2017/09/15
|
[
"https://Stackoverflow.com/questions/46247340",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2800840/"
] |
Cross-posting my Cross Validated answer to this cross-posted question:
<https://stats.stackexchange.com/questions/303385/how-does-xgboost-lightgbm-evaluate-ndcg-metric-for-ranking/487487#487487>
---
I happened across this myself, and finally dug into the code to figure it out.
The difference is the handling of a missing IDCG. Your code returns 0, while [LightGBM is treating that case as a 1](https://github.com/microsoft/LightGBM/blob/ac5f5e56d012b1f435f1a52cbd6600f100ffa187/src/metric/rank_metric.hpp#L97).
The following code produced matching results for me:
```py
import numpy as np

def dcg_at_k(r, k):
    r = np.asfarray(r)[:k]
    if r.size:
        return np.sum(np.subtract(np.power(2, r), 1) / np.log2(np.arange(2, r.size + 2)))
    return 0.

def ndcg_at_k(r, k):
    idcg = dcg_at_k(sorted(r, reverse=True), k)
    if not idcg:
        return 1.  # CHANGE THIS
    return dcg_at_k(r, k) / idcg
```
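With that change, a query whose documents are all irrelevant scores 1 instead of 0, which is what brings the averages in line with LightGBM's reported values:
```
print(ndcg_at_k([0.0, 0.0, 0.0], 10))  # 1.0, matching LightGBM (was 0.0 before)
print(ndcg_at_k([2.0, 1.0, 0.0], 10))  # 1.0; queries with a nonzero IDCG are unchanged
```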
|
I think the problem is caused by data in the same query that have the same labels.
In that case, both XGBoost and LightGBM will produce an NDCG of 1 for that query.
| 14,926
|
15,854,257
|
I am new to Python. As part of writing a module to scrape URLs, I noticed that what I get using the Python requests module can be different from what I get if I load the URL in a browser. This is because the page could contain JS code which is executed, and the result is what I see in the browser.
My questions:
1. How do I deal with such sites?
   1. Is Python, or any other module, limited to just getting static pages or pages completely rendered on the server side?
   2. How do I deal with pages that do Ajax-style queries to load content?
I am assuming that there probably isn't a library for this and that I have to do something on my own. I hope I don't have to build something like WebKit into my code :)
Thanks for any help.
|
2013/04/06
|
[
"https://Stackoverflow.com/questions/15854257",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1645536/"
] |
The best way to go about this might be, to use [css-gradients](https://developer.mozilla.org/en-US/docs/CSS/gradient) instead of shadows.
I have done a little demo on [jsfiddle](http://jsfiddle.net/Qjgps/1/). I am not sure this is what you are looking for though. Here is the css I used:
```
background: rgb(254,255,255); /* Old browsers */
background: -moz-linear-gradient(top, rgba(254,255,255,1) 69%, rgba(226,226,226,1) 100%); /* FF3.6+ */
background: -webkit-gradient(linear, left top, left bottom, color-stop(69%,rgba(254,255,255,1)), color-stop(100%,rgba(226,226,226,1))); /* Chrome,Safari4+ */
background: -webkit-linear-gradient(top, rgba(254,255,255,1) 69%,rgba(226,226,226,1) 100%); /* Chrome10+,Safari5.1+ */
background: -o-linear-gradient(top, rgba(254,255,255,1) 69%,rgba(226,226,226,1) 100%); /* Opera 11.10+ */
background: -ms-linear-gradient(top, rgba(254,255,255,1) 69%,rgba(226,226,226,1) 100%); /* IE10+ */
background: linear-gradient(to bottom, rgba(254,255,255,1) 69%,rgba(226,226,226,1) 100%); /* W3C */
filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#feffff', endColorstr='#e2e2e2',GradientType=0 ); /* IE6-9
```
As generated by [this](http://www.colorzilla.com/gradient-editor/) tool
|
I came up with this using [this CSS3 Generator](http://css3generator.com/)
```
-webkit-box-shadow: inset 0px -350px 200px -250px rgba(5, 5, 5, 1);
box-shadow: inset 0px -350px 200px -250px rgba(5, 5, 5, 1);
```
This is a very cross-browser friendly method and if you apply a background color it will achieve what I believe is your desired result.
Check out this [jsFiddle](http://jsfiddle.net/Qjgps/4/)
**Source(s)**
[CSS3 Generator](http://css3generator.com/)
| 14,927
|
15,671,875
|
I seem to be having some difficulty getting what I want to work. Basically, I have a series of variables that are assigned strings containing some quotes and `\` characters. I want to remove the quotes so I can embed the values inside a JSON doc, since JSON (via the Python dump methods) hates stray quotes.
I figured it would be easy: just determine how to remove the characters, then write a simple for loop for the variable substitution. Well, it didn't work that way.
Here is what I want to do.
There is a variable called "MESSAGE23" that contains "com.centrify.tokend.cac". I want to strip out the quotes, which to me is easy: a simple `echo $opt | sed "s/\"//g"`. When I do this from the command line:
```
$> MESSAGE23="com."apple".cacng.tokend is present"
$> MESSAGE23=`echo $MESSAGE23 | sed "s/\"//g"`
$> com.apple.cacng.tokend is present
```
This works. I get the properly formatted string.
When I then try to throw this into a loop, all hell breaks loose.
```
for i to {1..25}; do
MESSAGE$i=`echo $MESSAGE$i | sed "s/\"//g"`
done
```
This doesn't work (it either throws a bunch of indexes out or nothing), and I'm pretty sure I just don't know enough about `arg`, `eval`, or other bash substitution mechanisms.
But basically I want to do this for another set of variables with the same problems, where I strip out the quotes and, incidentally, the `\` too.
Any help would be greatly appreciated.
|
2013/03/28
|
[
"https://Stackoverflow.com/questions/15671875",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/933693/"
] |
You can't do that. You could make it work using `eval`, but that introduces another level of quoting you have to worry about. Is there some reason you can't use an array?
```
MESSAGE=("this is MESSAGE[0]" "this is MESSAGE[1]")
MESSAGE[2]="I can add more, too!"

for (( i=0; i<${#MESSAGE[@]}; ++i )); do
    echo "${MESSAGE[i]}"
done
```
Otherwise you need something like this:
```
eval 'echo "$MESSAGE'"$i"'"'
```
and it just gets worse from there.
|
First, a couple of preliminary problems: `MESSAGE23="com."apple".cacng.tokend is present"` will not embed double-quotes in the variable value, use `MESSAGE23="com.\"apple\".cacng.tokend is present"` or `MESSAGE23='com."apple".cacng.tokend is present'` instead. Second, you should almost always put double-quotes around variable expansions (e.g. `echo "$MESSAGE23"`) to prevent parsing oddities.
Now, the real problems: the shell doesn't allow variable substitution on the left side of an assignment (i.e. `MESSAGE$i=something` won't work). Fortunately, it does allow this in a `declare` statement, so you can use that instead. Also, when the shell sees `$MESSAGE$i` it replaces it with the value of `$MESSAGE` followed by the value of `$i`; for this you need to use indirect expansion (`${!metavariable}`).
```
for i in {1..25}; do
    varname="MESSAGE$i"
    declare $varname="$(echo "${!varname}" | tr -d '"')"
done
```
(Note that I also used `tr` instead of `sed`, but that's just my personal preference.)
(Also, note that @Mark Reed's suggestion of an array is really the better way to do this sort of thing.)
| 14,928
|
46,721,993
|
I have a shell script that I use to export Env variables. This script calls a python script to get certain values from a web service that I need to store before running my primary python script.
I've tried using a `RUN . /bot/env/setenv.sh`, but this doesn't seem to make the env variables available in the final container. I've tried putting the contents in an `entrypoint.sh` file that ends in calling `python jbot.py`, but the container never completes its setup (I assume because the script inside the entrypoint is a continuous loop?)
My `entrypoint.sh` looks like this:
```
#!/bin/bash
. /jirabot/env/setenv.sh
python jbot.py
```
And the `setenv.sh` is just:
```
#!/bin/bash
export SLACK_BOT_TOKEN="xoxb-token"
export BOT_ID=`python env/print_bot_id.py ${SLACK_BOT_TOKEN}`
```
My full Dockerfile is:
```
FROM python:2
COPY jirabot/ /jirabot/
RUN pip install slackclient schedule jira
WORKDIR /jirabot
#CMD [ "python", "jbot.py" ]
ENTRYPOINT [ "/jirabot/entrypoint.sh" ]
```
When I do `docker run bot`, I can verify that the application is running (the bot responds to my requests appropriately). However, all of the `print()` statements within `jbot.py` are absent from the output -- so I have two primary questions:
1. Why does my `entrypoint.sh` keep the container from returning? I do `docker run bot`, and I'm never given back control of the terminal. However, the bot seems to start up fine.
2. Why do I not get any of my print statements from `jbot.py` when I open a second terminal and do `docker logs <container>`?
fwiw, my `jbot.py` is a `while True:` loop, monitoring for input.
|
2017/10/13
|
[
"https://Stackoverflow.com/questions/46721993",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1143724/"
] |
1. You are not running the docker container [as a daemon](https://docs.docker.com/engine/reference/run/#detached--d).
>
> docker run -d bot
>
>
>
2. In my experience, print messages don't make it to the logs without output buffering disabled in Python.
>
> python -u jbot.py
>
>
>
|
For your first question, you should check the documentation for `docker run`. In short, you are attached to the container, so you will never return to your terminal. In order to detach, you need to add the option `-d`.
The commonly used command to launch a container is `docker run -idt <container>`.
For your second question, there is not enough information to identify the problem, sorry. Maybe you can try again after launching the container properly.
| 14,929
|
52,911,986
|
I am working with ROS on Ubuntu 16.04. Because of this I am working with a virtual environment for Python 2.7 and the ROS Python modules (rospy, for example). The "python.pythonPath" is set to the virtual environment and the ROS modules are linked through "python.autoComplete.extraPaths".
This leads to the issue where the Python linter raises an error for `import rospy`, claiming that it cannot import it. However, Python IntelliSense is still able to detect and help with the rospy module (which makes sense due to the python.autoComplete.extraPaths setting).
Is there a way to include the extra paths for autoComplete for the linter as well? At this point, no longer including the virtual environment for the python path is not a desirable option so I am looking for a way to have the linter include the extra paths for ros python modules and the modules in the virtual environment.
|
2018/10/21
|
[
"https://Stackoverflow.com/questions/52911986",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10534965/"
] |
I've never used ProBuilder, but in other 3D apps you need to delete the inner polygons to be able to cap that face: select all the polygons in the "well" and delete them, and you will then have a hole to fill.

|
**Update**

I found a more involved but accurate way to do this:

1. Unity editor > Tools > ProBuilder > Editor > Open Vertex Position Editor. When I select vertices, the position editor lets me put the selected positions into a txt file.
2. Unity editor > Tools > ProBuilder > Editor > Open New Shape Editor > Shape Selector > Custom, then build the face from the positions in the txt file.

**Attention**: the "Custom" point order is very strange. For example, to build a cube with "Custom", the point order is top-left, top-right, bottom-left, bottom-right. In my understanding, the correct order is: for a face in the x-z plane, place the points of the edge that is more nearly parallel to the x axis first, going from the x < 0 (left) side to the x > 0 (right) side, and then build the other edge parallel to the x axis the same way.

Then select the newly created face and the old object, and use ProBuilder toolbar > Merge Objects to merge them into one.

I did all of the above with the ProBuilder API, since it's open source (though undocumented), but filling a face is a rare case, so I think doing it in the ProBuilder GUI is enough.

**Old answer**

My temporary solution is to use ProBuilder > New Poly Shape and manually click the 4 vertices to make the face, but it's not perfect; it leaves a seam along the edge.
| 14,930
|
56,443,552
|
I want to serve my Django project with uWSGI on Ubuntu Server, but it doesn't run.
I am using Python 3.6, but uWSGI shows me it's using 2.7.
I changed the default python to python3.6, but uWSGI still doesn't work.
This is my command :
```
uwsgi --http :8001 --home /home/ubuntu/repository/env \
      --chdir /home/ubuntu/repository/project -w project.wsgi
```
This is Error message :
```
*** Starting uWSGI 2.0.18 (64bit) on [Tue Jun 4 21:03:58 2019] ***
compiled with version: 5.4.0 20160609 on 04 June 2019 11:39:14
os: Linux-4.4.0-1079-aws #89-Ubuntu SMP Tue Mar 26 15:25:52 UTC 2019
nodename: ip-172-31-18-239
machine: x86_64
clock source: unix
detected number of CPU cores: 2
current working directory: /home/ubuntu/repository/charteredbus
*** running under screen session 1636.sbus ***
detected binary path: /usr/local/bin/uwsgi
!!! no internal routing support, rebuild with pcre support !!!
chdir() to /home/ubuntu/repository/charteredbus
*** WARNING: you are running uWSGI without its master process manager ***
your processes number limit is 15738
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uWSGI http bound on :8001 fd 4
spawned uWSGI http 1 (pid: 8402)
uwsgi socket 0 bound to TCP address 127.0.0.1:39614 (port auto-assigned) fd 3
Python version: 2.7.12 (default, Nov 12 2018, 14:36:49) [GCC 5.4.0 20160609]
Set PythonHome to /home/ubuntu/repository/env
ImportError: No module named site
```
|
2019/06/04
|
[
"https://Stackoverflow.com/questions/56443552",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11598754/"
] |
Unfortunately, uWSGI has to be compiled with python version matching with your virtualenv. That means: if uWSGI was compiled with python 2.7, you cannot use python 3.6 in your virtualenv (and in your Django app).
Fortunately, there are some methods to fix that:
* Installing uWSGI inside your virtualenv and using that uWSGI binary to run Django.
* Using Python as a plugin to uWSGI.
First one is pretty straightforward. All you need to do is change path to uWSGI binary in your startup script to point to uWSGI installed in your virtualenv. (If you're starting uWSGI using systemd, I recommend systemd user units. Just don't forget to run `loginctl enable-linger`)
Second one is not that complicated. First you have to install uWSGI without python plugin, then install separate plugins for all python versions you will need. More on that you can find [here](https://uwsgi-docs.readthedocs.io/en/latest/WSGIquickstart.html#bonus-multiple-python-versions-for-the-same-uwsgi-binary). There are probably ready plugins in your system package repository if you're using uWSGI from it.
|
The log tells you there is no module named site:
>
> ImportError: No module named site
>
>
>
I assume `site` is a Django app.
Did you register it in your INSTALLED\_APPS (settings.py)?
Otherwise you may need to register your app. (apps.py in the site app)
Please let me know if I helped you.
Jasper
| 14,932
|
20,200,307
|
I have been using GAE for a long time but cannot find the maximum length of a ListProperty.
I have read the [documentation](https://developers.google.com/appengine/docs/python/datastore/typesandpropertyclasses#ListProperty) but found no solution. I want to create a ListProperty(long) to keep about 30 long values or more. I want to use this field as a filter - can I use it similarly to StringListProperty?
What are the size limits of ListProperty(long)?
|
2013/11/25
|
[
"https://Stackoverflow.com/questions/20200307",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/665926/"
] |
I have a list of 20K strings (not indexed though). I don't think there is a limit on the list length, but there is a limit on the overall entity size. Be careful when indexing multi-value properties; it can be expensive.
|
30 will be fine.
Guido's answer to a related question: <https://stackoverflow.com/a/15418435/1279005>
So up to 100 repeated values will be fine.
Repeated properties are much easier to understand using NDB, I think. You should try it.
It does not matter whether you use it with long or string properties - if the property is indexed, you'll be able to filter by it.
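If you do try NDB, a minimal sketch (the model and field names are hypothetical) of a repeated, indexed long property would be:
```
from google.appengine.ext import ndb

class Item(ndb.Model):
    # holds ~30 (or more) long values; indexed by default
    values = ndb.IntegerProperty(repeated=True)

# filtering works like a membership test on the list:
items = Item.query(Item.values == 42).fetch()
```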
| 14,934
|
58,260,903
|
I tried various programs to get the required pattern (given below). The program which came closest to the required result is:
**Input:**
```
for i in range(1,6):
for j in range(i,i*2):
print(j, end=' ')
print( )
```
**Output:**
```
1
2 3
3 4 5
4 5 6 7
5 6 7 8 9
```
**Required Output:**
```
1
2 3
4 5 6
7 8 9 10
```
Can I get a hint to get the required output?
Note - I'm a newbie to python.
|
2019/10/06
|
[
"https://Stackoverflow.com/questions/58260903",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9957954/"
] |
Store the printed value outside of the loop, then increment after it's printed:
```
v = 1
lines = 4
for i in range(1, lines + 1):
for j in range(i):
print(v, end=' ')
v += 1
print( )
```
|
If you don't want to keep track of the count and solve this mathematically and be able to directly calculate any n-th line, the formula you are looking for is the one for, well, [triangle numbers](https://en.wikipedia.org/wiki/Triangular_number):
```
triangle = lambda n: n * (n + 1) // 2
for line in range(1, 5):
t = triangle(line)
print(' '.join(str(x+1) for x in range(t-line, t)))
# 1
# 2 3
# 4 5 6
# 7 8 9 10
```
| 14,937
|
43,043,437
|
I'm trying to create a wordcloud from a csv file. The csv file, as an example, has the following structure:
```
a,1
b,2
c,4
j,20
```
It has more rows, more or less 1800. The first column has string values (names) and the second column has their respective frequency (int). Then, the file is read and the key,value row is stored in a dictionary (d) because later on we will use this to plot the wordcloud:
```py
reader = csv.reader(open('namesDFtoCSV', 'r',newline='\n'))
d = {}
for k,v in reader:
d[k] = v
```
Once we have the dictionary full of values, I try to plot the wordcloud:
```py
#Generating wordcloud. Relative scaling value is to adjust the importance of a frequency word.
#See documentation: https://github.com/amueller/word_cloud/blob/master/wordcloud/wordcloud.py
wordcloud = WordCloud(width=900,height=500, max_words=1628,relative_scaling=1,normalize_plurals=False).generate_from_frequencies(d)
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.show()
```
But an error is thrown:
```
Traceback (most recent call last):
  File ".........../script.py", line 19, in <module>
    wordcloud = WordCloud(width=900,height=500, max_words=1628,relative_scaling=1,normalize_plurals=False).generate_from_frequencies(d)
  File "/usr/local/lib/python3.5/dist-packages/wordcloud/wordcloud.py", line 360, in generate_from_frequencies
    for word, freq in frequencies]
  File "/usr/local/lib/python3.5/dist-packages/wordcloud/wordcloud.py", line 360, in <listcomp>
    for word, freq in frequencies]
TypeError: unsupported operand type(s) for /: 'str' and 'float
```
Finally, the documentation says:
```py
def generate_from_frequencies(self, frequencies, max_font_size=None):
    """Create a word_cloud from words and frequencies.

    Parameters
    ----------
    frequencies : dict from string to float
        A contains words and associated frequency.

    max_font_size : int
        Use this font-size instead of self.max_font_size

    Returns
    -------
    self
    """
```
So, I don't understand why it's throwing this error if I met the requirements of the function. I hope someone can help me, thanks.
**Note**
I work with wordcloud 1.3.1
|
2017/03/27
|
[
"https://Stackoverflow.com/questions/43043437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7658581/"
] |
This is because the values in your dictionary are strings, but wordcloud expects integers or floats.
After I run your code then inspect your dictionary `d` I get the following.
```
In [12]: d
Out[12]: {'a': '1', 'b': '2', 'c': '4', 'j': '20'}
```
Note the `' '` around the numbers means these are really strings.
A hacky way to resolve this is to cast `v` to an `int` in your `for` loop, like:
```
d[k] = int(v)
```
I say this is hacky since it'll work on integers, but if you have floats in your input it may cause problems.
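A float-tolerant variant of the same loop, if your counts may not be whole numbers, is simply:
```
for k, v in reader:
    d[k] = float(v)  # handles both '20' and '20.5'
```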
Also, Python errors can be difficult to read. Your error above can be interpreted as
```
script.py", line 19
TypeError: unsupported operand type(s) for /: 'str' and 'float
```
>
> "There's a type error on or before line 19 of my file. Let me look at
> my data types to see if there is any mismatch between string and
> float..."
>
>
>
The code below works for me:
```
import csv
from wordcloud import WordCloud
import matplotlib.pyplot as plt
reader = csv.reader(open('namesDFtoCSV', 'r',newline='\n'))
d = {}
for k,v in reader:
d[k] = int(v)
#Generating wordcloud. Relative scaling value is to adjust the importance of a frequency word.
#See documentation: https://github.com/amueller/word_cloud/blob/master/wordcloud/wordcloud.py
wordcloud = WordCloud(width=900,height=500, max_words=1628,relative_scaling=1,normalize_plurals=False).generate_from_frequencies(d)
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.show()
```
|
```
# LEARNER CODE START HERE
file_c=""
for index, char in enumerate(file_contents):
if(char.isalpha()==True or char.isspace()):
file_c+=char
file_c=file_c.split()
file_w=[]
for word in file_c:
if word.lower() not in uninteresting_words and word.isalpha()==True:
file_w.append(word)
frequency={}
for word in file_w:
if word.lower() not in frequency:
frequency[word.lower()]=1
else:
frequency[word.lower()]+=1
#wordcloud
cloud = wordcloud.WordCloud()
cloud.generate_from_frequencies(frequency)
return cloud.to_array()
```
| 14,938
|
5,719,545
|
I am working on a web-crawler [using python].
The situation is, for example, that I am behind server-1 and I use a proxy setting to connect to the outside world. So in Python, using a proxy handler, I can fetch the urls.
Now the thing is, I am building a crawler, so I cannot use only one IP [otherwise I will be blocked]. To solve this, I have a bunch of proxies I want to shuffle through.
My question is: this is a two-level proxy setup - one proxy to connect to the main server-1, and then the proxies I want to shuffle through. How can I achieve this?
|
2011/04/19
|
[
"https://Stackoverflow.com/questions/5719545",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/715600/"
] |
**Update** Sounds like you're looking to connect to proxy A and from there initiate HTTP connections via proxies B, C, D which are outside of A. You might look into the [proxychains project](http://proxychains.sourceforge.net/) which says it can "tunnel any protocol via a user-defined chain of TOR, SOCKS 4/5, and HTTP proxies".
Version 3.1 is available as a package in Ubuntu Lucid. If it doesn't work directly for you, the [proxychains source code](http://prdownloads.sourceforge.net/proxychains/proxychains-3.1.tar.gz?download) may provide some insight into how this capability could be implemented for your app.
**Orig answer**:
Check out the [urllib2.ProxyHandler](http://docs.python.org/library/urllib2.html#urllib2.ProxyHandler). Here is an example of how you can use several different proxies to open urls:
```
import random
import urllib2
# put the urls for all of your proxies in a list
proxies = ['http://localhost:8080/']
# construct your list of url openers which each use a different proxy
openers = []
for proxy in proxies:
opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy}))
openers.append(opener)
# select a url opener randomly, round-robin, or with some other scheme
opener = random.choice(openers)
req = urllib2.Request(url)
res = opener.open(req)
```
|
I recommend you take a look at CherryProxy. It lets you send a proxy request to an intermediate server (where CherryProxy is running) and then forward your HTTP request to a proxy on a second-level machine (e.g. a squid proxy on another server) for processing. Voila! A two-level proxy chain.
<http://www.decalage.info/python/cherryproxy>
| 14,939
|
66,775,948
|
Some python packages won't work in python 3.7, so I wanted to downgrade the default python version in google colab. Is it possible to do this? If so, how should I proceed? Please guide me.
|
2021/03/24
|
[
"https://Stackoverflow.com/questions/66775948",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15069182/"
] |
You could install python 3.6 with `miniconda`:
```
%%bash
MINICONDA_INSTALLER_SCRIPT=Miniconda3-4.5.4-Linux-x86_64.sh
MINICONDA_PREFIX=/usr/local
wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT
chmod +x $MINICONDA_INSTALLER_SCRIPT
./$MINICONDA_INSTALLER_SCRIPT -b -f -p $MINICONDA_PREFIX
```
And add to path:
```
import sys
_ = (sys.path.append("/usr/local/lib/python3.6/site-packages"))
```
|
The code snippet below will download Python 3.6 without any Colab pre-installed libraries (such as Tensorflow). You can install them later with pip, like `!pip install tensorflow`. Please note that this won't downgrade your default python in colab; rather, it provides a workaround to work with other python versions in colab. To run any python script with the 3.6 version, use `!python3.6` instead of `!python`.
```
!add-apt-repository ppa:deadsnakes/ppa
!apt-get update
!apt-get install python3.6
!apt-get install python3.6-dev
!wget https://bootstrap.pypa.io/get-pip.py && python3.6 get-pip.py
import sys
sys.path[2] = '/usr/lib/python36.zip'
sys.path[3] = '/usr/lib/python3.6'
sys.path[4] = '/usr/lib/python3.6/lib-dynload'
sys.path[5] = '/usr/local/lib/python3.6/dist-packages'
sys.path[7] ='/usr/local/lib/python3.6/dist-packages/IPython/extensions'
```
| 14,940
|
53,972,642
|
I'm a lawyer and python beginner, so I'm both (a) dumb and (b) completely out of my lane.
I'm trying to apply a regex pattern to a text file. The pattern can sometimes stretch across multiple lines. I'm specifically interested in these lines from the text file:
```
Considered and decided by Hemingway, Presiding Judge; Bell,
Judge; and \n
\n
Dickinson, Emily, Judge.
```
I'd like to individually hunt for, extract, and then print the judges' names. My code so far looks like this:
```
import re
def judges():
presiding = re.compile(r'by\s*?([A-Z].*),\s*?Presiding\s*?Judge;', re.DOTALL)
judge2 = re.compile(r'Presiding\s*?Judge;\s*?([A-Z].*),\s*?Judge;', re.DOTALL)
judge3 = re.compile(r'([A-Z].*), Judge\.', re.DOTALL)
with open("text.txt", "r") as case:
for lines in case:
presiding_match = re.search(presiding, lines)
judge2_match = re.search(judge2, lines)
judge3_match = re.search(judge3, lines)
if presiding_match or judge2_match or judge3_match:
print(presiding_match.group(1))
print(judge2_match.group(1))
print(judge3_match.group(1))
break
```
When I run it, I can get Hemingway and Bell, but then I get an "AttributeError: 'NoneType' object has no attribute 'group'" for the third judge after the two line breaks.
After trial-and-error, I've found that my code is only reading the first line (until the "Bell, Judge; and") then quits. I thought the re.DOTALL would solve it, but I can't seem to make it work.
I've tried a million ways to capture the line breaks and get the whole thing, including re.match, re.DOTALL, re.MULTILINE, "".join, "".join(lines.strip()), and anything else I can throw against the wall to make it stick.
After a couple days, I've bowed to asking for help. Thanks for anything you can do.
(As an aside, I've had no luck getting the regex to work with the ^ and $ characters. It also seems to hate the . escape in the judge3 regex.)
|
2018/12/29
|
[
"https://Stackoverflow.com/questions/53972642",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10823652/"
] |
You are passing in **single lines**, because you are iterating over the open file referenced by `case`. The regex is never passed anything other than a single line of text. Your regexes can each match *some* of the lines, but they don't all together match the same single line.
You'd have to read in more than one line. If the file is small enough, just read it as one string:
```
with open("text.txt", "r") as case:
case_text = case.read()
```
then apply your regular expressions to that one string.
Or, you could test each of the match objects individually, not as a group, and only print those that matched:
```
if presiding_match:
print(presiding_match.group(1))
elif judge2_match:
print(judge2_match.group(1))
elif judge3_match:
print(judge3_match.group(1))
```
but then you'll have to create additional logic to determine when you are done reading from the file and break out of the loop.
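For that second approach, one possible sketch of the stop condition (reusing the three compiled patterns from the question; the other names are hypothetical) is:
```
found = {}
with open("text.txt", "r") as case:
    for line in case:
        for name, pattern in (("presiding", presiding),
                              ("judge2", judge2),
                              ("judge3", judge3)):
            match = pattern.search(line)
            if match and name not in found:
                found[name] = match.group(1)
        if len(found) == 3:  # all three judges seen; stop reading
            break
print(found)
```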
Note that the patterns you are matching are not broken across lines, so the `DOTALL` flag is not actually needed here. You do match `.*` text, so you are running the risk of matching *too much* if you use `DOTALL`:
```
>>> import re
>>> case_text = """Considered and decided by Hemingway, Presiding Judge; Bell, Judge; and
...
... Dickinson, Emily, Judge.
... """
>>> presiding = re.compile(r'by\s*?([A-Z].*),\s*?Presiding\s*?Judge;', re.DOTALL)
>>> judge2 = re.compile(r'Presiding\s*?Judge;\s*?([A-Z].*),\s*?Judge;', re.DOTALL)
>>> judge3 = re.compile(r'([A-Z].*), Judge\.', re.DOTALL)
>>> presiding.search(case_text).groups()
('Hemingway',)
>>> judge2.search(case_text).groups()
('Bell',)
>>> judge3.search(case_text).groups()
('Considered and decided by Hemingway, Presiding Judge; Bell, Judge; and \n\nDickinson, Emily',)
```
I'd at least replace `[A-Z].*` with `[A-Z][^;\n]+`, to *at least* exclude matching `;` semicolons and newlines, and only match names at least 2 characters long. Just drop the `DOTALL` flags altogether:
```
>>> presiding = re.compile(r'by\s*?([A-Z][^;]+),\s+?Presiding\s+?Judge;')
>>> judge2 = re.compile(r'Presiding\s+?Judge;\s+?([A-Z][^;]+),\s+?Judge;')
>>> judge3 = re.compile(r'([A-Z][^;]+), Judge\.')
>>> presiding.search(case_text).groups()
('Hemingway',)
>>> judge2.search(case_text).groups()
('Bell',)
>>> judge3.search(case_text).groups()
('Dickinson, Emily',)
```
You can combine the three patterns into one:
```
judges = re.compile(
r'(?:Considered\s+?and\s+?decided\s+?by\s+?)?'
r'([A-Z][^;]+),\s+?(?:Presiding\s+?)?Judge[.;]'
)
```
which can find all the judges in your input in one go with `.findall()`:
```
>>> judges.findall(case_text)
['Hemingway', 'Bell', 'Dickinson, Emily']
```
|
Instead of multiple `re.search`, you could use [`re.findall`](https://docs.python.org/3.7/library/re.html#re.findall) with a really short and simple pattern to find all judges at once:
```
import re
text = """Considered and decided by Hemingway, Presiding Judge; Bell,
Judge; and \n
\n
Dickinson, Emily, Judge."""
matches = re.findall(r"(\w+,)?\s(\w+),(\s+Presiding)?\s+Judge", text)
print(matches)
```
Which prints:
```
[('', 'Hemingway', ' Presiding'), ('', 'Bell', ''), ('Dickinson,', 'Emily', '')]
```
All the raw information is there: first name, last name and "presiding attribute" (if Presiding Judge or not) of each judge. Afterwards, you can feed this raw information into a data structure which satisfies your needs, for example:
```
judges = []
for match in matches:
if match[0]:
first_name = match[1]
last_name = match[0]
else:
first_name = ""
last_name = match[1]
presiding = "Presiding" in match[2]
judges.append((first_name, last_name, presiding))
print(judges)
```
Which prints:
```
[('', 'Hemingway', True), ('', 'Bell', False), ('Emily', 'Dickinson,', False)]
```
As you can see, now you have a list of tuples, where the first element is the first name (if specified in the text), the second element is the last name and the third element is a `bool` whether the judge is the presiding judge or not.
Obviously, the pattern works for your provided example. However, since `(\w+,)?\s(\w+),(\s+Presiding)?\s+Judge` is such a simple pattern, there are some edge cases to be aware of, where the pattern might return the wrong result:
* Only one first name will be matched. A name like `Dickinson, Emily Mary` will result in `Mary` detected as the last name.
* Last names like `de Broglie` will result in only `Broglie` matched, so `de` gets lost.
* ...
You will have to see if this fits your needs or provide more information to your question about your data.
| 14,941
|
10,512,026
|
I'm new to python and am trying to read "blocks" of data from a file. The file is written something like:
```
# Some comment
# 4 cols of data --x,vx,vy,vz
# nsp, nskip = 2 10
# 0 0.0000000
# 1 4
0.5056E+03 0.8687E-03 -0.1202E-02 0.4652E-02
0.3776E+03 0.8687E-03 0.1975E-04 0.9741E-03
0.2496E+03 0.8687E-03 0.7894E-04 0.8334E-03
0.1216E+03 0.8687E-03 0.1439E-03 0.6816E-03
# 2 4
0.5056E+03 0.8687E-03 -0.1202E-02 0.4652E-02
0.3776E+03 0.8687E-03 0.1975E-04 0.9741E-03
0.2496E+03 0.8687E-03 0.7894E-04 0.8334E-03
0.1216E+03 0.8687E-03 0.1439E-03 0.6816E-03
# 500 0.99999422
# 1 4
0.5057E+03 0.7392E-03 -0.6891E-03 0.4700E-02
0.3777E+03 0.9129E-03 0.2653E-04 0.9641E-03
0.2497E+03 0.9131E-03 0.7970E-04 0.8173E-03
0.1217E+03 0.9131E-03 0.1378E-03 0.6586E-03
and so on
```
Now I want to be able to specify and read only one block of data out of these many blocks. I'm using `numpy.loadtxt('filename',comments='#')` to read the data but it loads the whole file in one go. I searched online and someone has created a patch for the numpy io routine to specify reading blocks, but it's not in mainstream numpy.
It's much easier to choose blocks of data in gnuplot but I'd have to write the routine to plot the distribution functions. If I can figure out reading specific blocks, it would be much easier in python. Also, I'm moving all my visualization codes to python from IDL and gnuplot, so it'll be nice to have everything in python instead of having things scattered around in multiple packages.
I thought about calling gnuplot from within python, plotting a block to a table and assigning the output to some array in python. But I'm still a beginner and could not figure out the syntax to do it.
Any ideas, pointers to solve this problem would be of great help.
|
2012/05/09
|
[
"https://Stackoverflow.com/questions/10512026",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1325437/"
] |
A quick basic read:
```
>>> def read_blocks(input_file, i, j):
empty_lines = 0
blocks = []
for line in open(input_file):
# Check for empty/commented lines
if not line or line.startswith('#'):
# If 1st one: new block
if empty_lines == 0:
blocks.append([])
empty_lines += 1
# Non empty line: add line in current(last) block
else:
empty_lines = 0
blocks[-1].append(line)
return blocks[i:j + 1]
>>> for block in read_blocks(s, 1, 2):
print '-> block'
for line in block:
print line
-> block
0.5056E+03 0.8687E-03 -0.1202E-02 0.4652E-02
0.3776E+03 0.8687E-03 0.1975E-04 0.9741E-03
0.2496E+03 0.8687E-03 0.7894E-04 0.8334E-03
0.1216E+03 0.8687E-03 0.1439E-03 0.6816E-03
-> block
0.5057E+03 0.7392E-03 -0.6891E-03 0.4700E-02
0.3777E+03 0.9129E-03 0.2653E-04 0.9641E-03
0.2497E+03 0.9131E-03 0.7970E-04 0.8173E-03
0.1217E+03 0.9131E-03 0.1378E-03 0.6586E-03
>>>
```
Now I guess you can use numpy to read the lines...
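For instance, a minimal sketch (the file name is a placeholder) that turns one returned block into a numpy array:
```
import numpy as np

block = read_blocks('data.txt', 1, 1)[0]  # read_blocks() as defined above
data = np.array([[float(x) for x in line.split()] for line in block])
print(data.shape)  # (4, 4) for the sample file above
```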
|
The following code should probably get you started. You will probably need the re module.
You can open the file for reading using:
```
f = open("file_name_here")
```
You can read the file one line at a time by using
```
line = f.readline()
```
To jump to the next line that starts with a "#", you can use:
```
while not line.startswith("#"):
line = f.readline()
```
To parse a line that looks like "# i j", you could use the following regular expression:
```
is_match = re.match("#\s+(\d+)\s+(\d+)",line)
if is_match:
i = is_match.group(1)
j = is_match.group(2)
```
See the documentation for the "re" module for more information on this.
To parse a block, you could use the following bit of code:
```
block = [] # block[i][j] will contain element i,j in your block
while line and not line.isspace(): # read until next blank line (or end of file)
    block.append(map(float, line.split()))
    # splits each line at whitespace and turns all elements to float
    line = f.readline()
```
You can then turn your block into a numpy array if you want:
```
block = np.array(block)
```
Provided you have imported numpy as np.
If you want to read multiple blocks between i and j, just put the above code to read one block into a function and use it multiple times.
Hope this helps!
| 14,943
|
53,949,017
|
I'm trying to develop a lambda that has to work with S3 and dynamoDB.
The thing is, because I am not familiar with the aws SDK for go, I will have lots of tests and tries.
Each time I change the code, I have to compile the project again and upload it to aws.
Is there any way to do it locally - pass some kind of configuration that lets me call the aws services locally, from my computer?
Thanks!
*This has to do mostly with golang, other languages like python can run directly on the aws lambda function page, and node has `cloud9` support.*
|
2018/12/27
|
[
"https://Stackoverflow.com/questions/53949017",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8885009/"
] |
You can use the lambci docker image(s) to execute your code locally using the same Lambda runtimes that are used on AWS.
<https://github.com/lambci/docker-lambda>
You can also run DynamoDB locally in another container:
<https://hub.docker.com/r/amazon/dynamodb-local/>
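For illustration (shown in Python for brevity; the same idea applies to the Go SDK by overriding the endpoint), talking to that local DynamoDB container might look like this - the port and dummy credentials are assumptions based on the image's defaults:
```
import boto3

# assumes the container publishes its default port:
#   docker run -p 8000:8000 amazon/dynamodb-local
dynamodb = boto3.resource('dynamodb',
                          endpoint_url='http://localhost:8000',
                          region_name='us-east-1',
                          aws_access_key_id='dummy',
                          aws_secret_access_key='dummy')
print(list(dynamodb.tables.all()))  # lists tables in the local instance
```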
To simulate the credentials/roles that would be available on Lambda, just pass in your API creds via environment variables (for S3 access).
Cheers
-JH
|
You could use this [aws-lambda-go-test](https://github.com/yogeshlonkar/aws-lambda-go-test) module which can run lambda locally and can be used to test the actual response from lambda
Full disclosure: I forked and upgraded this module.
| 14,944
|
59,001,784
|
I am building a webscraper that will run over and over and will insert new data or update existing data based on ID (`if 'id' == 'id':`). My goal is to avoid duplicates. The MySQL table is ready and built. What is the best Pythonic way to check your python list before inserting/updating it in the MySQL DB using SQLAlchemy?
Below are my dependencies:
```
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
import requests
from bs4 import BeautifulSoup
from time import sleep
from datetime import datetime
import time
engine = create_engine("mysql+pymysql:///blah")
```
I use a function to assign each `<td>` from scraped data:
```
def functionscrape( **kwargs ):
scrape = {
'id': '',
'owner': '',
'street': '',
'city': '',
'state': '',
}
scrape.update(kwargs)
return (scrape)
```
The list below is an example, but would be changing constantly with each webscrape.
```
myList = [{
'id': '111',
'owner': 'Bob',
'street': '1212 North',
'city': 'Anywhere',
'state': 'TX',
},
{
'id': '222',
'owner': 'Mary',
'street': '333 South',
'city': 'Overthere',
'state': 'AZ',
}]
```
|
2019/11/22
|
[
"https://Stackoverflow.com/questions/59001784",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11396478/"
] |
I am using a helper function to create the dynamic sql update queries:
```
def construct_update(table_name, where_vals, update_vals):
query = table_name.update()
for k, v in where_vals.items():
query = query.where(getattr(table_name.c, k) == v)
return query.values(**update_vals)
```
Basically you pass the function the table and 2 dictionaries. The first would just be {'id': id} in your case, and the second is all the values you want to update, like
```
{
'owner': 'Bob',
'street': '1212 North',
'city': 'Anywhere',
etc...
}
```
the helper function then returns the query which can be executed with
```
my_session = Session(engine)
my_session.execute(query)
```
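For example, a hypothetical call for one scraped record (`scrape_table` stands in for your SQLAlchemy Table object, and the values are assumptions from the question's list) would be:
```
query = construct_update(scrape_table,
                         where_vals={'id': '111'},
                         update_vals={'owner': 'Bob', 'city': 'Anywhere'})
my_session.execute(query)
my_session.commit()
```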
Unfortunately, using this method, you'll have to update every single row individually (no bulk update) - but if you can live with that, this works fine.
Otherwise, here's a similar post about bulk updates:
[Bulk update in SQLAlchemy Core using WHERE](https://stackoverflow.com/questions/25694234/bulk-update-in-sqlalchemy-core-using-where)
|
You can try using the <https://marshmallow.readthedocs.io/en/stable/> library for validation.
Build a `Schema` and define fields with the types you need. You can also use the `@pre_load` and `@post_load` decorators to manipulate your data.
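A minimal sketch of such a schema for the scraped dicts (the field names follow the question's example list; the schema name is hypothetical):
```
from marshmallow import Schema, fields

class ScrapeSchema(Schema):
    id = fields.Str(required=True)
    owner = fields.Str()
    street = fields.Str()
    city = fields.Str()
    state = fields.Str()

# validate() returns a dict of errors; an empty dict means the record is valid
errors = ScrapeSchema().validate(myList[0])
print(errors)  # {}
```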
| 14,945
|
2,105,508
|
I'm new to Cython and I'm trying to use Cython to wrap a C/C++ static library. I made a simple example as follows.
**Test.h:**
```
#ifndef TEST_H
#define TEST_H
int add(int a, int b);
int multipy(int a, int b);
#endif
```
**Test.cpp**
```
#include "test.h"
int add(int a, int b)
{
return a+b;
}
int multipy(int a, int b)
{
return a*b;
}
```
Then I used g++ to compile and build it.
```
g++ -c test.cpp -o libtest.o
ar rcs libtest.a libtest.o
```
So now I got a static library called `libtest.a`.
**Test.pyx:**
```
cdef extern from "test.h":
int add(int a,int b)
int multipy(int a,int b)
print add(2,3)
```
**Setup.py:**
```
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
ext_modules = [Extension("test",
["test.pyx"],
language='c++',
include_dirs=[r'.'],
library_dirs=[r'.'],
libraries=['libtest']
)]
setup(
name = 'test',
cmdclass = {'build_ext': build_ext},
ext_modules = ext_modules
)
```
Then I called:
```
python setup.py build_ext --compiler=mingw32 --inplace
```
The output was:
```
running build_ext
cythoning test.pyx to test.cpp
building 'test' extension
creating build
creating build\temp.win32-2.6
creating build\temp.win32-2.6\Release
C:\Program Files\pythonxy\mingw\bin\gcc.exe -mno-cygwin -mdll -O -Wall -I. -IC:\
Python26\include -IC:\Python26\PC -c test.cpp -o build\temp.win32-2.6\Release\test.o
writing build\temp.win32-2.6\Release\test.def
C:\Program Files\pythonxy\mingw\bin\g++.exe -mno-cygwin -mdll -static --entry _D
llMain@12 --output-lib build\temp.win32-2.6\Release\libtest.a --def build\temp.w
in32-2.6\Release\test.def -s build\temp.win32-2.6\Release\test.o -L. -LC:\Python
26\libs -LC:\Python26\PCbuild -ltest -lpython26 -lmsvcr90 -o test.pyd
g++: build\temp.win32-2.6\Release\libtest.a: No such file or directory
error: command 'g++' failed with exit status 1
```
I also tried to use `libraries=['test']` instead of `libraries=['libtest']`. It gave me the same errors.
Any clue about this?
|
2010/01/20
|
[
"https://Stackoverflow.com/questions/2105508",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/150324/"
] |
If your C++ code is only used by the wrapper, another option is to let the setup compile your .cpp file, like this:
```
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
ext_modules = [Extension("test",
["test.pyx", "test.cpp"],
language='c++',
)]
setup(
name = 'test',
cmdclass = {'build_ext': build_ext},
ext_modules = ext_modules
)
```
For linking to a static library you have to use the [extra\_objects](http://docs.python.org/distutils/apiref.html) argument in your `Extension`:
```
from distutils.core import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
ext_modules = [Extension("test",
["test.pyx"],
language='c++',
extra_objects=["libtest.a"],
)]
setup(
name = 'test',
cmdclass = {'build_ext': build_ext},
ext_modules = ext_modules
)
```
|
I think you can fix this specific problem by specifying the right `library_dirs` (where you actually **put** libtest.a -- apparently it's not getting found), but I think then you'll have another problem -- your entry points are not properly declared as `extern "C"`, so the function's names will have been "mangled" by the C++ compiler (look at the names exported from your libtest.a and you'll see!), so any other language except C++ (including C, Cython, etc) will have problems getting at them. The fix is to declare them as `extern "C"`.
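To see that mangling for yourself, a quick sketch (assuming binutils' `nm` tool is on your PATH) that dumps the archive's symbols from Python:
```
import subprocess

# C++-mangled names look something like _Z3addii; after adding
# extern "C" they become plain 'add' / 'multipy'
print(subprocess.check_output(['nm', 'libtest.a']))
```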
| 14,946
|
74,179,391
|
I am responsible for a series of exercises for nonlinear optimization.
I thought it would be cool to start with some examples of optimization problems and solve them with `pyomo` + some black box solvers.
However, as the students learn more about optimization algorithms, I wanted them to also implement some simple methods and test their implementation on the same examples. I hoped there would be an "easy" way to add a custom solver to `pyomo`; however, I cannot find any information about this.
Basically, that would allow the students to check their implementation by just changing a single line in their code and comparing to a well-tested solver.
I would also try to implement a **simple** wrapper myself but I do not know anything about the `pyomo` internals.
**Q: Can I add my own solvers written in python to `pyomo`? Solver could have an interface like the ones of `scipy.optimize`.**
Ty for reading,
Franz
---
Related:
* [Pyomo-Solver Communication](https://stackoverflow.com/questions/51631899/pyomo-solver-communication)
* [Call scipy.optimize inside pyomo](https://stackoverflow.com/questions/47821346/call-scipy-optimize-inside-pyomo)
|
2022/10/24
|
[
"https://Stackoverflow.com/questions/74179391",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11785620/"
] |
You said you're using jQuery, so there's no need for loops and addEventListener; all you need is to specify the displayed data inside the link using a data attribute (like data-text in the snippet below).
Use the [hover](https://api.jquery.com/hover/) listener, then access the currently hovered element via the **`$(this)`** keyword and display the data - that's all.
See the snippet below:
```js
const $firstul = $('#span_Lan');
$("#ul_box li").hover(function() {
$firstul.html( $(this).find('a').data("text") )
})
```
```css
#ul_box li {
border:1px solid black;
}
#ul_box li:hover {
border-color:red;
}
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div>
<ul>
<li id="li_box"> <span id="span_Lan"></span></li>
</ul>
<ul id="ul_box">
<li><a id="lnk1" data-text="111" class="">aaa</a></li>
<li><a id="lnk2" data-text="222" class="">bbb</a></li>
<li><a id="lnk3" data-text="333" class="">ccc</a></li>
<li><a id="lnk3" data-text="444" class="">ddd</a></li>
</ul>
</div>
```
|
There are a few issues with your code:
* you need to use `innerText` or `innerHtml` instead of `value`
* next you need to pass the event into your mouse over and use the current target instead of `boxLi[i]`
* finally, move your ids to the li as that is what the mouse over is on
Also this isn't jQuery
```js
const firstul = document.getElementById('span_Lan');
const boxLi = document.getElementById('ul_box').children;
for (let i = 0; i < boxLi.length; i++) {
boxLi[i].addEventListener('mouseover', e => {
firstul.innerText = e.currentTarget.textContent; // not sure if you want this line - it's in your code but your question says nothing about having the letters in the first ul
if (e.currentTarget.id == "lnk1") firstul.innerText += "111";
else if (e.currentTarget.id == "lnk2") firstul.innerText += "222";
else if (e.currentTarget.id == "lnk3") firstul.innerText += "333";
})
}
```
```html
<div>
<ul>
<li id="li_box"> <span id="span_Lan">111</span></li>
</ul>
<ul id="ul_box">
<li id="lnk1"><a class="">aaa</a></li>
<li id="lnk2"><a class="">bbb</a></li>
<li id="lnk3"><a class="">ccc</a></li>
</ul>
</div>
```
| 14,949
|
18,874,387
|
Just a little question: is it possible to force a build in Buildbot via a python script or the command line (and not via the web interface)?
Thank you!
|
2013/09/18
|
[
"https://Stackoverflow.com/questions/18874387",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1330954/"
] |
If you have a PBSource configured in your master.cfg, you can send a change from the command line:
```
buildbot sendchange --master {MASTERHOST}:{PORT} --auth {USER}:{PASS}
--who {USER} {FILENAMES..}
```
|
You can make a python script using the urllib2 or requests library to simulate a POST to the web UI:
```
import urllib2
import urllib
import cookielib
import uuid
import unittest
import sys
from StringIO import StringIO
class ForceBuildApi():
MAX_RETRY = 3
def __init__(self, server):
self.server = server
cookiejar = cookielib.CookieJar()
self.urlOpener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
def login(self, user, passwd):
data = urllib.urlencode(dict(username=user,
passwd=passwd))
url = self.server + "login"
request = urllib2.Request(url, data)
res = self.urlOpener.open(request).read()
if res.find("The username or password you entered were not correct") > 0:
raise Exception("invalid password")
def force_build(self, builder, reason, **kw):
"""Create a buildbot build request
several attempts are created in case of errors
"""
reason = reason + " ID="+str(uuid.uuid1())
kw['reason'] = reason
data_str = urllib.urlencode(kw)
url = "%s/builders/%s/force" % (self.server, builder)
print url
request = urllib2.Request(url, data_str)
file_desc = None
for i in xrange(self.MAX_RETRY):
try:
file_desc = self.urlOpener.open(request)
break
except Exception as e:
print >>sys.stderr, "error when doing force build", e
if file_desc is None:
print >>sys.stderr, "too many errors, giving up"
return None
for line in file_desc:
if 'alert' in line:
print >>sys.stderr, "invalid arguments", url, data_str
return None
if 'Authorization Failed' in line:
print >>sys.stderr, "Authorization Failed"
return
return reason
class ForceBuildApiTest(unittest.TestCase):
def setUp(self):
from mock import Mock # pip install mock for test
self.api = ForceBuildApi("server/")
self.api.urlOpener = Mock()
urllib2.Request = Mock()
uuid.uuid1 = Mock()
uuid.uuid1.return_value = "myuuid"
sys.stderr = StringIO()
def test_login(self):
from mock import call
self.api.login("log", "pass")
self.assertEquals(len(self.api.urlOpener.open.call_args_list), 1)
req = urllib2.Request.call_args_list
self.assertEquals([call('server/login', 'passwd=pass&username=log')], req)
def test_force(self):
from mock import call
self.api.urlOpener.open.return_value = ["blabla"]
r = self.api.force_build("builder1", reason="reason", param1="foo", param2="bar")
self.assertEquals(len(self.api.urlOpener.open.call_args_list), 1)
req = urllib2.Request.call_args_list
self.assertEquals([call('server//builders/builder1/force', 'reason=reason+ID%3Dmyuuid¶m2=bar¶m1=foo')], req)
self.assertEquals(r, "reason ID=myuuid")
def test_force_fail1(self):
from mock import call
self.api.urlOpener.open.return_value = ["alert bla"]
r = self.api.force_build("builder1", reason="reason", param1="foo", param2="bar")
self.assertEquals(len(self.api.urlOpener.open.call_args_list), 1)
req = urllib2.Request.call_args_list
self.assertEquals([call('server//builders/builder1/force', 'reason=reason+ID%3Dmyuuid¶m2=bar¶m1=foo')], req)
self.assertEquals(sys.stderr.getvalue(), "invalid arguments server//builders/builder1/force reason=reason+ID%3Dmyuuid¶m2=bar¶m1=foo\n")
self.assertEquals(r, None)
def test_force_fail2(self):
from mock import call
def raise_exception(*a, **kw):
raise Exception("oups")
self.api.urlOpener.open = raise_exception
r = self.api.force_build("builder1", reason="reason", param1="foo", param2="bar")
req = urllib2.Request.call_args_list
self.assertEquals([call('server//builders/builder1/force', 'reason=reason+ID%3Dmyuuid¶m2=bar¶m1=foo')], req)
self.assertEquals(sys.stderr.getvalue(), "error when doing force build oups\n"*3 + "too many errors, giving up\n")
self.assertEquals(r, None)
def test_force_fail3(self):
from mock import call
self.api.urlOpener.open.return_value = ["bla", "blu", "Authorization Failed"]
r = self.api.force_build("builder1", reason="reason", param1="foo", param2="bar")
req = urllib2.Request.call_args_list
self.assertEquals([call('server//builders/builder1/force', 'reason=reason+ID%3Dmyuuid¶m2=bar¶m1=foo')], req)
self.assertEquals(sys.stderr.getvalue(), "Authorization Failed\n")
self.assertEquals(r, None)
if __name__ == '__main__':
unittest.main()
```
| 14,951
|
60,913,598
|
I am trying to train EfficientNetB1 on Google Colab and keep running into different issues with the correct import statements from Keras or Tensorflow.Keras. Currently this is how my imports look:
```
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.python.keras.layers.pooling import AveragePooling2D
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imutils import paths
import matplotlib.pyplot as plt
import numpy as np
import argparse
import pickle
import cv2
import os
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
import efficientnet.keras as enet
from tensorflow.keras.layers import Dense, Dropout, Activation, BatchNormalization, Flatten, Input
```
and this is what my model looks like:
```
# load the ResNet-50 network, ensuring the head FC layer sets are left
# off
baseModel = enet.EfficientNetB1(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3)), pooling='avg')
# Adding 2 fully-connected layers to B0.
x = baseModel.output
x = BatchNormalization()(x)
x = Dropout(0.7)(x)
x = Dense(512)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Dropout(0.5)(x)
x = Dense(512)(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
# Output layer
predictions = Dense(len(lb.classes_), activation="softmax")(x)
model = Model(inputs = baseModel.input, outputs = predictions)
# loop over all layers in the base model and freeze them so they will
# *not* be updated during the training process
for layer in baseModel.layers:
layer.trainable = False
```
But for the life of me I can't figure out why I am getting the below error
```
AttributeError Traceback (most recent call last)
<ipython-input-19-269fe6fc6f99> in <module>()
----> 1 baseModel = enet.EfficientNetB1(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3)), pooling='avg')
2
3 # Adding 2 fully-connected layers to B0.
4 x = baseModel.output
5 x = BatchNormalization()(x)
5 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/base_layer.py in _collect_previous_mask(input_tensors)
1439 inbound_layer, node_index, tensor_index = x._keras_history
1440 node = inbound_layer._inbound_nodes[node_index]
-> 1441 mask = node.output_masks[tensor_index]
1442 masks.append(mask)
1443 else:
AttributeError: 'Node' object has no attribute 'output_masks'
```
|
2020/03/29
|
[
"https://Stackoverflow.com/questions/60913598",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9201770/"
] |
The problem is the way you import the efficientnet.
You import it from the `Keras` package and not from the `TensorFlow.Keras` package.
Change your efficientnet import to
```
import efficientnet.tfkeras as enet
```
|
Not sure, but this error may be caused by a wrong TF version. Google Colab for now comes with TF 1.x by default. Try this to change the TF version and see if it resolves the issue.
```
try:
%tensorflow_version 2.x
except:
print("Failed to load")
```
| 14,952
|
43,109,167
|
Below is my data set
```
Date Time
2015-05-13 23:53:00
```
I want to convert date and time into floats as separate columns in a python script.
The output should be like date as `20150513` and time as `235300`
|
2017/03/30
|
[
"https://Stackoverflow.com/questions/43109167",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7521618/"
] |
If all you need is to strip the hyphens and colons, [*str.replace()*](https://docs.python.org/2.7/library/stdtypes.html#str.replace) should do the job:
```
>>> s = '2015-05-13 23:53:00'
>>> s.replace('-', '').replace(':', '')
'20150513 235300'
```
For more sophisticated reformatting, parse the input with [*time.strptime()*](https://docs.python.org/2.7/library/time.html#time.strptime) and then reformat with [*time.strftime()*](https://docs.python.org/2.7/library/time.html#time.strftime):
```
>>> import time
>>> t = time.strptime('2015-05-13 23:53:00', '%Y-%m-%d %H:%M:%S')
>>> time.strftime('%Y%m%d %H%M%S', t)
'20150513 235300'
```
|
If you have a datetime you can use [strftime()](http://strftime.org/)
```
your_time.strftime('%Y%m%d.%H%M%S')
```
And if your variables are strings, you can use replace():
```
dt = '2015-05-13 23:53:00'
date = dt.split()[0].replace('-','')
time = dt.split()[1].replace(':','')
fl = float(date+ '.' + time)
```
| 14,953
|
31,244,525
|
In python, one can assign values to some of the keywords that are already predefined in python, unlike other languages. Why?
This works for some of them, but not all:
```
> range = 5
> range
> 5
```
But for
```
> def = 5
File "<stdin>", line 1
def = 5
^
SyntaxError: invalid syntax
```
One possible hypothesis is - Lazy coders with unique parsing rules.
For those new to python: yeah, this actually works for keywords like True, False, range, len, and so on.
I wrote a compiler for python in college and, if I remember correctly, the keywords list did not have them.
|
2015/07/06
|
[
"https://Stackoverflow.com/questions/31244525",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3302133/"
] |
While `range` is nothing but a built-in function, `def` is a keyword. (Most IDEs should indicate the difference with appropriate colors.)
Functions - whether built-in or not - can be redefined. And they don't have to remain functions, but can become integers like `range` in your example. But you can never redefine keywords.
If you wish, you can print the list of all Python keywords with the following lines of code (borrowed from [here](https://stackoverflow.com/a/14595949/3419103)):
```
import keyword
for keyword in keyword.kwlist:
print keyword
```
Output:
```
and
as
assert
break
class
continue
def
del
elif
else
except
exec
finally
for
from
global
if
import
in
is
lambda
not
or
pass
print
raise
return
try
while
with
yield
```
And for Python 3 (notice the absence of `print`):
```
False
None
True
and
as
assert
break
class
continue
def
del
elif
else
except
finally
for
from
global
if
import
in
is
lambda
nonlocal
not
or
pass
raise
return
try
while
with
yield
```
In contrast, the built-in functions can be found here: <https://docs.python.org/2/library/functions.html>
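A quick demonstration of the difference at the interpreter (the commented-out line is what would raise the SyntaxError):
```
range = 5        # fine: 'range' is just a name shadowing a built-in
print(range)     # 5
del range        # removing the binding restores the built-in
print(range(3))  # [0, 1, 2] in Python 2

# def = 5        # SyntaxError: invalid syntax - 'def' is a keyword
```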
|
The name 'range' is a function, and you can shadow other built-ins like sum, max... the same way.
On the other hand, the keyword 'def' expects a defined structure in order to create a function:
```
def <functionName>(args):
```
| 14,956
|
69,023,789
|
I'm working on automating changing image colors using python. The image I'm using is below; I'd love to move it from red to another range of colors, say green, keeping the detail and shading if possible. I've been able to convert *some* of the image to a solid color, losing all detail.
[](https://i.stack.imgur.com/09uYC.jpg)
The code I'm currently using is below. I can't quite figure out the correct range of red to make it work correctly, and it also only converts to a single color, again losing all detail and shading.
Any help is appreciated, thank you.
```python
import cv2
import numpy as np
import skimage.exposure
# load image and get dimensions
img = cv2.imread("test5.jpg")
# convert to hsv
hsv = cv2.cvtColor(img,cv2.COLOR_BGR2HSV)
## mask of upper red (170,50,50) ~ (180,255,255)
## mask of lower red (0,50,50) ~ (10,255,255)
# threshold using inRange
range1 = (0,50,50)
range2 = (1,255,255)
mask = cv2.inRange(hsv,range1,range2)
mask = 255 - mask
# apply morphology opening to mask
kernel = np.ones((3,3), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_ERODE, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
# antialias mask
mask = cv2.GaussianBlur(mask, (0,0), sigmaX=3, sigmaY=3, borderType = cv2.BORDER_DEFAULT)
mask = skimage.exposure.rescale_intensity(mask, in_range=(127.5,255), out_range=(0,255))
result = img.copy()
result[mask==0] = (255,255,255)
# write result to disk
cv2.imwrite("test6.jpg", result)
```
|
2021/09/02
|
[
"https://Stackoverflow.com/questions/69023789",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13919173/"
] |
This is one way to approach the problem in Python/OpenCV. But for red it is very hard to do, because red spans hue 0, which is also the hue for gray, white and black, which you have in your image. The other issue is that skin tones have red shades, so you cannot pick too large a range for your colors. Also, when dealing with red ranges, you need two sets: one for hues up to 180 and another for hues above 0.
Input:
[](https://i.stack.imgur.com/fgW2w.jpg)
```
import cv2
import numpy as np
# load image
img = cv2.imread('red_clothes.jpg')
# convert to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
blue_hue = 120
red_hue = 0
# diff hue (blue_hue - red_hue)
diff_hue = blue_hue - red_hue
# create mask for red color in hsv
lower1 = (150,150,150)
upper1 = (180,255,255)
mask1 = cv2.inRange(hsv, lower1, upper1)
lower2 = (0,150,150)
upper2 = (30,255,255)
mask2 = cv2.inRange(hsv, lower2, upper2)
mask = cv2.add(mask1,mask2)
mask = cv2.merge([mask,mask,mask])
# apply morphology to clean mask
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9,9))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
# modify hue channel by adding difference and modulo 180
hnew = np.mod(h + diff_hue, 180).astype(np.uint8)
# recombine channels
hsv_new = cv2.merge([hnew,s,v])
# convert back to bgr
bgr_new = cv2.cvtColor(hsv_new, cv2.COLOR_HSV2BGR)
# blend with original using mask
result = np.where(mask==(255, 255, 255), bgr_new, img)
# save output
cv2.imwrite('red_clothes_mask.png', mask)
cv2.imwrite('red_clothes_hue_shift.png', bgr_new)
cv2.imwrite('red_clothes_red2blue.png', result)
# Display various images to see the steps
cv2.imshow('mask1',mask1)
cv2.imshow('mask2',mask2)
cv2.imshow('mask',mask)
cv2.imshow('bgr_new',bgr_new)
cv2.imshow('result',result)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
Mask:
[](https://i.stack.imgur.com/sXkS4.png)
Hue Shifted Image:
[](https://i.stack.imgur.com/oQUBP.png)
Blend between Input and Hue Shifted Image using Mask to blend:
[](https://i.stack.imgur.com/zliQd.png)
So the result is speckled because of the black mixed with the red and from limited ranges due to skin color.
|
You can start with red, but the trick is to invert the image so red is now at hue 90 in OpenCV range and for example blue is at hue 30. So in Python/OpenCV, you can do the following:
Input:
[](https://i.stack.imgur.com/sFW5v.jpg)
```
import cv2
import numpy as np
# load image
img = cv2.imread('red_clothes.jpg')
# invert image
imginv = 255 - img
# convert to HSV
hsv = cv2.cvtColor(imginv, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
blueinv_hue = 30 #(=120+180/2=210-180=30)
redinv_hue = 90 #(=0+180/2=90)
# diff hue (blue_hue - red_hue)
diff_hue = blueinv_hue - redinv_hue
# create mask for redinv color in hsv
lower = (80,150,150)
upper = (100,255,255)
mask = cv2.inRange(hsv, lower, upper)
mask = cv2.merge([mask,mask,mask])
# apply morphology to clean mask
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9,9))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
# modify hue channel by adding difference and modulo 180
hnew = np.mod(h + diff_hue, 180).astype(np.uint8)
# recombine channels
hsv_new = cv2.merge([hnew,s,v])
# convert back to bgr
bgrinv_new = cv2.cvtColor(hsv_new, cv2.COLOR_HSV2BGR)
# invert
bgr_new = 255 -bgrinv_new
# blend with original using mask
result = np.where(mask==(255, 255, 255), bgr_new, img)
# save output
cv2.imwrite('red_clothes_mask.png', mask)
cv2.imwrite('red_clothes_hue_shift.png', bgr_new)
cv2.imwrite('red_clothes_red2blue.png', result)
# Display various images to see the steps
cv2.imshow('mask',mask)
cv2.imshow('bgr_new',bgr_new)
cv2.imshow('result',result)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
Mask:
[](https://i.stack.imgur.com/oeZGH.png)
Red to Blue before masking:
[](https://i.stack.imgur.com/IQOZM.png)
Red to Blue after masking:
[](https://i.stack.imgur.com/9GNi3.png)
However, one is still limited by the fact that red is close to skin tones, so the range for red is limited.
| 14,958
|
41,480,055
|
Using Python 2.7 and Django 1.10.4, I was trying to deploy my app to pythonanywhere, but I keep getting this error.
[](https://i.stack.imgur.com/1FGB1.png)
**Error Log**
[](https://i.stack.imgur.com/bEiKL.png)
**wsgi.py**
```
import os
import sys
path = '/home/hellcracker/First-Blog'
if path not in sys.path:
sys.path.append(path)
os.environ['DJANGO_SETTINGS_MODULE'] = 'mysite.settings'
from django.core.wsgi import get_wsgi_application
from django.contrib.staticfiles.handlers import StaticFilesHandler
application = StaticFilesHandler(get_wsgi_application())
```
I can't tell where the error is coming from.
Any help would be appreciated!
|
2017/01/05
|
[
"https://Stackoverflow.com/questions/41480055",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7248057/"
] |
First of all, check the link given in the error log - <https://help.pythonanywhere.com/pages/DebuggingImportError/>
You could also search for 'from django core wsgi no module named wsgi'. There are many answers already, and I think you should be able to find the answer to your problem there.
|
Make sure that your project name is "mysite"; if not, update this line:
```
os.environ['DJANGO_SETTINGS_MODULE'] = '<your-project-name>.settings'
```
The project name is the directory name that is the parent of your app name; check it on your local machine.
| 14,960
|
32,814,489
|
I've read numerous tutorials and stackex questions/answers, but apparently my questions is too specific and my knowledge too limited to piece together a solution.
**[Edit]** *My confusion was mostly due to the fact that my project required both a shell script and a makefile to run a simple python program. I was not sure why that was necessary, as it seemed like such a roundabout way to do things. It looks like the makefile and script are likely just there to make the autograder happy, as the kind respondents below have mentioned. So, I guess this is something best clarified with the prof, then. I really appreciate the answers--thanks very much for the help!*
Basically, what I want to do is to run `program.py` (my source code) via `program.sh` (shell script), so that when I type the following into the command line
```
./program.sh arg1
```
it runs `program.py` and passes `arg1` into the program as though I had manually typed the following into the command line
```
$ python program.py arg1
```
I also need to automatically set `program.sh` to executable, so that `$ chmod +x program.sh` doesn't have to be typed beforehand.
I've read the solution presented [here](https://stackoverflow.com/questions/8073561/how-to-make-an-executable-to-use-in-a-shell-python), which was very helpful, but this seems to require that the file be executed with the `.py` extension, whereas my particular application requires a `.sh` extension, so as to be run as desired above.
Another reason as to why I'd like to run a `.sh` file is because I also need to somehow include a makefile to run the program, along with the script (so I'm assuming I'd have to include a `make` command in the script?).
Honestly, I am not sure why the makefile is necessary for python programs, but we were instructed by the prof that the makefile would simply be more convenient for the grading scripts, and that we should simply write the following for the makefile's contents:
```
all:
/bin/true
```
Thanks so much in advance for any help in this matter!
|
2015/09/28
|
[
"https://Stackoverflow.com/questions/32814489",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4876803/"
] |
To pass data between two otherwise unconnected View Controllers you'll need to use:
```
presentingViewController!.dismissViewControllerAnimated(true, completion: nil)
```
and transmit your data via viewWillDisappear like this:
```
override func viewWillDisappear(animated: Bool) {
super.viewWillDisappear(animated)
if self.isBeingDismissed() {
self.delegate?.acceptData(textFieldOutlet.text)
}
}
```
I've posted [a tutorial](https://www.codebeaulieu.com/36/Passing-data-with-the-protocol-delegate-pattern), that includes a working project file that you can download and inspect.
---
Heres an example of the pattern in context.
ViewController 2:
-----------------
```
// place the protocol in the view controller that is being presented
protocol PresentedViewControllerDelegate {
func acceptData(data: AnyObject!)
}
class PresentedViewController: UIViewController {
// create a variable that will recieve / send messages
// between the view controllers.
var delegate : PresentedViewControllerDelegate?
// another data outlet
var data : AnyObject?
@IBOutlet weak var textFieldOutlet: UITextField!
@IBAction func doDismiss(sender: AnyObject) {
if textFieldOutlet.text != "" {
self.presentingViewController!.dismissViewControllerAnimated(true, completion: nil)
}
}
override func viewDidLoad() {
super.viewDidLoad()
print("\(data!)")
// Do any additional setup after loading the view.
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
override func viewWillDisappear(animated: Bool) {
super.viewWillDisappear(animated)
if self.isBeingDismissed() {
self.delegate?.acceptData(textFieldOutlet.text)
}
}
}
```
ViewController 1:
-----------------
```
class ViewController: UIViewController, PresentedViewControllerDelegate {
@IBOutlet weak var textOutlet: UILabel!
@IBAction func doPresent(sender: AnyObject) {
let pvc = storyboard?.instantiateViewControllerWithIdentifier("PresentedViewController") as! PresentedViewController
pvc.data = "important data sent via delegate!"
pvc.delegate = self
self.presentViewController(pvc, animated: true, completion: nil)
}
override func viewDidLoad() {
super.viewDidLoad()
}
func acceptData(data: AnyObject!) {
self.textOutlet.text = "\(data!)"
}
}
```
|
I have found an answer: using global variables.
Here is what I did:
In my first view controller (the view controller that is sending the string), I made a global variable above the class definition, like this:
```
import UIKit
var chosenClass = String()
class EntryViewController: UIViewController, UITableViewDelegate, UITableViewDataSource {
// other code here that isn't relevant to the topic at hand
}
```
then, in the same view controller, when a table cell was selected, I did this:
```
override func prepareForSegue(segue: UIStoryboardSegue, sender: AnyObject?) {
var row = self.tableView.indexPathForSelectedRow()?.row
chosenClass = array2[row!]
}
```
where
```
array2[row!]
```
is the string that I am wanting to pass.
In the third view controller, I made another local string variable to receive the value:
```
import UIKit
import Parse
import ParseUI
import Foundation
class FirstTableViewController: PFQueryTableViewController {
@IBOutlet weak var navigationBarTop: UINavigationItem!
var stringToRecieve = String()
}
```
In the viewDidLoad of the third view controller, I simply put:
```
stringToRecieve = chosenClass
```
and that is it. No additional code was needed for the second view controller, where the container view is.
| 14,961
|
59,302,243
|
I'm using this code with python and opencv to display about 100 images, but the `imshow` function throws an error.
Here is my code:
```py
nn=[]
for j in range (187) :
nn.append(j+63)
images =[]
for i in nn:
path = "02291G0AR\\"
n1=cv2.imread(path +"Bi000{}".format(i))
images.append(n1)
cv2.imshow(images)
```
And here is the error:
```
imshow() missing required argument 'mat' (pos 2)
```
|
2019/12/12
|
[
"https://Stackoverflow.com/questions/59302243",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7415394/"
] |
1. You have to visualize one image at a time, while you are passing `images` which is a list
2. `cv2.imshow()` takes as first argument the name of the window
So you should iterate over your loaded images like:
```py
for image in images:
cv2.imshow('Image', image)
cv2.waitKey(0) # Wait for user interaction
```
---
You may want to take a look at the Python OpenCV documentation about displaying images [here](https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_image_display/py_image_display.html#display-an-image).
|
You can use the following snippet to montage more than one image:
```
from imutils import build_montages
im_shape = (129,196)
montage_shape = (7,3)
montages = build_montages(images, im_shape, montage_shape)
```
`im_shape :` A tuple containing the width and height of each image in the montage. Here we indicate that all images in the montage will be resized to 129 x 196. Resizing every image in the montage to a fixed size is a requirement so we can properly allocate memory in the resulting NumPy array. Note: Empty space in the montage will be filled with black pixels.
`montage_shape :` A second tuple, this one specifying the number of columns and rows in the montage. Here we indicate that our montage will have 7 columns (7 images wide) and 3 rows (3 images tall).
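If it helps, here's a short usage sketch (my addition, not part of the original answer) showing the resulting montages, assuming `images` is the list loaded in the question:
```
import cv2
from imutils import build_montages

# build the montages as described above, then show each one in turn
montages = build_montages(images, (129, 196), (7, 3))
for montage in montages:
    cv2.imshow("Montage", montage)
    cv2.waitKey(0)  # wait for a key press before showing the next montage
```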
| 14,962
|
53,647,426
|
How can we fetch the details of a particular row from a MySQL database using a variable in Python?
I want to print the details of a particular row from my database using a variable, and I think I should use something like this:
```
data = cur.execute("SELECT * FROM loginproject.Pro WHERE Username = '%s';"% rob)
```
But this is showing only the index value, not the data. Please help me out.
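For reference, a minimal sketch of the parameterized form this likely needs (assuming `cur` is a MySQLdb/pymysql cursor and `rob` holds the username; note that `execute()` returns a row count, not the rows, so the data must be fetched explicitly):
```
# let the driver quote the value instead of using string formatting
cur.execute("SELECT * FROM loginproject.Pro WHERE Username = %s", (rob,))
rows = cur.fetchall()  # execute() alone only reports how many rows matched
for row in rows:
    print(row)
```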
|
2018/12/06
|
[
"https://Stackoverflow.com/questions/53647426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10333908/"
] |
Don't put HTML in the Laravel controller; return the `$employees` as data and then build the HTML in your Ajax success callback.
|
```
public function search(Request $request){
if($request->ajax())
{
$employees = DB::table('employeefms')->where('last_name','LIKE','%'.$request->search.'%')
->orWhere('first_name','LIKE','%'.$request->search.'%')->get();
if(!empty($employees))
{
return json_encode(array("msg"=>"success", "data"=>$employees));
}
return json_encode(array("msg"=>"error"));
    }
}
```
Ajax:
```
$.ajax({
type : 'get',
url : '{{ URL::to('admin/employeemaintenance/search') }}',
data : {'search':$value},
    success:function(data){
        var data1 = jQuery.parseJSON(data);
        if(data1.msg == "success"){
            // data1.data is already an array after parseJSON, no eval needed
            $.each(data1.data, function(index, employee){
                //html here
            });
        } else {
            //no data found
        }
    }
});
```
| 14,963
|
18,289,377
|
I'm using Code::Blocks and want to have gdb python-enabled. So I followed the C::B wiki <http://wiki.codeblocks.org/index.php?title=Pretty_Printers> to configure it.
My pp.gdb is the same as that in the wiki except that I replaced the path with my path to printers.py.
```
python
import sys
sys.path.insert(0, 'C:/Program Files (x86)/mingw-builds/x32-4.8.1-posix-dwarf-rev3/mingw32/share/gcc-4.8.1/python/libstdcxx/v6')
from printers import register_libstdcxx_printers
register_libstdcxx_printers (None)
end
```
Then I tested it:
```
(gdb) source C:\Program Files (x86)\mingw-builds\x32-4.8.1-posix-dwarf-rev3\mingw32\bin\pp.gdb
```
And the error message showed:
```
Traceback (most recent call last):
File "<string>", line 4, in <module>
File "C:/Program Files (x86)/mingw-builds/x32-4.8.1-posix-dwarf-rev3/mingw32/
share/gcc-4.8.1/python/libstdcxx/v6/printers.py", line 911, in register_libstdcxx_printers
gdb.printing.register_pretty_printer(obj, libstdcxx_printer)
File "c:\program files (x86)\mingw-builds\x32-4.8.1-posix-dwarf-rev3\mingw32\
share\gdb/python/gdb/printing.py", line 146, in register_pretty_printer
printer.name)
RuntimeError: pretty-printer already registered: libstdc++-v6
C:\Program Files (x86)\mingw-builds\x32-4.8.1-posix-dwarf-rev3\mingw32\bin\pp.gd
b:6: Error in sourced command file:
Error while executing Python code.
```
How can I fix it?
|
2013/08/17
|
[
"https://Stackoverflow.com/questions/18289377",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1549348/"
] |
Today I also saw this issue, after updating my libstdcxx pretty printers from an old gcc 4.7.x version to GCC's trunk HEAD version to fix some other problems.
I'm also using Code::Blocks, and I have these two lines in my customized gdb script.
```
from libstdcxx.v6.printers import register_libstdcxx_printers
register_libstdcxx_printers (None)
```
Note that I already have the `-nx` option passed to gdb when it starts. After tweaking for a while, I found that the libstdcxx pretty printers are automatically loaded and registered by the `from...import...` line. So, as a solution, you can just comment out the second line, and everything works just fine here.
```
from libstdcxx.v6.printers import register_libstdcxx_printers
#register_libstdcxx_printers (None)
```
Also, I think GDB's official wiki [STLSupport - GDB Wiki](https://sourceware.org/gdb/wiki/STLSupport) and Code::Blocks' official wiki [Pretty Printers - CodeBlocks](http://wiki.codeblocks.org/index.php?title=Pretty_Printers) should be updated to state this issue.
EDIT:
I just looked at the file libstdcxx\v6\_\_init\_\_.py from GCC svn trunk (maybe it was added recently), and I see it has this code:
```
# Load the pretty-printers.
from printers import register_libstdcxx_printers
register_libstdcxx_printers(gdb.current_objfile())
```
So, I think this code will automatically register the printers, so you don't need explicitly call `register_libstdcxx_printers (None)`.
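If you'd rather keep the explicit call in your own pp.gdb, a hedged workaround (my suggestion, not from either wiki) is to tolerate the duplicate registration instead of letting it abort the script:
```
python
import sys
sys.path.insert(0, 'C:/Program Files (x86)/mingw-builds/x32-4.8.1-posix-dwarf-rev3/mingw32/share/gcc-4.8.1/python/libstdcxx/v6')
try:
    from printers import register_libstdcxx_printers
    register_libstdcxx_printers(None)
except RuntimeError:
    pass  # the printers were already registered elsewhere
end
```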
|
You probably don't need to have this code. It seems like the libstdc++ printers are preloaded -- which is normal in many setups... we designed printers to "just work", and the approach of using Python code to explicitly load printers was a transitional thing.
One way to check is to run gdb -nx, start your C++ program, and then use "info pretty-printer".
| 14,964
|
23,480,431
|
I want to format a .py file (one that generates a random face every time the page is refreshed) with HTML so that it can run in a browser. I have `chmod`'ed it from the Terminal, so it should be executable, but whenever I run it in a browser, I get an **Internal Server Error**. Can someone help me figure out what is wrong?
```
#!/usr/bin/python
print "Content-Type: text/html\n"
print ""
<!DOCTYPE html>
<html>
<pre>
from random import choice
def facegenerator():
T = ""
hair = ["I I I I I","^ ^ ^ ^ ^"]
eyes = ["O O"," O O ",]
nose = [" O "," v "]
mouth = ["~~~~~","_____","-----"]
T += choice(hair)
T += "\n"
T += choice(eyes)
T += "\n"
T += choice(nose)
T += "\n"
T += choice(mouth)
T += "\n"
return T
print facegenerator()
</pre>
</html>
```
The code works in IDLE, but I can't get it to work on a webpage. Thanks in advance for any help!
|
2014/05/05
|
[
"https://Stackoverflow.com/questions/23480431",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2888249/"
] |
This is neither valid HTML nor valid Python. You can't simply mix HTML tags into the middle of a Python script like that: at the very least you need to put them inside quotes so that they form a valid string.
```
#!/usr/bin/python
print "Content-Type: text/html\n"
print """
<!DOCTYPE html>
<html>
<pre>
"""
# the facegenerator() definition from the question goes here
print facegenerator()
print """</pre>
</html>"""
```
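For completeness, here is a sketch of the whole script with the question's `facegenerator` folded in (assembled from the question's own code; a Python 2 CGI script):
```
#!/usr/bin/python
from random import choice

def facegenerator():
    T = ""
    hair = ["I I I I I", "^ ^ ^ ^ ^"]
    eyes = ["O O", " O O "]
    nose = [" O ", " v "]
    mouth = ["~~~~~", "_____", "-----"]
    T += choice(hair) + "\n"
    T += choice(eyes) + "\n"
    T += choice(nose) + "\n"
    T += choice(mouth) + "\n"
    return T

print "Content-Type: text/html\n"
print "<!DOCTYPE html>"
print "<html><pre>"
print facegenerator()
print "</pre></html>"
```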
|
You would need a templating engine like Jinja for this: <http://jinja.pocoo.org/>
| 14,967
|
28,979,898
|
I downloaded `Microsoft Visual C++ Compiler for Python 2.7` and it installed in
`C:\Users\user\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\vcvarsall.bat`
However, I am getting the `error: Unable to find vcvarsall.bat` error when attempting to install "MySQL-python".
I added `C:\Users\user\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0;` to my Path.
I am using python 2.7.8
|
2015/03/11
|
[
"https://Stackoverflow.com/questions/28979898",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3412518/"
] |
Use the command prompt shortcut provided from installing the MSI.
This will launch the prompt with VCVarsall.bat activated for the targeted environment.
Depending on your installation, you can find this in the Start Menu under All Programs -> Microsoft Visual C++ For Python -> then pick the command prompt based on x64 or x86.
Otherwise, press Windows Key and search for "Microsoft Visual C++ For Python".
|
This worked:
<https://github.com/cython/cython/wiki/64BitCythonExtensionsOnWindows>
I uninstalled Visual Studio 2013 and .NET Framework 4 first.
I didn't need Visual Studio; I only installed it because I was playing with C++.
This worked in a virtual environment:
Add `C:\Program Files\Microsoft SDKs\Windows\v7.0\Bin;` to the system Path.
```
Start SDK Command Prompt
"C:\Program Files\Microsoft SDKs\Windows\v7.0\SetEnv.Cmd"
Setting SDK environment relative to C:\Program Files\Microsoft SDKs\Windows\v7.0.
Targeting Windows Server 2008 x64 DEBUG
C:\Program Files\Microsoft SDKs\Windows\v7.0>setlocal enabledelayedexpansion
C:\Program Files\Microsoft SDKs\Windows\v7.0>set DISTUTILS_USE_SDK=1
C:\Program Files\Microsoft SDKs\Windows\v7.0>SetEnv.Cmd /x86 /release
Setting SDK environment relative to C:\Program Files\Microsoft SDKs\Windows\v7.0.
Targeting Windows Server 2008 x86 RELEASE
C:\Program Files\Microsoft SDKs\Windows\v7.0>cd "C:\Users\USR01\virtualenvs\env1"
C:\Program Files\Microsoft SDKs\Windows\v7.0>.\Scripts\activate.bat
(env1) C:\Users\USR01\virtualenvs\env1>
(env1) C:\Users\USR01\virtualenvs\env1>pip install <module>
(env1) C:\Users\USR01\virtualenvs\env1>deactivate
```
| 14,968
|
60,063,620
|
I am working with code that my client insists cannot be changed. It needs to be called with a Python call like `subprocess.call()` ... The code uses the `exit()` function, and when exiting, it passes data as a parameter:
```
exit(data)
```
How can I capture the data parameter that the script is using when calling exit() without modifying the code to use a return or anything like that?
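For what it's worth, a hedged sketch: when `exit(data)` is given a non-integer argument, Python prints `str(data)` to stderr and exits with status 1 (an integer argument becomes the exit code instead), so a wrapper can capture it without touching the script. The script name below is hypothetical, and `subprocess.run` with these options requires Python 3.7+:
```
import subprocess

# run the untouched script and capture its streams
result = subprocess.run(
    ["python", "client_script.py"],  # hypothetical script name
    capture_output=True, text=True,
)
data = result.stderr.strip()  # str(data) from exit(data), when data isn't an int
print(result.returncode, data)
```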
|
2020/02/04
|
[
"https://Stackoverflow.com/questions/60063620",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7188090/"
] |
I found this in 3 Swift files at the end of my code:
```
class UIImage {
private func newBorderMask(_ borderSize: Int, size: CGSize) -> CGImageRef? {
}
}
```
So I saw that the code was redeclaring `class UIImage` after the `extension UIImage`. In each case, I moved the `private func` into the `extension UIImage` and removed the `class UIImage` from the code. This removed all of the `'UIImage' is ambiguous for type lookup in this context` errors throughout my project.
|
You need
```
import UIKit
```
at the top of the file
| 14,970
|
10,188,165
|
I'm using MongoDB for my Python (2.7) project with the Django framework. When I run
`python manage.py runserver` it works, but if I sync the db (`python manage.py syncdb`), the following error is displayed in the terminal:
```
Creating tables ...
Traceback (most recent call last):
File "manage.py", line 14, in <module>
execute_manager(settings)
File "/usr/lib/pymodules/python2.7/django/core/management/__init__.py", line 438, in execute_manager
utility.execute()
File "/usr/lib/pymodules/python2.7/django/core/management/__init__.py", line 379, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/pymodules/python2.7/django/core/management/base.py", line 191, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/lib/pymodules/python2.7/django/core/management/base.py", line 220, in execute
output = self.handle(*args, **options)
File "/usr/lib/pymodules/python2.7/django/core/management/base.py", line 351, in handle
return self.handle_noargs(**options)
File "/usr/lib/pymodules/python2.7/django/core/management/commands/syncdb.py", line 109, in handle_noargs
emit_post_sync_signal(created_models, verbosity, interactive, db)
File "/usr/lib/pymodules/python2.7/django/core/management/sql.py", line 190, in emit_post_sync_signal
interactive=interactive, db=db)
File "/usr/lib/pymodules/python2.7/django/dispatch/dispatcher.py", line 172, in send
response = receiver(signal=self, sender=sender, **named)
File "/usr/lib/pymodules/python2.7/django/contrib/auth/management/__init__.py", line 41, in create_permissions
"content_type", "codename"
File "/usr/lib/pymodules/python2.7/django/db/models/query.py", line 107, in _result_iter
self._fill_cache()
File "/usr/lib/pymodules/python2.7/django/db/models/query.py", line 772, in _fill_cache
self._result_cache.append(self._iter.next())
File "/usr/lib/pymodules/python2.7/django/db/models/query.py", line 959, in iterator
for row in self.query.get_compiler(self.db).results_iter():
File "/usr/local/lib/python2.7/dist-packages/djangotoolbox/db/basecompiler.py", line 229, in results_iter
for entity in self.build_query(fields).fetch(low_mark, high_mark):
File "/usr/local/lib/python2.7/dist-packages/djangotoolbox/db/basecompiler.py", line 290, in build_query
query.order_by(self._get_ordering())
File "/usr/local/lib/python2.7/dist-packages/djangotoolbox/db/basecompiler.py", line 339, in _get_ordering
raise DatabaseError("Ordering can't span tables on non-relational backends (%s)" % order)
```
and
```
django.db.utils.DatabaseError: Ordering can't span tables on non-relational backends (content_type__app_label)
```
How can I solve this problem?
|
2012/04/17
|
[
"https://Stackoverflow.com/questions/10188165",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1095290/"
] |
You need to use Django-nonrel instead of Django.
|
I've used mongoengine with Django, but you need to create a file like mongo\_models.py, for example. In that file you define your Mongo documents. You then create forms to match each Mongo document. Each form has a save method which inserts or updates what's stored in Mongo. Django forms are designed to plug into any data back end (with a bit of craft).
BEWARE: If you have very well-defined and structured data that can be described in documents or models, then don't use Mongo. It's not designed for that, and something like PostgreSQL will work much better.
* I use PostgreSQL for relational or well-structured data because it's good for that. Small memory footprint and good response.
* I use Redis to cache or operate on in-memory queues/lists because it's very good for that. Great performance, provided you have the memory to cope with it.
* I use Mongo to store large JSON documents and to perform map-reduce on them (if needed) because it's very good for that. Be sure to use indexing on certain columns if you can to speed up lookups.
Don't use a round peg to fill a square hole. It won't fill it.
I've seen too many posts where someone wanted to swap a relational DB for Mongo because Mongo is a buzzword. Don't get me wrong, Mongo is really great... when you use it appropriately. I love using Mongo appropriately.
| 14,971
|
56,693,939
|
My dict (`cpc_docs`) has a structure like
```py
{
sym1:[app1, app2, app3],
sym2:[app1, app6, app56, app89],
sym3:[app3, app887]
}
```
My dict has 15K keys and they are unique strings. Values for each key are a list of app numbers and they can appear as values for more than one key.
I've looked here: [Python: Best Way to Exchange Keys with Values in a Dictionary?](https://stackoverflow.com/questions/1031851/python-best-way-to-exchange-keys-with-values-in-a-dictionary), but since my values are lists, I get the error `unhashable type: list`.
I've tried the following methods:
```py
res = dict((v,k) for k,v in cpc_docs.items())
```
```py
for x,y in cpc_docs.items():
res.setdefault(y,[]).append(x)
```
```py
new_dict = dict (zip(cpc_docs.values(),cpc_docs.keys()))
```
None of these work of course since my values are lists.
I want each unique element from the value lists as a key, with all of the keys it appeared under as a list.
Something like this:
```py
{
app1:[sym1, sym2]
app2:[sym1]
app3:[sym1, sym3]
app6:[sym2]
app56:[sym2]
app89:[sym2]
app887:[sym3]
}
```
A bonus would be to order the new dict based on the len of each value list. So like:
```py
{
app1:[sym1, sym2]
app3:[sym1, sym3]
app2:[sym1]
app6:[sym2]
app56:[sym2]
app89:[sym2]
app887:[sym3]
}
```
|
2019/06/20
|
[
"https://Stackoverflow.com/questions/56693939",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5170800/"
] |
Your `setdefault` code is almost there, you just need an extra loop over the lists of values:
```
res = {}
for k, lst in cpc_docs.items():
for v in lst:
res.setdefault(v, []).append(k)
```
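For the bonus ordering, one line does it on Python 3.7+ (where plain dicts keep insertion order) — my addition, not part of the original answer:
```
# rebuild the dict with keys ordered by descending value-list length
res = dict(sorted(res.items(), key=lambda kv: len(kv[1]), reverse=True))
```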
|
### First create a list of key, value tuples
```py
new_list=[]
for k,v in cpc_docs.items():
for i in range(len(v)):
new_list.append((k,v[i]))
```
### Then, for each tuple in the list, add the value as a key if it isn't in the dict and append the original key
```py
from collections import defaultdict

doc_cpc = defaultdict(set)
for tup in new_list:  # iterate over the tuple list built above
    doc_cpc[tup[1]].add(tup[0])
```
Probably many better ways, but this works.
| 14,972
|
6,143,087
|
For one of my projects I have a Python program built around the Python [cmd](http://docs.python.org/library/cmd.html) class. This allowed me to craft a mini language around SQL statements that I was sending to a database. Besides making it far easier to connect with Python, I could do things that SQL can't do. This was very important for several projects. However, I now need to add if blocks for greater control flow.
My current thinking is that I will just add two new commands to the language, IF and END. These set a variable which determines whether or not to skip a line. I would like to know if anyone else has done this with the cmd module, and if so, is there a standard method I'm missing? Google doesn't seem to reveal anything, and the cmd docs don't reveal anything either.
For an idea that's similar to what I'm doing, go [here](http://blog.fogcreek.com/cheeky-python-a-redis-cli/). Questions and comments welcome. :)
Hmm, a little more complicated than what I was thinking, though having Python syntax would be nice. I debated building a mini language for quite some time before I finally did it. The problem primarily comes in from the external limitations. I have a bunch of "data", which is being generous, to turn into SQL. This is based on other "data" that won't pass through. It's also unique to each specific "version" of the problem. Doing straight data-to-SQL would have been my first inclination, but it was not practical.
For the curious, I spent a great deal of time going over the mini languages chapter in the art of unix programming, found [here](http://www.catb.org/~esr/writings/taoup/html/minilanguageschapter.html).
If I had built the thing in pure python, I wouldn't have had the flexibility I absolutely needed for the problem set.
|
2011/05/26
|
[
"https://Stackoverflow.com/questions/6143087",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/724357/"
] |
The limitations of making a "mini language" have become apparent.
Proper languages have a tree-like structure and more complex syntax than `cmd` can handle easily.
Sometimes it's actually easier to use Python directly than it is to invent your own DSL.
Currently, your DSL probably reads a script-like file of commands.
Because of the way `cmd` works, your little commands get a string argument, which must be parsed. Then the command gets executed. And, further, each command is a method of the `cmd.Cmd` subclass.
Here's what you can do.
Each `do_foo( self, args )` method becomes a stand-alone callable object. It will follow the **Command** design pattern. It will do **exactly** what the method function does now. No less. **Exactly** the same.
```
class Foo( object ):
def __init__( self, global_context ):
self.context= global_context
def __call__( self, args ):
... The rest of do_foo ...
```
Additionally, your existing `cmd.Cmd` subclass probably maintains some internal state.
All of the `self.this` and `self.that` instance variables must be changed to reference
an explicit context object.
```
class Context( object ): pass
```
Change `self.this` or `self.that` to `self.context.this` or `self.context.that`
Now, you can create your context and your various commands.
```
ctx = Context()
foo= Foo(ctx)
```
Your script changes syntax slightly. From:
```
foo argstring
bar argstring
```
to:
```
from mylanguage import foo, bar
foo( "argstring" )
bar( "argstring" )
```
This does **Exactly** what the CLI does now. No more. No less. **Exactly** the same. Slightly different syntax.
Now your script is no longer in a DSL that's hard to expand. It's in Python.
Having done that, you can now use Python syntax `if` statements.
You have the **Exact** functionality currently implemented in `cmd` with better syntax.
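As a hedged, self-contained illustration of the sketch above (invented names; the `print` stands in for the real body of `do_foo`):
```
class Context(object):
    """Shared state that used to live on the cmd.Cmd subclass."""
    pass

class Foo(object):
    def __init__(self, global_context):
        self.context = global_context
    def __call__(self, args):
        # ... the body of the old do_foo goes here, using self.context ...
        print("foo ran with %r" % args)

ctx = Context()
foo = Foo(ctx)

foo("argstring")        # was:  foo argstring
if True:                # a plain Python `if` replaces the IF/END commands
    foo("other argstring")
```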
|
After examining the problem set some more, I've come to the conclusion that I can leave the minilanguage alone. It has all the features I need, and I don't have the time to rebuild the project from the ground up. This has been an interesting problem and I'm no longer sure I would build another minilanguage if I encountered the same situation. OTOH, it works very well here, and I am loath to give up the advantages it has conferred.
| 14,973
|