| qid (int64, 46k to 74.7M) | question (string, length 54 to 37.8k) | date (string, length 10) | metadata (list, length 3) | response\_j (string, length 29 to 22k) | response\_k (string, length 26 to 13.4k) | \_\_index\_level\_0\_\_ (int64, 0 to 17.8k) |
|---|---|---|---|---|---|---|
57,643,746
|
I have a pipeline with a set of PTransforms and my method is getting very long.
I'd like to write my DoFns and my composite transforms in a separate package and use them back in my main method. With Python it's pretty straightforward; how can I achieve that with Scio? I don't see any example of doing that. :(
```
withFixedWindows(
FIXED_WINDOW_DURATION,
options = WindowOptions(
trigger = groupedWithinTrigger,
timestampCombiner = TimestampCombiner.END_OF_WINDOW,
accumulationMode = AccumulationMode.ACCUMULATING_FIRED_PANES,
allowedLateness = Duration.ZERO
)
)
.sumByKey
// How can I write this in another file and use it here?
.transform("Format Output") {
_
.withWindow[IntervalWindow]
.withTimestamp
}
```
|
2019/08/25
|
[
"https://Stackoverflow.com/questions/57643746",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9726037/"
] |
You can try the "Latin-1" encoding option (the standard also known as ISO 8859-1 or ISO/IEC 8859-1).
```
library(data.table)
type <- fread(file.path("C:/Users/Alonso/Desktop/Tesis_MGII/Avance_mayo/escrito/natural-disasters-by-type.csv"), encoding = "Latin-1")
```
|
Use the encoding option inside your read.csv call.
The following code sample works for me:
```
file <- textConnection("# ---------------------
#
# ---------------------
Año, Número
2001, 3152
2002, 3200
2003, 3500
2004, 3700
2005, 3850
2006, 4200", encoding = c("UTF-8"))
file
# read data from textConnection
desaster <- read.csv(file, skip=3, header=TRUE, blank.lines.skip = TRUE, sep=",", encoding="UTF-8")
desaster
```
| 7,947
|
61,996,944
|
I'm completely new to the Python world, so I've been struggling with this issue for a couple of days now. I thank you guys in advance.
I have been trying to separate text in a single row and column into three different ones. To explain myself better, here's where I am.
So this is my pandas dataframe from a csv:
In[2]:
```
df = pd.read_csv('raw_csv/consejo_judicatura_guerrero.csv', header=None)
df.columns = ["institution"]
df
```
Out[2]:
```
institution
0 1.1.2. Consejo Nacional de Ciencias (CNCOO00012)
```
Then, I first try to separate the **1.1.2.** into a new column called **number**, which I kind of nailed:
In[3]:
```
new_df = pd.DataFrame(df['institution'].str.split('. ',1).tolist(),columns=['number', 'institution'])
```
Out[3]:
```
number institution
0 1.1.2. Consejo Nacional de Ciencias (CNCOO00012)
```
Finally, trying to split the **(CNCOO00012)** into a new column called **unit\_id**, I get the following:
In[4]:
```
new_df['institution'] = pd.DataFrame(new_df['institution'].str.split('(').tolist(),columns=['institution', 'unit_id'])
```
Out[4]:
```
------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-70d13206881c> in <module>
----> 1 new_df['institution'] = pd.DataFrame(new_df['institution'].str.split('(').tolist(),columns=['institution', 'unit_id'])
~/opt/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py in __init__(self, data, index, columns, dtype, copy)
472 if is_named_tuple(data[0]) and columns is None:
473 columns = data[0]._fields
--> 474 arrays, columns = to_arrays(data, columns, dtype=dtype)
475 columns = ensure_index(columns)
476
~/opt/anaconda3/lib/python3.7/site-packages/pandas/core/internals/construction.py in to_arrays(data, columns, coerce_float, dtype)
459 return [], [] # columns if columns is not None else []
460 if isinstance(data[0], (list, tuple)):
--> 461 return _list_to_arrays(data, columns, coerce_float=coerce_float, dtype=dtype)
462 elif isinstance(data[0], abc.Mapping):
463 return _list_of_dict_to_arrays(
~/opt/anaconda3/lib/python3.7/site-packages/pandas/core/internals/construction.py in _list_to_arrays(data, columns, coerce_float, dtype)
491 else:
492 # list of lists
--> 493 content = list(lib.to_object_array(data).T)
494 # gh-26429 do not raise user-facing AssertionError
495 try:
pandas/_libs/lib.pyx in pandas._libs.lib.to_object_array()
TypeError: object of type 'NoneType' has no len()
```
What can I do to successfully achieve this task?
|
2020/05/25
|
[
"https://Stackoverflow.com/questions/61996944",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13611327/"
] |
You can use `assign` with `str.split` like below, but the format of the text should be fixed.
```
df.assign(number = df.institution.str.split().str[0], \
unit_id = df.institution.str.split().str[-1])
```
Output:
```
institution number unit_id
0 1.1.2. Consejo Nacional de Ciencias (CNCOO00012) 1.1.2. (CNCOO00012)
```
Or, if you want to strip `()` from `unit_id`, use
```
df.assign(number = df.institution.str.split().str[0], \
unit_id = df.institution.str.split().str[-1].str.strip('()'))
institution number unit_id
0 1.1.2. Consejo Nacional de Ciencias (CNCOO00012) 1.1.2. CNCOO00012
```
|
Just a thought, but what about using named capture groups in a regular expression? For example, use the following after you import your CSV file:
```
df.iloc[:,0].str.extract(r'^(?P<number>[\d.]*)\s+(?P<institution>.*)\s+\((?P<unit_id>[A-Z\d]*)\)$')
```
This would expand your dataframe as such:
```
number institution unit_id
0 1.1.2. Consejo Nacional de Ciencias CNCOO00012
```
---
About the regular expression's pattern:

* `^` - A start-of-string anchor.
* `^(?P<number>[\d.]*)` - A named capture group, number, made up of zero or more characters (greedy) in a character class of dots and digits.
* `\s+` - One or more spaces.
* `(?P<institution>.*)` - A named capture group, institution, made up of zero or more characters (greedy) other than newline.
* `\s+\(` - One or more spaces followed by a literal opening parenthesis.
* `(?P<unit_id>[A-Z\d]*)` - A named capture group, unit\_id, made up of zero or more characters (greedy) in a character class of uppercase letters and digits.
* `\)$` - A closing parenthesis followed by an end-of-string anchor.
[Online Demo](https://regex101.com/r/ZliNc0/4)
| 7,949
|
36,002,647
|
Given an iterator `i`, I want an iterator that yields each element `n` times, i.e., the equivalent of this function
```
def duplicate(i, n):
for x in i:
for k in range(n):
yield x
```
Is there a one-liner for this?
Related question: [duplicate each member in a list - python](https://stackoverflow.com/questions/2449077/duplicate-each-member-in-a-list-python), but the `zip` solution doesn't work here.
|
2016/03/15
|
[
"https://Stackoverflow.com/questions/36002647",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2201385/"
] |
```
itertools.chain.from_iterable(itertools.izip(*itertools.tee(source, n)))
```
Example:
```
>>> x = (a**2 for a in xrange(5))
>>> list(itertools.chain.from_iterable(itertools.izip(*itertools.tee(x, 3))))
[0, 0, 0, 1, 1, 1, 4, 4, 4, 9, 9, 9, 16, 16, 16]
```
Another way:
```
itertools.chain.from_iterable(itertools.repeat(item, n) for item in source)
>>> x = (a**2 for a in xrange(5))
>>> list(itertools.chain.from_iterable(itertools.repeat(item, 3) for item in x))
[0, 0, 0, 1, 1, 1, 4, 4, 4, 9, 9, 9, 16, 16, 16]
```
|
Use a generator expression:
```
>>> x = (n for n in range(4))
>>> i = (v for v in x for _ in range(3))
>>> list(i)
[0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
```
| 7,951
|
4,081,230
|
**Update 2010-11-02 7p:** Shortened description; posted initial bash solution.
---
**Description**
I'd like to create a semantic file structure to better organize my data. I don't want to go a route like recoll, strigi, or beagle; I want no GUI and full control. The closest might be oyepa or, even closer, [Tagsistant](http://www.tagsistant.net/index.php).
Here's the idea: one maintains a "regular" tree of their files. For example, mine are organized in project folders like this:
```
,---
| ~/proj1
| ---- ../proj1_file1[tag1-tag2].ext
| ---- ../proj1_file2[tag3]_yyyy-mm-dd.ext
| ~/proj2
| ---- ../proj2_file3[tag2-tag4].ext
| ---- ../proj1_file4[tag1].ext
`---
```
proj1, proj2 are very short abbreviations I have for my projects.
Then what I want to do is recursively go through the directory and get the following:
* proj ID
* tags
* extension
Each of these will form a complete "tag list" for each file.
Then in a user-defined directory, a "semantic hierarchy" will be created based on these tags. This gets a bit long, so just take a look at the directory structure created for all files containing tag2 in the name:
```
,---
| ~/tag2
| --- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
| ---../tag1
| ------- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../tag4
| ------- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
| --- ../proj1
| ------- ../proj1_file1[tag1-tag2].ext -> ~/proj1/proj1_file1[tag1-tag2].ext
| --- ../proj2
| ------- ../proj2_file3[tag2-tag4].ext -> ~/proj2/proj2_file3[tag2-tag4].ext
`---
```
In other words, directories are created with all combinations of a file's tags, and each contains a symlink to the actual files having those tags. I have omitted the file type directories, but these would also exist. It looks really messy in type, but I think the effect would be very cool. One could then find a given file along a number of "tag breadcrumbs."
My thoughts so far:
* ls -R in a top directory to get all the file names
* identify those files with a [ and ] in the filename (tagged files)
* with what's left, enter a loop:
+ strip out the proj ID, tags, and extension
+ create all the necessary dirs based on the tags
+ create symlinks to the file in all of the dirs created
**First Solution! 2010-11-3 7p**
Here's my current working code. It only works on files in the top-level directory, does not figure out extension types yet, and only works on 2 tags + the project ID for a total of 3 tags per file. It is a hacked manual chug solution, but maybe it will help someone see what I'm doing and how this could be muuuuch better:
```
#!/bin/bash
########################
#### User Variables ####
########################
## set top directory for the semantic filer
## example: ~/semantic
## result will be ~/semantic/tag1, ~/semantic/tag2, etc.
top_dir=~/Desktop/semantic
## set document extensions, space separated
## example: "doc odt txt"
doc_ext="doc odt txt"
## set presentation extensions, space separated
pres_ext="ppt odp pptx"
## set image extensions, space separated
img_ext="jpg png gif"
#### End User Variables ####
#####################
#### Begin Script####
#####################
cd $top_dir
ls -1 | (while read fname;
do
if [[ $fname == *[* ]]
then
tag_names=$( echo $fname | sed -e 's/-/ /g' -e 's/_.*\[/ /' -e 's/\].*$//' )
num_tags=$(echo $tag_names | wc -w)
current_tags=( `echo $tag_names | sed -e 's/ /\n/g'` )
echo ${current_tags[0]}
echo ${current_tags[1]}
echo ${current_tags[2]}
case $num_tags in
3)
mkdir -p ./${current_tags[0]}/${current_tags[1]}/${current_tags[2]}
mkdir -p ./${current_tags[0]}/${current_tags[2]}/${current_tags[1]}
mkdir -p ./${current_tags[1]}/${current_tags[0]}/${current_tags[2]}
mkdir -p ./${current_tags[1]}/${current_tags[2]}/${current_tags[0]}
mkdir -p ./${current_tags[2]}/${current_tags[0]}/${current_tags[1]}
mkdir -p ./${current_tags[2]}/${current_tags[1]}/${current_tags[0]}
cd $top_dir/${current_tags[0]}
echo $PWD
ln -s $top_dir/$fname
ln -s $top_dir/$fname ./${current_tags[1]}/$fname
ln -s $top_dir/$fname ./${current_tags[2]}/$fname
cd $top_dir/${current_tags[1]}
echo $PWD
ln -s $top_dir/$fname
ln -s $top_dir/$fname ./${current_tags[0]}/$fname
ln -s $top_dir/$fname ./${current_tags[2]}/$fname
cd $top_dir/${current_tags[2]}
echo $PWD
ln -s $top_dir/$fname
ln -s $top_dir/$fname ./${current_tags[0]}/$fname
ln -s $top_dir/$fname ./${current_tags[1]}/$fname
cd $top_dir
;;
esac
fi
done
)
```
It's actually pretty neat. If you want to try it, do this:
* create a dir somewhere
* use touch to create a bunch of files with the format above: proj\_name[tag1-tag2].ext
* define the top\_dir variable
* run the script
* play around!
**ToDo**
* make this work using an "ls -R" in order to get into sub-dirs in my actual tree
* robustness check
* consider switching languages; hey, I've always wanted to learn perl and/or python! (see the Python sketch below)
Still open to any suggestions you have. Thanks!
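Since the ToDo mentions possibly switching languages, here is a minimal Python sketch of the core idea: walking the tree, parsing names like `proj_file[tag1-tag2].ext`, and symlinking into every ordering of every subset of a file's tags. The regex and the directory constants are assumptions, not a drop-in replacement for the script above:
```
import itertools
import os
import re

TOP_DIR = os.path.expanduser("~/Desktop/semantic")   # where the tag tree is built
SOURCE_DIR = os.path.expanduser("~/projects")        # hypothetical tree to scan

# proj1_file1[tag1-tag2].ext -> proj = "proj1", tags = "tag1-tag2"
TAG_RE = re.compile(r"^(?P<proj>[^_]+)_.*\[(?P<tags>[^\]]+)\]")

for dirpath, _, files in os.walk(SOURCE_DIR):
    for fname in files:
        m = TAG_RE.match(fname)
        if not m:
            continue  # not a tagged file
        tags = [m.group("proj")] + m.group("tags").split("-")
        # every non-empty ordering of every subset of the file's tags
        for r in range(1, len(tags) + 1):
            for combo in itertools.permutations(tags, r):
                d = os.path.join(TOP_DIR, *combo)
                os.makedirs(d, exist_ok=True)
                link = os.path.join(d, fname)
                if not os.path.lexists(link):
                    os.symlink(os.path.join(dirpath, fname), link)
```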
|
2010/11/02
|
[
"https://Stackoverflow.com/questions/4081230",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/495152/"
] |
>
> A privileged client can invoke the private constructor reflectively with the aid of the AccessibleObject.setAccessible method. If you need to defend against this, modify the constructor. My question is: how exactly can a private constructor be invoked? And what is AccessibleObject.setAccessible?
>
>
>
Obviously a private constructor can be invoked by the class itself (e.g. from a static factory method). Reflectively, what Bloch is talking about is this:
```
import java.lang.reflect.Constructor;
public class PrivateInvoker {
public static void main(String[] args) throws Exception{
//compile error
// Private p = new Private();
//works fine
Constructor<?> con = Private.class.getDeclaredConstructors()[0];
con.setAccessible(true);
Private p = (Private) con.newInstance();
}
}
class Private {
private Private() {
System.out.println("Hello!");
}
}
```
>
> 2.What approach do you experts follow with singletons:
>
>
> ...
>
>
>
Typically, the first is favoured. The second (assuming you were to test if `TestInstance` is null before returning a new instance) gains lazy-loading at the cost of needing to be synchronized or being thread-unsafe.
I wrote the above when your second example didn't assign the instance to `TestInstance` at declaration. As stated now, the above consideration is irrelevant.
>
> Isn't the second approach more flexible in case we have to make a check for new instance every time or same instance every time?
>
>
>
It's not about flexibility, it's about when the cost of creating the one (and only) instance is incurred. If you do option a) it's incurred at class loading time. That's typically fine since the class is only loaded once it's needed anyway.
I wrote the above when your second example didn't assign the instance to `TestInstance` at declaration. As stated now, in both cases the Singleton will be created at class load.
>
> What if I try to clone the class/object?
>
>
>
A singleton should not allow cloning for obvious reasons. A CloneNotSupportedException should be thrown, and will be automatically unless you for some reason implement `Cloneable`.
>
> a single-element enum type is the best way to implement a singleton. Why? and How?
>
>
>
Examples for this are in the book, as are justifications. What part didn't you understand?
|
>
> A privileged client can invoke the private constructor reflectively with
> the aid of the
> AccessibleObject.setAccessible method.
> If you need to defend against this, modify the
> constructor. My question is: how
> exactly can a private constructor be
> invoked? And what is
> AccessibleObject.setAccessible?
>
>
>
You can use Java reflection to call private constructors.
>
> What approach do you experts follow with singletons:
>
>
>
I am a fan of using an enum to actually do this. This is in the book also. If that's not an option, it is much simpler to do option a) because you don't have to check or worry about the instance already being created.
>
> What if I try to clone the
> class/object?
>
>
>
Not sure what you mean. Do you mean clone() or something else that I don't know of?
>
> a single-element enum type is the best way to implement a singleton.
> Why? and How?
>
>
>
Ahh, my own answer. haha. This is the best way because, in this case, the Java programming language guarantees a singleton instead of the developer having to check for a singleton. It's almost as if the singleton were part of the framework/language.
Edit:
I didn't see that it was a getter before. Updating my answer to this: it's better to use a function such as getInstance because you can control what happens through a getter, but you can't do the same if everybody is using a reference directly instead. Think of the future: if you ended up doing SomeClass.INSTANCE and then later wanted to make it lazy so it doesn't load right away, you would need to change this everywhere it's being used.
| 7,954
|
8,342,891
|
I'm very new to Python and have a question. Currently I'm using this to calculate the time between when a message goes out and when the message comes in. The resulting starting time, time delta, and unique ID are then presented in a file. I'm using Python 2.7.2.
Currently it is subtracting the two times, and the result is 0.002677777777 (the 0.00 parts are hours and minutes), which I don't need. How do I format it so that it starts with seconds (59.178533,00), with 59 being seconds? Also, sometimes Python will display the number as 8.9999999 x e-05. How can I force it to always display the exact number?
```
def time_deltas(infile):
entries = (line.split() for line in open(INFILE, "r"))
ts = {}
for e in entries:
if " ".join(e[2:5]) == "OuchMsg out: [O]":
ts[e[8]] = e[0]
elif " ".join(e[2:5]) == "OuchMsg in: [A]":
in_ts, ref_id = e[0], e[7]
out_ts = ts.pop(ref_id, None)
yield (float(out_ts),ref_id[1:-1], float(in_ts) - float(out_ts))
INFILE = 'C:/Users/klee/Documents/Ouch20111130_cartprop04.txt'
import csv
with open('output_file.csv', 'w') as f:
csv.writer(f).writerows(time_deltas(INFILE))
```
Here's a small sample of the data:
```
61336.206267 - OuchMsg out: [O] enter order. RefID [F25Q282588] OrdID [X1F3500687 ]
```
|
2011/12/01
|
[
"https://Stackoverflow.com/questions/8342891",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1057243/"
] |
If `float(out_ts)` is in hours, then `'{0:.3f}'.format(float(out_ts) * 3600.)` will give you the text representation of seconds with three digits after the decimal point.
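For instance, a quick interactive sketch (the sample value is made up):
```
>>> out_ts = 0.016438  # hours
>>> '{0:.3f}'.format(out_ts * 3600.)
'59.177'
```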
|
>
> How can I force it to always display the exact number.
>
>
>
This is sometimes not possible, because of floating point inaccuracies.
What you *can* do, on the other hand, is format your floats as you wish. See [here](http://docs.python.org/library/stdtypes.html#string-formatting-operations) for instructions.
E.g., replace your `yield` line with:
```
yield ("%.5f"%float(out_ts),"%.5f"%ref_id[1:-1], "%.5f"%(float(in_ts) - float(out_ts)))
```
to format your float values with 5 digits after the decimal point (`ref_id` is a string, so it is left unformatted). There are other formatting options for a lot of different formats.
| 7,963
|
2,036,260
|
>
> **Possible Duplicate:**
>
> [Django Unhandled Exception](https://stackoverflow.com/questions/1925898/django-unhandled-exception)
>
>
>
I'm randomly getting 500 server errors and trying to diagnose the problem. The setup is:
Apache + mod\_python + Django
My 500.html page is being served by Django, but I have no clue what is causing the error. My Apache access.log and error.log files don't contain any valuable debugging info, besides showing the request returned a 500.
Is there a mod\_python or general python error log somewhere (Ubuntu server)?
Thanks!
|
2010/01/10
|
[
"https://Stackoverflow.com/questions/2036260",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/87719/"
] |
Yes, you should have an entry in your apache confs as to where the error log is for the virtual server.
For instance the name of my virtual server is djangoserver, and in my /etc/apache2/sites-enabled/djangoserver file is the line
```
ErrorLog /var/log/apache2/djangoserver-errors.log
```
Although now that I reread your question, it appears you already have the Apache log taken care of. I don't believe there is any separate log for mod\_python or Python itself.
Is this a production setup?
If not, you might wish to turn on [Debug mode](http://docs.djangoproject.com/en/1.1/ref/settings/#debug); Django then produces very detailed screens with the error information. In your project's settings.py, set
```
DEBUG = True
TEMPLATE_DEBUG = DEBUG
```
to disable the generic 500 error screens and see the detailed breakdown.
|
Salsa makes several good suggestions. I would add that the Django development server is an excellent environment for tracking these things down. Sometimes I even run it from the production directory (gasp!) with `./manage.py runserver 0.0.0.0:8000` so I *know* I'm running the same code.
Admittedly sometimes something will fail under Apache and not under the dev server, but that is a hint in and of itself.
| 7,964
|
58,550,284
|
After an automatic update of [macOS v10.15](https://en.wikipedia.org/wiki/MacOS_Catalina) (Catalina), I am unable to open Xcode. Xcode prompts me to install additional components but the installation fails because of MobileDevice.pkg (Applications/Xcode.app/Contents/Resources/Packages)
I have found multiple answers on how to locate MobileDevice.pkg and that I should try to install it directly, but when I try to do this the installation fails too. I have also tried updating Xcode from [App Store](https://en.wikipedia.org/wiki/App_Store_(macOS)), but the update failed when it was nearly finished.
Has anyone experienced the same behaviour? Should I reset the Mac to default and install [macOS v10.13](https://en.wikipedia.org/wiki/MacOS_High_Sierra) (High Sierra) or Catalina from scratch, or is it a problem with Xcode that a re-install would fix?
I have found a discussion [here](https://discussions.apple.com/thread/250782804) that was posted today and is probably regarding the same issue and it seems like many people are dealing with it, too.
The log:
```none
*2019-10-25 01:03:34+02 Vendula-MacBook-Pro Xcode[1567]: Package: PKLeopardPackage
<id=com.apple.pkg.MobileDevice, version=4.0.0.0.1.1567124787, url=file:///Applications/Xcode.app/Contents/Resources/Packages/MobileDevice.pkg>
Failed to verify with error: Error Domain=PKInstallErrorDomain Code=102
"The package “MobileDevice.pkg” is untrusted."
UserInfo={
NSLocalizedDescription=The package “MobileDevice.pkg” is untrusted.,
NSURL=MobileDevice.pkg -- file:///Applications/Xcode.app/Contents/Resources/Packages/,
PKInstallPackageIdentifier=com.apple.pkg.MobileDevice,
NSUnderlyingError=0x7fabf6626d00
{
Error Domain=NSOSStatusErrorDomain
Code=-2147409654 "CSSMERR_TP_CERT_EXPIRED"
UserInfo={
SecTrustResult=5,
PKTrustLevel=PKTrustLevelExpiredCertificate,
NSLocalizedFailureReason=CSSMERR_TP_CERT_EXPIRED
}
}
}*
```
|
2019/10/24
|
[
"https://Stackoverflow.com/questions/58550284",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11216994/"
] |
I had a similar problem, where I installed Xcode 11.1 and installed the components, all within the same folder where I had Xcode 10.2.1. Then I tried to go back to Xcode 10.2.1 and couldn't open it, as it was asking me to install components again, and when I tried I was getting this error.
>
> The package “MobileDeviceDevelopment.pkg” is untrusted.
>
>
>
So, the workaround that fixed it for me was navigating to...
```
/Users/YourUser/Applications/Xcode\ 10.2.1.app/Contents/Resources/
```
Then I deleted **MobileDeviceDevelopment.pkg**, and everything went back to normal :)
I hope this helps anyone else with this issue. Cheers!
|
You may solve this issue by setting the date of your Mac to October 1st, 2019. But this is just a hack! The real solution (suggested by Apple) is this:
All you have to do is upgrade Xcode
-----------------------------------
**But** there is a [known Issues on apple developers site](https://developer.apple.com/documentation/xcode_release_notes/xcode_11_1_release_notes#3400988)
>
> Xcode may fail to update from the Mac App Store after updating to macOS Catalina. (56061273)
>
>
>
Apple suggests this:
>
> To trigger a new download you can delete the existing Xcode.app or temporarily change the file extension so it is no longer visible to the App Store.
>
>
>
---
Always working solution for all Xcode issues:
---------------------------------------------
1. Go [here](http://developer.apple.com/account) and log in.
2. Then [**download the xip from here**](https://developer.apple.com/services-account/download?path=/Developer_Tools/Xcode_11.2.1/Xcode_11.2.1.xip).
More information [here on this answer](https://stackoverflow.com/a/58285944/5623035).
---
Answer to this specific issue
-----------------------------
Get rid of those packages.
```
cd /Applications/Xcode.app/Contents/Resources/Packages
sudo rm -rf MobileDevice.pkg
sudo rm -rf MobileDeviceDevelopment.pkg
```
Xcode will install all of them again for you.
| 7,965
|
30,668,557
|
I have a problem loading an external DLL using Python through Python for .NET. I have tried different methodologies following Stack Overflow and similar sites. I will try to summarize the situation and describe all the steps I've done.
I have a DLL named, e.g., Test.NET.dll. I checked with dotPeek and, clicking on it, I can see x64 and .NET Framework v4.5. On my computer I have the .NET Framework 4 installed.
I have also installed Python for .NET in different ways. I think the best one is to download the .whl from this website: [LINK](http://www.lfd.uci.edu/~gohlke/pythonlibs/#pythonnet). I have downloaded and installed pythonnet‑2.0.0.dev1‑cp27‑none‑win\_amd64.whl. I can imagine that it will work for .NET 4.0, since it *Requires the Microsoft .NET Framework 4.0.*
Once I have installed everything, I can do these commands:
```
>>> import clr
>>> import System
>>> print System.Environment.Version
4.0.30319.34209
```
It seems to work. Then, I tried to load my DLL by typing these commands:
```
>>> import clr
>>> dllpath= r'C:\Program Files\API\Test.NET'
>>> clr.AddReference(dllpath)
Traceback (most recent call last):
File "<pyshell#20>", line 1, in <module>
clr.AddReference(dllpath)
FileNotFoundException: Unable to find assembly 'C:\Program Files\API\Test.NET'.
at Python.Runtime.CLRModule.AddReference(String name)
```
I have also tried to add '.dll' at the end of the path, but nothing changed. Then I tried different solutions as described in [LINK](https://stackoverflow.com/questions/14633695/how-to-install-python-for-net-on-windows), [LINK](https://stackoverflow.com/questions/13259617/python-for-net-unable-to-find-assembly-error), [LINK](https://github.com/geographika/PythonDotNet), and much more... Unfortunately, it doesn't work and I get different errors. I know that IronPython exists, but I was trying to avoid using it.
Thanks for your help!
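For context, the usual pattern with Python for .NET is to put the DLL's folder on `sys.path` and pass the assembly *name* (no path, no extension) to `clr.AddReference`. A minimal sketch, untested against this particular DLL:
```
import sys
sys.path.append(r'C:\Program Files\API')  # folder containing Test.NET.dll

import clr
clr.AddReference('Test.NET')  # assembly name, without the .dll extension
```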
|
2015/06/05
|
[
"https://Stackoverflow.com/questions/30668557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3075816/"
] |
>
> The return value of this decorator is ignored.
>
>
>
This is irrelevant to your situation. The return value of parameter decorators is ignored because they don't need to be able to replace anything (unlike method and class decorators which can replace the descriptor).
>
> My problem is that when I try to implement the decorator. **All the changes that I do to the target are ignored.**
>
>
>
The target is the object's prototype. It works fine:
```
class MyClass {
myMethod(@logParam myParameter: string) {}
}
function logParam(target: any, methodKey: string, parameterIndex: number) {
target.test = methodKey;
// and parameterIndex says which parameter
}
console.log(MyClass.prototype["test"]); // myMethod
var c = new MyClass();
console.log(c["test"]); // myMethod
```
Just change that situation to put the data where you want it (using `Reflect.defineMetadata` is probably best).
|
For my case I used a simple solution **without the reflect metadata API** but **using method decoration**.
You can modify it for your purposes:
```
type THandler = (param: any, paramIndex: number, params: any[]) => void;
/**
* @example
* @Params((param1) => someHandlerFn(param1), ...otherParams)
* public async someMethod(param1, ...otherParams) { ...
*/
function Params(...handlers: THandler[]): MethodDecorator {
return (_target, _propertyKey, descriptor: PropertyDescriptor) => {
const originalMethod = descriptor.value;
descriptor.value = async function (...args: any[]) {
args.forEach((arg, index) => {
const handler = handlers[index];
if (handler) {
handler(arg, index, args);
}
});
return originalMethod.apply(this, args);
};
};
}
```
With usage:
```
export class SomeClass {
@Params(console.log, (param2) => {
if (param2.length < 3) {
console.log('param2 length must be more than 2 symbols');
}
}) // ...add your handlers
async someMethod(param1: string, param2: string) {
// ...some implementation
}
}
```
| 7,975
|
49,369,438
|
I'm trying to create a blobstore entry from an image data-uri object, but am getting stuck.
Basically, I'm posting via ajax the data-uri as text, an example of the payload:
```
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAPA...
```
I'm trying to receive this payload with the following handler. I'm assuming I need to convert the `data-uri` back into an image before storing? So I'm using the PIL library.
My python handler is as follows:
```
import os
import urllib
import webapp2
from google.appengine.ext.webapp import template
from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers
from google.appengine.api import images
class ImageItem(db.Model):
section = db.StringProperty(required=False)
description = db.StringProperty(required=False)
img_url = db.StringProperty()
blob_info = blobstore.BlobReferenceProperty()
when = db.DateTimeProperty(auto_now_add=True)
#Paste upload handler
class PasteUpload(webapp2.RequestHandler):
def post(self):
from PIL import Image
import io
import base64
data = self.request.body
#file_name = data['file_name']
img_data = data.split('data:image/png;base64,')[1]
#Convert base64 to jpeg bytes
f = Image.open(io.BytesIO(base64.b64decode(img_data)))
img = ImageItem(description=self.request.get('description'), section=self.request.get('section') )
img.blob_info = f.key()
img.img_url = images.get_serving_url( f.key() )
img.put()
```
This is likely all kinds of wrong. I get the following error when posting:
```
img.blob_info = f.key()
AttributeError: 'PngImageFile' object has no attribute 'key'
```
What am I doing wrong here? Is there an easier way to do this? I'm guessing I don't need to convert the `data-uri` into an image to store as a blob?
I also want this Handler to return the URL of the image created in the blobstore.
|
2018/03/19
|
[
"https://Stackoverflow.com/questions/49369438",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/791793/"
] |
There are a couple of ways to view your question and the sample code you posted, and it's a little confusing what you need because you are mixing strategies and technologies.
**POST base64 to `_ah/upload/...`**
Your service uses `create_upload_url()` to make a one-time upload URL/session for your client. Your client makes a POST to that URL and the data never touches your service (no HTTP-request-size restrictions, no CPU-time spent handling the POST). An App Engine internal "blob service" receives that POST and saves the body as a Blob in the Blobstore. App Engine then hands control back to your service in the `BlobstoreUploadHandler` class you write and then you can determine how you want to respond to the successful POST. In the case of the example/tutorial, `PhotoUploadHandler` redirects the client to the photo that was just uploaded.
That POST from your client must be encoded as `multipart/mixed` and use the fields shown in the example HTML `<form>`.
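For reference, a minimal sketch of the service side of that flow (handler names and routes here are hypothetical; the API calls are the standard `blobstore` ones):
```
import webapp2
from google.appengine.ext import blobstore
from google.appengine.ext.webapp import blobstore_handlers

class GetUploadUrl(webapp2.RequestHandler):
    def get(self):
        # One-time URL the client will POST its multipart form to
        self.response.write(blobstore.create_upload_url('/upload'))

class PhotoUploadHandler(blobstore_handlers.BlobstoreUploadHandler):
    def post(self):
        # App Engine has already stored the blob; we just read its info
        blob_info = self.get_uploads('file')[0]
        self.redirect('/serve/%s' % blob_info.key())
```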
The multipart form can take the optional parameter, `Content-Transfer-Encoding`, and the App Engine internal handler will properly decode base64 data. From `blob_upload.py`:
```
base64_encoding = (form_item.headers.get('Content-Transfer-Encoding') ==
'base64')
...
if base64_encoding:
blob_file = cStringIO.StringIO(base64.urlsafe_b64decode(blob_file.read()))
...
```
Here's a complete multipart form I tested with cURL, based on the fields used in the example. I found out how to do this over at [Is there a way to pass the content of a file to curl?](https://stackoverflow.com/questions/31061379/is-there-a-way-to-pass-the-content-of-a-file-to-curl):
**myconfig.txt**:
```
header = "Content-length: 435"
header = "Content-type: multipart/mixed; boundary=XX
data-binary = "@myrequestbody.txt"
```
**myrequestbody.txt**:
```
--XX
Content-Disposition: form-data; name="file"; filename="test.gif"
Content-Type: image/gif
Content-Transfer-Encoding: base64
R0lGODdhDwAPAIEAAAAAzMzM/////wAAACwAAAAADwAPAAAIcQABCBxIsODAAAACAAgAIACAAAAiSgwAIACAAAACAAgAoGPHACBDigwAoKTJkyhTqlwpQACAlwIEAJhJc6YAAQByChAAoKfPn0CDCh1KtKhRAAEAKF0KIACApwACBAAQIACAqwECAAgQAIDXr2DDAggIADs=
--XX
Content-Disposition: form-data; name="submit"
Submit
--XX--
```
and then run like:
```
curl --config myconfig.txt "http://127.0.0.1:8080/_ah/upload/..."
```
You'll need to create/mock-up the multipart form in your client.
Also, as an alternative to Blobstore, you can use Cloud Storage if you want to save a little on storage costs or have some need to share the data without your API. Follow the documentation for [Setting Up Google Cloud Storage](https://cloud.google.com/appengine/docs/standard/python/googlecloudstorageclient/setting-up-cloud-storage), and then modify your service to create the upload URL for your bucket of choice:
```
create_upload_url(gs_bucket_name=...)
```
It's a little more complicated than just that, but reading the section *Using the Blobstore API with Google Cloud Storage* in the Blobstore document will get you pointed in the right direction.
**POST base64 directly to your service/handler**
Kind of like you coded in the original post, your service receives the POST from your client and you then decide if you need to manipulate the image and where you want to store it (Datastore, Blobstore, Cloud Storage).
If you need to manipulate the image, then using PIL is good:
```
import base64
from io import BytesIO
from PIL import Image
from StringIO import StringIO
data = self.request.body
#file_name = data['file_name']
img_data = data.split('data:image/png;base64,')[1]
# Decode base64 and open as Image
img = Image.open(BytesIO(base64.b64decode(img_data)))
# Create thumbnail
img.thumbnail((128, 128))
# Save img output as blob-able string
output = StringIO()
img.save(output, format=img.format)
img_blob = output.getvalue()
# now you choose how to save img_blob
```
If you don't need to manipulate the image, just stop at `b64decode()`:
```
img_blob = base64.b64decode(img_data)
```
|
An image object (<https://cloud.google.com/appengine/docs/standard/python/refdocs/google.appengine.api.images>) isn't a Datastore entity, so it has no key. You need to actually save the image to blobstore[2] or Google Cloud Storage[1] then get a serving url for your image.
[1] <https://cloud.google.com/appengine/docs/standard/python/googlecloudstorageclient/setting-up-cloud-storage>
[2] <https://cloud.google.com/appengine/docs/standard/python/blobstore/>
| 7,976
|
59,156,316
|
I have a python code like this to interact with an API
```
from oauthlib.oauth2 import BackendApplicationClient
from requests_oauthlib import OAuth2Session
import json
from pprint import pprint
key = "[SOME_KEY]" # FROM API PROVIDER
secret = "[SOME_SECRET]" # FROM API PROVIDER
api_client = BackendApplicationClient(client_id=key)
oauth = OAuth2Session(client=api_client)
url = "[SOME_URL_FOR_AN_API_ENDPOINT]"
# GETTING TOKEN AFTER PROVIDING KEY AND SECRET
token = oauth.fetch_token(token_url="[SOME_OAUTH_TOKEN_URL]", client_id=key, client_secret=secret)
# GENERATING AN OAuth2Session OBJECT; WITH THE TOKEN:
client = OAuth2Session(key, token=token)
body = {
"key1": "value1",
"key2": "value2",
"key3": "value3"
}
response = client.post(url, data=json.dumps(body))
pprint(response.json())
```
When I run this .py file, I get the response below from the API, saying **that I have to include the content type in the header**. How do I include the header with `OAuth2Session`?
```
{'detailedMessage': 'Your request was missing the Content-Type header. Please '
'add this HTTP header and try your request again.',
'errorId': '0a8868ec-d9c0-42cb-9570-59059e5b39a9',
'simpleMessage': 'Your field could not be created at this time.',
'statusCode': 400,
'statusName': 'Bad Request'}
```
|
2019/12/03
|
[
"https://Stackoverflow.com/questions/59156316",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11719873/"
] |
Have you tried to send a header parameter with this request?
```py
headers = {"Content-Type": "application/json"}
response = client.post(url, data=json.dumps(body), headers=headers)
```
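Alternatively, since `OAuth2Session` subclasses `requests.Session`, you can let `requests` set that header for you with the `json=` parameter, e.g.:
```py
# requests serializes the body and sets Content-Type: application/json itself
response = client.post(url, json=body)
```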
|
This is how I was able to configure the POST request for exchanging the code for a token.
```py
from requests_oauthlib import OAuth2Session
from oauthlib.oauth2 import WebApplicationClient, BackendApplicationClient
from requests.auth import HTTPBasicAuth
client_id = CLIENT_ID
client_secret = CLIENT_SECRET
authorization_base_url = AUTHORIZE_URI
token_url = TOKEN_URI
redirect_uri = REDIRECT_URI
auth = HTTPBasicAuth(client_id, client_secret)
scope = SCOPE
header = {
'User-Agent': 'myapplication/0.0.1',
'Content-Type': 'application/x-www-form-urlencoded',
'Accept': 'application/json',
}
# Create the Authorization URI
# Not included here but store the state in a safe place for later
the_first_session = OAuth2Session(client_id=client_id, redirect_uri=redirect_uri, scope=scope)
authorization_url, state = the_first_session.authorization_url(authorization_base_url)
# Browse to the Authorization URI
# Login and Auth with the OAuth provider
# Now to respond to the callback
the_second_session = OAuth2Session(client_id, state=state)
body = 'grant_type=authorization_code&code=%s&redirect_uri=%s&scope=%s' % (request.GET.get('code'), redirect_uri, scope)
token = the_second_session.fetch_token(token_url, code=request.GET.get('code'), auth=auth, headers=header, body=body)
```
| 7,977
|
31,932,371
|
I have both Python 2.7 and Python 3.4 installed on my MacBook, as both are needed sometimes.
Python 2.7 is shipped by Apple itself.
Python 3.4 is installed by Mac OS X 64-bit/32-bit installer in the link
<https://www.python.org/downloads/release/python-343/>
Here is how I installed Meld on Mac OS X 10.10:
1. Install apple development command line tools;
2. Install Homebrew;
3. `homebrew install homebrew/x11/meld`
4. When launched meld, it says:
```
**bash: /usr/local/bin/meld: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory**
```
From my research, some people recommend modifying the first line in `/usr/local/bin/pip`, i.e.,
```none
#!/usr/local/opt/python/bin/python2.7
```
This file is missing. However, if I want to be able to use both Python 2.7 and Python 3.4 for Meld, what should I do to make it work?
|
2015/08/11
|
[
"https://Stackoverflow.com/questions/31932371",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4778233/"
] |
This should solve the problem: `brew link --overwrite python`
|
I use Linux. I have no clue about Apple and the special things they do. But judging from the error messages given it seems that
1. bash starts *the meld program*
2. *the meld program* refers to *the python program in **someplace***
3. **someplace** is wrong in *the meld program*, causing the bash error message
Now here is how you debug and solve such problems:
1. open a terminal (black window where you can enter commands)
2. find out where *the python program* is installed by entering `which python` into the terminal window.
```
$ which python
/usr/bin/python
```
In my case we can see that python is installed in `/usr/bin/python`
3. Find out where *the meld program* is installed. In the original question this is in `/usr/local/bin/meld`, as we can deduce from the error message in step 4. But we can again try the *which* command and ask where the system finds meld: `which meld`:
```
$ which meld
/usr/bin/meld
```
In my case this is in `/usr/bin/meld`.
4. now open the file that `which meld` reported in step 3 and make a backup copy of it
5. now open the file that `which meld` reported in step 3 in your editor of choice and change the first line, such that it starts with `#!` followed by the path of *the python program* from step 2. In my case the first line of meld is: `#!/usr/bin/python`. Save the changes to the file and retry starting meld. You probably need some admin access rights to be able to save the file.
1. first start it in the terminal by entering `meld`, so it is easier to read the error messages.
2. if meld started in the terminal, start meld your usual way from the GUI, and hope that it works, too
The original question mentioned several versions of python. I have also several versions installed, I can ask the system for the different versions:
```
$ which python2
/usr/bin/python2
$ which python2.7
/usr/bin/python2.7
$ which python3
/usr/bin/python3
$ which python3.4
/usr/bin/python3.4
```
If you have several versions installed: try one at a time when editing the file from step 3.
| 7,978
|
8,380,733
|
I'm trying to run a custom Django command as a scheduled task on Heroku. I am able to execute the custom command locally via `python manage.py send_daily_email`. (Note: I do NOT have any problems with the custom management command itself.)
However, Heroku is giving me the following exception when trying to "Run" the task through the Heroku Scheduler addon:
```
Traceback (most recent call last):
File "bin/send_daily_visit_email.py", line 2, in <module>
from django.conf import settings
ImportError: No module named django.conf
```
I placed a python script in **/bin/send\_daily\_email.py**, and it is the following:
```
#! /usr/bin/python
from django.conf import settings
settings.configure()
from django.core import management
management.call_command('send_daily_email') #delegates off to custom command
```
Within Heroku, however, I am able to run `heroku run bin/python`, launch the Python shell, and successfully import `settings` from `django.conf`.
I am pretty sure it has something to do with my `PYTHONPATH` or visibility of Django's `SETTINGS_MODULE`, but I'm unsure how to resolve the issue. Could someone point me in the right direction? Is there an easier way to accomplish what I'm trying to do here?
Thank you so much for your tips and advice in advance! New to Heroku! :)
**EDIT:**
Per Nix's comment, I made some adjustments, and did discover that by specifying my exact Python path, I got past the Django setup.
I now receive:
```
File "/app/lib/python2.7/site-packages/django/core/management/__init__.py", line 155, in call_command
raise CommandError("Unknown command: %r" % name)
django.core.management.base.CommandError: Unknown command: 'send_daily_email'
```
Although, I can see 'send\_daily\_email' when I run `heroku run bin/python app/manage.py`.
I'll post an update if I come across the answer.
|
2011/12/05
|
[
"https://Stackoverflow.com/questions/8380733",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/873197/"
] |
You are probably using a different interpreter.
Check to make sure the shell python is the same as the one you reference in your script, /usr/bin/python. It could be that there is a different one in your path, which would explain why it works when you run `python manage.py` but not your shell script, which explicitly references `/usr/bin/python`.
---
Typing `which python` will tell you what interpreter is being found on your path.
|
In addition, this can be resolved by adding your home directory to your Python path. A quick and unobtrusive way to accomplish that is to add it to the PYTHONPATH environment variable (the home directory is generally /app on the Heroku Cedar stack).
Add it via the heroku config command:
```
$ heroku config:add PYTHONPATH=/app
```
That should do it! For more details: <http://tomatohater.com/2012/01/17/custom-django-management-commands-on-heroku/>
| 7,980
|
5,077,765
|
I am using Google App Engine's datastore and want to retrieve an entity whose key value is written as
```
ID/Name
id=1
```
Can anyone suggest me a GQL query to view that entity in datastore admin console and also in my python program?
|
2011/02/22
|
[
"https://Stackoverflow.com/questions/5077765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/617462/"
] |
From your application use the [get\_by\_id()](http://code.google.com/intl/it/appengine/docs/python/datastore/modelclass.html#Model_get_by_id) class method of the Model:
```
entity = YourModel.get_by_id(1)
```
From Datastore viewer you should use the `KEY` function:
```
SELECT * FROM YourModel WHERE __key__ = KEY('YourModel',1)
```
|
An application can retrieve a model instance for a given Key using the [get()](http://code.google.com/appengine/docs/python/datastore/functions.html#get) function.
```
class member(db.Model):
firstName=db.StringProperty(verbose_name='First Name',required=False)
lastName=db.StringProperty(verbose_name='Last Name',required=False)
...
id = int(self.request.get('id'))
entity= member.get(db.Key.from_path('member', id))
```
I'm not sure how to return a specific entity in the admin console.
| 7,981
|
8,062,564
|
I'm trying to apply image filters using Python's [PIL](http://www.pythonware.com/products/pil/). The code is straightforward:
```
im = Image.open(fnImage)
im = im.filter(ImageFilter.BLUR)
```
This code works as expected on PNGs, JPGs and on 8-bit TIFs. However, when I try to apply this code on 16-bit TIFs, I get the following error
```
ValueError: image has wrong mode
```
Note that PIL was able to load, resize and save 16-bit TIFs without complaint, so I assume that this problem is filter-related. However, the [ImageFilter documentation](http://www.pythonware.com/library/pil/handbook/imagefilter.htm) says nothing about 16-bit support.
Is there any way to solve it?
|
2011/11/09
|
[
"https://Stackoverflow.com/questions/8062564",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17523/"
] |
Your TIFF image's mode is most likely "I;16".
In the current version of ImageFilter, kernels can only be applied to
"L" and "RGB" images (see the source of ImageFilter.py).
Try converting first to another mode:
```
im = im.convert('L')
```
If it fails, try:
```
im.mode = 'I'
im = im.point(lambda i:i*(1./256)).convert('L').filter(ImageFilter.BLUR)
```
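Put together, a minimal end-to-end sketch of that workaround (file names are placeholders, assuming a 16-bit grayscale TIFF):
```
from PIL import Image, ImageFilter

im = Image.open('input-16bit.tif')   # mode is typically 'I;16'
im.mode = 'I'                        # reinterpret so point() will accept it
im = im.point(lambda i: i * (1. / 256)).convert('L')  # scale down to 8-bit
im.filter(ImageFilter.BLUR).save('output.tif')
```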
Remark: possible duplicate of [Python and 16 Bit Tiff](https://stackoverflow.com/questions/7247371/python-and-16-bit-tiff).
|
To move ahead, try using [ImageMagick](http://www.imagemagick.org/script/index.php); look for the PythonMagick hooks to the program. On the command prompt, you can use `convert.exe image-16.tiff -blur 2x2 output.tiff`. I didn't manage to install PythonMagick on my Windows OS, as the source needs compiling.
| 7,982
|
13,103,806
|
So I have been working on a two-player "guess the number" program,
but I'm just having trouble with one thing.
So here's the code:
```
import time
import random
thenumber = random.randint(1, 10)
print "Welcome to Noah's Two Player guess a number game."
print "What is player one's name?"
player1 = raw_input()
print "What is player two's name?"
player2 = raw_input()
print "Well " + player1 + " and " + player2 + ", are you ready to play?"
choice = raw_input()
if choice == yes:
print player1 + ", pick a number from 1 to 10."
player1guess = raw_input()
print player2 + ", pick a number from 1 to 10."
player2guess = raw_input()
print "Calculating..."
time.sleep(3)
p1 = thenumber - player1guess
p2 = thenumber - player2guess
if p1 > p2:
print player1 + " won!"
elif p2 > p1:
print player2 + " won!"
```
Everything is running smoothly until I get this error:
```
Traceback (most recent call last):
File "C:\Python27\Script 1", line 11, in <module>
if choice == yes:
NameError: name 'yes' is not defined
```
To my knowledge, I don't think I have done anything wrong, but then again I am a beginner at Python.
Someone please help me with this.
EDIT: (This is Python 2.7, if it makes a difference)
|
2012/10/27
|
[
"https://Stackoverflow.com/questions/13103806",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1779877/"
] |
I guess you need quotes around `yes`: `choice == 'yes'`, otherwise Python will think that `yes` is a variable.
|
Try `if choice == 'yes':`. You're comparing it with a symbol that isn't defined.
Note there are no 'variables' in Python, only symbols and their values (with their types internally understood by the language interpreter). Think of them as names you give to various objects. They're all symbols.
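A minimal corrected sketch of that comparison (also converting the `raw_input` strings to integers, which the arithmetic later in the posted code will need):
```
choice = raw_input()
if choice == 'yes':  # compare against the string literal, not an undefined name
    player1guess = int(raw_input())  # raw_input() returns a string
    player2guess = int(raw_input())
```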
| 7,983
|
73,697,975
|
I am working on a Django application where I have used Twilio to send SMS and WhatsApp messages and the SendGrid API for sending emails. The problem occurs in scheduled messages in all three. For example, if I have scheduled an email to be sent at 06:24 PM, then I receive the email at 11:54 PM, a difference of 5 hours and 30 minutes. The same problem also arises in SMS and WhatsApp.
I think the problem might be due to the timezone. I am scheduling the messages from India and the Twilio account is in the USA. The time difference of 5 hours 30 mins matches the UTC offset, as India is 5 hours 30 mins ahead of UTC. But I haven't found any solution for this.
Also, the time is specified as UNIX time. I am getting a datetime object whose value is like "2022-09-13 11:44:00". I am converting it to Unix time with Python's "time" module. Hence the code looks like:
```
message = MAIL(...)
finalScheduleTime = int(time.mktime(scheduleTime.timetuple()))
message.send_at = finalScheduleTime
```
In the above code, scheduleTime is the datetime object.
So, is there any specific way to solve this problem?
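For what it's worth, one common fix is to make the datetime timezone-aware before converting, so the Unix timestamp reflects the intended wall-clock time. A minimal sketch, assuming the naive `scheduleTime` is Indian local time and Python 3.9+ for `zoneinfo`:
```
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

scheduleTime = datetime(2022, 9, 13, 11, 44)  # naive value as above

# Attach the intended timezone, then convert to a Unix timestamp
aware = scheduleTime.replace(tzinfo=ZoneInfo("Asia/Kolkata"))
finalScheduleTime = int(aware.timestamp())
```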
|
2022/09/13
|
[
"https://Stackoverflow.com/questions/73697975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19663784/"
] |
If you need to replace missing values in the `country` column with the last value after splitting `city` by `,`, use:
```
df['country'] = df['country'].fillna(df['city'].str.split(',').str[-1])
```
Or if you need to assign the whole column to the `country` column:
```
df['country'] = df['city'].str.split(',').str[-1]
```
|
You can use [`str.extract`](https://pandas.pydata.org/docs/reference/api/pandas.Series.str.extract.html) with a word regex `\w+` anchored to the end of the string (`$`) to get the last word:
```
# replacing all values
df['country'] = df['city'].str.extract('(\w+)$', expand=False)
# only updating NaNs
df.loc[df['country'].isna(), 'country'] = df['city'].str.extract('(\w+)$', expand=False)
```
output:
```
city country
0 Toronto,Canada Canada
```
Alternatively, you can use the `([^,]+)$` regex, which is more permissive (it matches any trailing characters except `,`).
| 7,984
|
71,431,272
|
I'm having a problem with calling my TextInputs from one class in another. I have a special class inheriting from TextInput, which makes moving with arrows possible. In my MainScreen I want to grab the letters from the TextInputs and later on do something with them; however, I need to pass them into this class first. I don't know how I am supposed to do that. I checked, and it's because I am unable to find any ids under the id1, id2, ... keys, but I don't know why, or how to overcome this.
***.py***
```
import kivy
import sys
kivy.require('1.11.1')
sys.path.append("c:\\users\\dom\\appdata\\local\\programs\\python\\python39\\lib\\site-packages")
from kivy.app import App
from kivy.config import Config
from kivy.core.window import Window
from kivy.uix.widget import Widget
from kivy.lang import Builder
from kivy.uix.relativelayout import RelativeLayout
from kivy.uix.gridlayout import GridLayout
from kivy.uix.screenmanager import ScreenManager, Screen
from kivy.uix.textinput import TextInput
from kivy.uix.label import Label
from kivy.app import App
from random import choice
import math
from kivy.properties import StringProperty
import string
letters= string.ascii_lowercase
Builder.load_file("GuessWord.kv")
def set_color(letter, color):
COLORS_DICT = {"W" : "ffffff", "Y" : "ffff00", "G" : "00ff00"}
c = COLORS_DICT.get(color, "W")
return f"[color={c}]{letter}[/color]"
class MyTextInput(TextInput):
focused = 'id1'
def change_focus(self, *args):
app = App.get_running_app()
if app.root is not None:
# Now access the container.
layout = app.root.ids["layout"]
# Access the required widget and set its focus.
print("Changefocus", MyTextInput.focused)
layout.ids[MyTextInput.focused].focus = True
def keyboard_on_key_down(self, window, keycode, text, modifiers):
focusedid = int(MyTextInput.focused[2])
if keycode[1] == "backspace":
if self.text=="":
if int(MyTextInput.focused[2]) > 1:
focusedid -= 1
MyTextInput.focused = "id" + str(focusedid)
else:
self.text = self.text[:-1]
if keycode[1] == "right":
if int(MyTextInput.focused[2]) < 5:
focusedid += 1
MyTextInput.focused = "id" + str(focusedid)
elif int(MyTextInput.focused[2]) == 5:
MyTextInput.focused = "id" + str(1)
elif keycode[1] == "left":
if int(MyTextInput.focused[2]) > 1:
focusedid -= 1
MyTextInput.focused = "id" + str(focusedid)
elif int(MyTextInput.focused[2]) == 1:
MyTextInput.focused = "id" + str(5)
elif keycode[1] in letters:
if int(MyTextInput.focused[2]) < 5:
self.text=text
focusedid += 1
MyTextInput.focused = "id" + str(focusedid)
self.change_focus()
print("After changing", MyTextInput.focused)
return True
#TextInput.keyboard_on_key_down(self, window, keycode, text, modifiers)
class MainScreen(Widget):
def GetLetters(self):
app = App.get_running_app()
if app.root is not None:
t1=app.root.ids['id1']
t2=app.root.ids['id2']
t3=app.root.ids['id3']
t4=app.root.ids['id4']
t5=app.root.ids['id5']
listofletters=[t1,t2,t3,t4,t5]
for letter in listofletters:
print(letter.text)
class TestingappApp(App):
def build(self):
return MainScreen()
TestingappApp().run()
```
***.kv***
```
<MainScreen>:
BoxLayout:
size: root.size
orientation: "vertical"
CustomBox:
id: layout
cols:5
Button:
pos_hint: {'center_x':0.5,'y':0.67}
size_hint: 0.5,0.1
text: "Click here to get letters!"
on_release: root.GetLetters()
<CustomBox@BoxLayout>:
MyTextInput:
id: id1
focus: True
focused: "id1"
multiline: False
MyTextInput:
id: id2
focused: "id2"
multiline: False
MyTextInput:
id: id3
focused: "id3"
multiline: False
MyTextInput:
id: id4
focused: "id4"
multiline: False
MyTextInput:
id: id5
focused: "id5"
multiline: False
```
|
2022/03/10
|
[
"https://Stackoverflow.com/questions/71431272",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15846202/"
] |
I finally figured this one out. I ended up using a callback in the `onchange` attribute that sets the value of the current property in the loop.
```
<div class="form-section">
@foreach (var property in DataModel.GetType().GetProperties())
{
var propertyString = $"DataModel.{property.Name.ToString()}";
@if (property.PropertyType == typeof(DateTime))
{
<input type="date" id="@property.Name" @onchange="@(e => property.SetValue(dataModel, DateTime.Parse(e.Value.ToString())))" />
}
else if (property.PropertyType == typeof(int))
{
<input type="number" id="@property.Name" @onchange="@(e => property.SetValue(dataModel, Int32.Parse(e.Value.ToString())))" />
}
else
{
<input type="number" id="@property.Name" @onchange="@(e => property.SetValue(dataModel, Decimal.Parse(e.Value.ToString())))" />
}
}
</div>
```
|
I think what @nocturns2 said is correct; you could try this code:
```
@if (property.PropertyType == typeof(DateTime))
{
DateTime.TryParse(YourPropertyString, out var parsedValue);
var result = (YourType)(object)parsedValue;
<InputDate id="@property.Name" @bind-Value="@result" />
}
```
| 7,985
|
22,727,782
|
I'd previously used Anaconda to handle Python, but I'm starting to work with virtual environments.
I set up virtualenv and virtualenvwrapper, and have been trying to add modules, specifically scrapy and lxml, for a project I want to try.
Each time I pip install, I hit an error.
For scrapy:
```
File "/home/philip/Envs/venv/local/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 1003, in run_setup
raise DistutilsError("Setup script exited with %s" % (v.args[0],))
distutils.errors.DistutilsError: Setup script exited with error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
---------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /home/philip/Envs/venv/build/cryptography
Storing debug log for failure in /home/philip/.pip/pip.log
```
For lxml:
```
In file included from src/lxml/lxml.etree.c:346:0:
/home/philip/Envs/venv/build/lxml/src/lxml/includes/etree_defs.h:9:31: fatal error: libxml/xmlversion.h: No such file or directory
include "libxml/xmlversion.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Cleaning up... Command /home/philip/Envs/venv/bin/python -c "import setuptools, tokenize;__file__='/home/philip/Envs/venv/build/lxml/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-zIsPdl-record/install-record.txt
--single-version-externally-managed --compile --install-headers /home/philip/Envs/venv/include/site/python2.7 failed with error code 1 in /home/philip/Envs/venv/build/lxml Storing debug log for failure in /home/philip/.pip/pip.log
```
I tried to install it following [scrapy's documentation](http://doc.scrapy.org/en/latest/topics/ubuntu.html#topics-ubuntu), but scrapy was still not listed when I called for python's installed modules.
Any ideas? Thanks--really appreciate it!
I'm on Ubuntu 13.10 if it matters. Other modules I've tried have installed fine (though I've only gone for a handful).
|
2014/03/29
|
[
"https://Stackoverflow.com/questions/22727782",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3115915/"
] |
I had the same problem in Ubuntu 14.04. I solved it by following the instructions on the page linked by @jdigital and adding the openssl-dev library pointed out by @user3115915. Just to help others:
```
sudo apt-get install libxslt1-dev libxslt1.1 libxml2-dev libxml2 libssl-dev
sudo pip install scrapy
```
|
In my case, I solved the problem by installing all the libraries that Manuel mentions, plus one extra library: libffi-dev
<https://askubuntu.com/questions/499714/error-installing-scrapy-in-virtualenv-using-pip>
| 7,987
|
65,093,883
|
I often debug my python code by plotting NumPy arrays in the vscode debugger.
Often I spend more than 3s looking at a plot. When I do, VS Code prints the extremely
long warning below. It's very annoying because I then have to scroll up a lot
all the time to see previous debugging outputs. Where is this PYDEVD\_WARN\_EVALUATION\_TIMEOUT
variable? How do I turn this off?
I included the warning below for completeness, thanks a lot for your help!
>
> Evaluating: plt.show() did not finish after 3.00s seconds.
> This may mean a number of things:
>
>
> * This evaluation is really slow and this is expected.
> In this case it's possible to silence this error by raising the timeout, setting the
> PYDEVD\_WARN\_EVALUATION\_TIMEOUT environment variable to a bigger value.
> * The evaluation may need other threads running while it's running:
> In this case, it's possible to set the PYDEVD\_UNBLOCK\_THREADS\_TIMEOUT
> environment variable so that if after a given timeout an evaluation doesn't finish,
> other threads are unblocked or you can manually resume all threads.
>
>
> Alternatively, it's also possible to skip breaking on a particular thread by setting a
> `pydev_do_not_trace = True` attribute in the related threading.Thread instance
> (if some thread should always be running and no breakpoints are expected to be hit in it).
> * The evaluation is deadlocked:
> In this case you may set the PYDEVD\_THREAD\_DUMP\_ON\_WARN\_EVALUATION\_TIMEOUT
> environment variable to true so that a thread dump is shown along with this message and
> optionally, set the PYDEVD\_INTERRUPT\_THREAD\_TIMEOUT to some value so that the debugger
> tries to interrupt the evaluation (if possible) when this happens.
>
>
>
|
2020/12/01
|
[
"https://Stackoverflow.com/questions/65093883",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8726027/"
] |
I found a way to adapt the launch.json that takes care of this problem.
```
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"env": {"DISPLAY":":1",
"PYTHONPATH": "${workspaceRoot}",
"PYDEVD_WARN_EVALUATION_TIMEOUT": "500"},
"cwd": "${workspaceFolder}",
"console": "integratedTerminal"
}
]
}
```
|
If you are keen to suppress the warning, you can do it as described in the Python documentation, section 28.6.3:
<https://docs.python.org/2/library/warnings.html#temporarily-suppressing-warnings>
Here's the code in case the link dies in the future.
```
import warnings
def fxn():
warnings.warn("deprecated", DeprecationWarning)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
fxn()
```
You should be ready to go with a simple copy paste.
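Applied to the plotting call from the question, the approach would look like this (note this only silences genuine Python warnings raised inside the block, which is an assumption about what is being printed):
```
import warnings
import matplotlib.pyplot as plt

with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # suppress warnings raised inside this block
    plt.show()
```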
| 7,990
|
48,729,915
|
I am trying to read a `png` image in python. The `imread` function in `scipy` is being [deprecated](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.imread.html#scipy.ndimage.imread) and they recommend using `imageio` library.
However, I would rather restrict my usage of external libraries to the `scipy`, `numpy` and `matplotlib` libraries. Thus, using `imageio` or `scikit-image` is not a good option for me.
Are there any methods in python or `scipy`, `numpy` or `matplotlib` to read images, which are not being deprecated?
|
2018/02/11
|
[
"https://Stackoverflow.com/questions/48729915",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5495304/"
] |
With matplotlib you can use (as shown in the matplotlib [documentation](https://matplotlib.org/2.0.0/users/image_tutorial.html))
```
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img=mpimg.imread('image_name.png')
```
And plot the image if you want
```
imgplot = plt.imshow(img)
```
|
From [documentation](https://matplotlib.org/3.1.1/api/image_api.html?highlight=matplotlib%20image#module-matplotlib.image.imread):
>
> Matplotlib can only read PNGs natively. Further image formats are supported via the optional dependency on Pillow.
>
>
>
So in case of `PNG` we may use `plt.imread()`. In other cases it's probably better to use `Pillow` directly.
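A minimal sketch of the PNG case (the filename is illustrative):
```
import matplotlib.pyplot as plt

img = plt.imread('image_name.png')  # PNGs load natively as a float NumPy array
print(img.shape, img.dtype)
```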
| 7,992
|
34,086,675
|
I would like to slice an array `a` in Julia in a loop in such a way that it's divided in chunks of `n` samples. The length of the array `nsamples` is *not* a multiple of `n`, so the last stride would be shorter.
My attempt would be using a ternary operator to check if the size of the stride is greater than the length of the array:
```
for i in 0:n:nsamples-1
end_ = i+n < nsamples ? i+n : end
window = a[i+1:end_]
end
```
In this way, `a[i+1:end_]` would resolve to `a[i+1:end]` if I'm exceeding the size of the array.
However, the use of the keyword "end" in line 2 is not acceptable (it's also the keyword for "end of control statement" in Julia).
In python, I can assign `None` to `end_` and this would resolve to `a[i+1:None]`, which will be the end of the array.
How can I get around this?
|
2015/12/04
|
[
"https://Stackoverflow.com/questions/34086675",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/277113/"
] |
The `end` keyword is only given this kind of special treatment inside of indexing expressions, where it evaluates to the last index of the dimension being indexed. You could put it inside with e.g.
```
for i in 0:n:nsamples-1
window = a[i+1:min(i+n, end)]
end
```
Or you could just use `length(a)` (or `nsamples`, I guess they are the same?) instead of `end` to make it clear which `end` you are referring to.
|
Ugly way:
```
a=rand(7);
nsamples=7;
n=3;
for i in 0:n:nsamples-1
end_ = i+n < nsamples ? i+n : :end
window = @eval a[$i+1:$end_]
println(window)
end
```
Better solution:
```
for i in 0:n:nsamples-1
window = i+n < nsamples ? a[i+1:i+n] : a[i+1:end]
println(window)
end
```
| 8,002
|
43,721,155
|
I'm trying to close each image opened via iteration, within each iteration.
I've referred to this thread below, but the correct answer is not producing the results.
[How do I close an image opened in Pillow?](https://stackoverflow.com/questions/31751464/how-do-i-close-an-image-opened-in-pillow)
My code
```
for i in Final_Bioteck[:5]:
with Image.open('{}_screenshot.png'.format(i)) as test_image:
test_image.show()
time.sleep(2)
```
I also tried,
`test_image.close()` , but no result.
My loop above is opening 5 Windows Photos Viewer dialogs; I was hoping that through iteration, each window would be closed.
I saw this thread as well, but the answers are pretty outdated, so I'm not sure if there is a simpler way to do what I want.
[How can I close an image shown to the user with the Python Imaging Library?](https://stackoverflow.com/questions/6725099/how-can-i-close-an-image-shown-to-the-user-with-the-python-imaging-library)
Thank you =)
|
2017/05/01
|
[
"https://Stackoverflow.com/questions/43721155",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6802252/"
] |
Got it working, but I installed a different image viewer on Windows as I couldn't find the .exe of the default viewer.
```
import webbrowser
import subprocess
import os, time
for i in Final_Bioteck[6:11]:
webbrowser.open( '{}.png'.format(i)) # opens the pic
time.sleep(3)
subprocess.run(['taskkill', '/f', '/im', "i_view64.exe"])
    # taskkill kills the process: '/f' forces termination, '/im' matches by image (executable) name, and i_view64.exe is the image viewer's exe file.
```
|
In Windows 10, the process is dllhost.exe
using the same script as Moondra, except with "dllhost.exe" instead of "i\_view64.exe"
```
import webbrowser
import subprocess
import os, time
for i in Final_Bioteck[6:11]:
webbrowser.open( '{}.png'.format(i)) # opens the pic
time.sleep(3)
subprocess.run(['taskkill', '/f', '/im', "dllhost.exe"])
    # taskkill kills the process: '/f' forces termination, '/im' matches by image (executable) name, and "dllhost.exe" is the image viewer's exe file.
```
| 8,005
|
32,004,317
|
I am working on a Python GUI for serial communication with some hardware, using a USB-RS232 converter. I don't want the user to look for the hardware's COM port in Device Manager and then select the port number in the GUI for communication. How can my Python code automatically get the port number for that particular USB device? I can connect my hardware to the same port every time, but what happens if I run the GUI on some other PC? You can suggest any other solution for this.
Thanks in advance!!!
|
2015/08/14
|
[
"https://Stackoverflow.com/questions/32004317",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5036147/"
] |
pyserial can list the ports with their USB VID:PID numbers.
```
from serial.tools import list_ports
list_ports.comports()
```
This function returns a list of tuples; the 3rd item of each tuple is a string that may contain the USB VID:PID number. You can parse it from there. Or better, you can use the `grep` function also provided by the `list_ports` module:
```
list_ports.grep("6157:9988")
```
This one returns a generator object which you can iterate over. If it's unlikely that there are 2 devices connected with the same VID:PID (I wouldn't assume that, but for testing purposes its okay) you can just do this:
```
my_port_name = list(list_ports.grep("0483:5740"))[0][0]
```
Documentation for [list\_ports](http://pyserial.readthedocs.io/en/latest/tools.html#module-serial.tools.list_ports).
Note: I've tested this on linux only. pyserial documentation warns that on some systems hardware ids may not be listed.
|
I assume that you are specifically looking for a COM port that is described as being a USB to RS232 in the device manager, rather than wanting to list all available COM ports?
Also, you have not mentioned what OS you are developing on, or the version of Python you are using, but this works for me on a Windows system using Python 3.4:
```
import serial.tools.list_ports
def serial_ports():
# produce a list of all serial ports. The list contains a tuple with the port number,
# description and hardware address
#
ports = list(serial.tools.list_ports.comports())
# return the port if 'USB' is in the description
for port_no, description, address in ports:
if 'USB' in description:
return port_no
```
| 8,006
|
14,004,839
|
I have Flask, Babel and Flask-Babel installed in the global packages.
When running python and I type this, no error
```
>>> from flaskext.babel import Babel
>>>
```
With a virtual environment, starting python and typing the same command I see
```
>>> from flaskext.babel import Babel
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named flaskext.babel
>>>
```
The problem is that I'm using Ninja-IDE and I'm apparently forced to use a virtualenv. I don't mind as long as it doesn't break Flask's packaging system.
|
2012/12/22
|
[
"https://Stackoverflow.com/questions/14004839",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/75517/"
] |
I think you're supposed to import Flask extensions like the following from version 0.8 onwards:
```
from flask.ext.babel import Babel
```
I tried the old way (`import flaskext.babel`), and it didn't work for me either.
|
The old way of importing Flask extension was like:
```
import flaskext.babel
```
[Namespace packages](https://stackoverflow.com/questions/1675734/how-do-i-create-a-namespace-package-in-python) were, however, "too painful for everybody involved", so now Flask extensions should be importable like:
```
import flask_babel
```
[`flask.ext`](https://github.com/mitsuhiko/flask/blob/master/flask/ext/__init__.py) is a special package. If you `import flask.ext.babel`, it will try out both of the above variants, so it should work in any case.
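A minimal compatibility sketch that prefers the modern name and falls back to the legacy import (assuming at least one of the two layouts is installed):
```
try:
    from flask_babel import Babel        # modern package name
except ImportError:
    from flask.ext.babel import Babel    # legacy namespace-package import
```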
| 8,007
|
46,515,990
|
Can somebody please help me to create a python program whereby the unsorted list is split up into groups of 2, arranged alphabetically within their groups of two. The program should then create a new list in alphabetical order by taking the next greatest letter from the correct pair. Please don't tell me to do this in a different way as my method must take place as is written above. Thanks :)
```
unsorted = ['B', 'D', 'A', 'G', 'F', 'E', 'H', 'C']
n = 4
num = float(len(unsorted))/n
l = [ unsorted [i:i + int(num)] for i in range(0, (n-1)*int(num), int(num))]
l.append(unsorted[(n-1)*int(num):])
print(l)
complete = unsorted.split()
print(complete)
```
|
2017/10/01
|
[
"https://Stackoverflow.com/questions/46515990",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8705227/"
] |
In your code, the opening tag `<tr>` is not added before the first `<td>`. You are appending HTML twice. You need to form the correct HTML and then add it to the table after the for loop. Also, you don't need ';' semicolons at the end of condition and standard function definition code blocks.
```js
function myFunction() {
var response = "[\r\n {\r\n \"ID\": \"1\",\r\n \"Title\": \"title 1 \",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/1/102.jpg\",\r\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"2\",\r\n \"Title\": \"title 2\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/2/102.jpg\",\r\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"3\",\r\n \"Title\": \"title 3\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/3/102.jpg\",\r\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"4\",\r\n \"Title\": \"title 4\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/4/102.jpg\",\r\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"5\",\r\n \"Title\": \"title 5\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/5/102.jpg\",\r\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"6\",\r\n \"Title\": \"title 6\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/6/102.jpg\",\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"7\",\r\n \"Title\": \"title 7\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/7/102.jpg\",\r\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"8\",\r\n \"Title\": \"title 8\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/8/102.jpg\",\r\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"9\",\r\n \"Title\": \"title 9\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/9/102.jpg\",\r\n \"Note\": null\r\n }\r\n]";
var json = $.parseJSON(response);
var html = '';
var x = 0; // x is number of cells per row maximum 2 cells per row
for (i in json) {
// create HTML code
var div = "<div class=\"image\">\n" +
"<a href=\"javascript:doIt('" + json[i].ID + "')\">\n" +
"<img src=\"" + json[i].ImageUrl + "\" alt=\"\" />" + json[i].Title + "\n" +
"</a>\n" +
"</div>\n";
//alert("x is:"+x);
div = '<td>' + div + '</td>\n';
if (x === 0) {
html += '<tr>' + div;
x++;
} else if (x === 1) {
html += div;
x++;
} else if (x === 2) {
html += div + '</tr>';
x = 0;
}
} //end of for loop
$('#demo > tbody').append(html);
}
```
```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<body onload="myFunction()">
<div class="scroller">
<br>
<table id="demo" cellspacing="0" border="1" style="display: visible;">
<thead>
<th>A</th>
<th>B</th>
<th>C</th>
</thead>
<tbody>
</tbody>
</table>
</div>
</body>
```
|
Thought I would offer a different solution.
<https://jsfiddle.net/wfc9p0e8/>
```
<html>
<head>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>
</head>
<body>
<style>
#table{ display: table; width:100%; }
#table .table-cell { display: inline-table; width:33.33%; }
</style>
<div class="scroller">
<div id="table"></div>
</div><!--scroller-->
<script>
$(document).ready(function(){
var response= "[\r\n {\r\n \"ID\": \"1\",\r\n \"Title\": \"title 1 \",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/1/102.jpg\",\r\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"2\",\r\n \"Title\": \"title 2\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/2/102.jpg\",\r\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"3\",\r\n \"Title\": \"title 3\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/3/102.jpg\",\r\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"4\",\r\n \"Title\": \"title 4\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/4/102.jpg\",\r\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"5\",\r\n \"Title\": \"title 5\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/5/102.jpg\",\r\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"6\",\r\n \"Title\": \"title 6\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/6/102.jpg\",\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"7\",\r\n \"Title\": \"title 7\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/7/102.jpg\",\r\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"8\",\r\n \"Title\": \"title 8\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/8/102.jpg\",\r\n \"Note\": null\r\n },\r\n {\r\n \"ID\": \"9\",\r\n \"Title\": \"title 9\",\r\n \"Text\": \"\",\r\n \"ImageUrl\": \"http://awebsite/imagges/9/102.jpg\",\r\n \"Note\": null\r\n }\r\n]";
$.each(JSON.parse(response), function(i, value){
var html = '<div class="table-cell"><div class="image"><a href=javascript:doIt("'+ value.ID +'")><img src="'+ value.ImageUrl +'" alt="'+ value.Title +'"></a></div></div>';
$('#table').append(html);
})
})//doc ready
</script>
</body>
</html>
```
| 8,013
|
38,145,706
|
I'm using PyInstaller 3.2 to package a Web.py app. Typically, with Web.py and the built-in WSGI [server](http://webpy.org/cookbook/ssl), you specify the port on the command line, like
```
$ python main.py 8091
```
Would run the Web.py app on port 8091 (default is 8080). I'm bundling the app with PyInstaller via a spec file, but I can't figure out how to specify the port number with that -- passing in Options only seems to work for the [3 given ones in the docs](http://pythonhosted.org/PyInstaller/spec-files.html). I've tried:
```
exe = EXE(pyz,
a.scripts,
[('8091', None, 'OPTION')],
a.binaries,
a.zipfiles,
a.datas,
name='main',
debug=False,
strip=False,
upx=True,
console=False )
```
But that doesn't seem to do anything. I didn't see anything else in the docs -- is there another way to bundle / specify / include command-line arguments to the PyInstaller spec file?
|
2016/07/01
|
[
"https://Stackoverflow.com/questions/38145706",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1370384/"
] |
So very hacky, but what I wound up doing was to just append an argument in `sys.argv` in my web.py app...
```
sys.argv.append('8888')
app.run()
```
I also thought in my `spec` file I could just do:
```
a = Analysis(['main.py 8888'],
```
But that didn't work at all.
|
`options` argument in EXE is only for the python interpreter ([ref](https://pythonhosted.org/PyInstaller/spec-files.html#giving-run-time-python-options))
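Since the spec file can't embed program arguments, a slightly safer variant of the `sys.argv` workaround above is to append the default port only when none was given (the port value "8080" is an assumption):
```
import sys

# web.py reads the port from sys.argv, so supply a default only when missing
if len(sys.argv) < 2:
    sys.argv.append("8080")  # assumed default port

app.run()  # `app` is the web.py application object from the question's setup
```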
| 8,014
|
65,919,766
|
I am using python 3.8.3 version.
I installed folium typing `pip install folium` in the command line. After typing `pip show folium` in the command line, the output is as follows:
```
Name: folium
Version: 0.12.1
Summary: Make beautiful maps with Leaflet.js & Python
Home-page: https://github.com/python-visualization/folium
Author: Rob Story
Author-email: wrobstory@gmail.com
License: MIT
Location: c:\users\koryun\appdata\local\programs\python\python38-32\lib\site-packages
Requires: requests, jinja2, branca, numpy
Required-by:
```
When I type `import folium` in VS code, I get an `ModuleNotFoundError: No module named 'folium'` error.
What can I do to solve this issue?
|
2021/01/27
|
[
"https://Stackoverflow.com/questions/65919766",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14127138/"
] |
To avoid this kind of error, always use a virtualenv.
Take a look here:
<https://docs.python.org/3/library/venv.html>
|
Try restarting VS Code; sometimes the Python extension needs a restart so newly installed modules are indexed.
You can also try running the code despite the error shown in VS Code; it works if the required module is properly installed.
| 8,015
|
62,339,871
|
**The question is this:**
We add a Leap Day on February 29, almost every four years. The leap day is an extra, or intercalary day and we add it to the shortest month of the year, February.
In the Gregorian calendar three criteria must be taken into account to identify leap years:
A year that is evenly divisible by 4 is a leap year, unless:
it is also evenly divisible by 100, in which case it is NOT a leap year, unless:
it is also evenly divisible by 400, in which case it IS a leap year.
This means that in the Gregorian calendar, the years 2000 and 2400 are leap years, while 1800, 1900, 2100, 2200, 2300 and 2500 are NOT leap years.
**This is what I've coded in python 3**
```
def is_leap(year):
leap = False
# Write your logic here
if((year%4==0) |(year%100==0 & year%400==0)):
leap= True
else:
leap= False
return leap
```
`year = int(input())`
`print(is_leap(year))`
This code fails for **input 2100**. I'm not able to point out the mistake. Help please.
|
2020/06/12
|
[
"https://Stackoverflow.com/questions/62339871",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13732680/"
] |
First of all, you are using bitwise operators **|** and **&** (you can read about it here - <https://www.educative.io/edpresso/what-are-bitwise-operators-in-python>), but you need to use logical operators, such as **or** and **and**.
Also, your code can be simplified:
```
def is_leap(year):
return (year % 4 == 0) and ((year % 100 != 0) or (year % 400 == 0))
```
|
try this:
```
def leap_year(n):
if (n%100==0 and n%400==0):
return True
elif (n%4==0 and n%100!=0):
return True
else:
return False
```
| 8,020
|
42,742,499
|
PEP [3141](https://www.python.org/dev/peps/pep-3141/) defines a numerical hierarchy with `Complex.__add__` but no `Number.__add__`. This seems to be a weird choice, since the other numeric type `Decimal` that (virtually) derives from `Number` also implements an add method.
So why is it this way? If I want to add type annotations or assertions to my code, should I use `x:(Complex, Decimal)`? Or `x:Number` and ignore the fact that this declaration is practically meaningless?
|
2017/03/12
|
[
"https://Stackoverflow.com/questions/42742499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4133053/"
] |
It's because you are incrementing twice in the loop.
Remove the last i++ and it works fine.
|
The form of this for loop is to always increment the counter variable after the final statement or function has executed or returned, respectively. So, any incrementation of 'i' in the loop body, in this case, will add 1 to the value of the for loop counter, corrupting the count.
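A minimal Python illustration of that double-increment bug (the original loop isn't shown, so this is a stand-in):
```
i = 0
while i < 10:
    print(i)
    i += 1  # the intended once-per-iteration increment
    i += 1  # the extra increment: every other value gets skipped
```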
| 8,021
|
59,600,235
|
Tell me please, what am I doing wrong?
I try to drag and drop through Selenium, but every time I come across an error "AttributeError: move\_to requires a WebElement"
**Here is my code:**
```
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
chromedriver = '/usr/local/bin/chromedriver'
driver = webdriver.Chrome(chromedriver)
driver.get('http://www.dhtmlgoodies.com/scripts/drag-drop-custom/demo-drag-drop-3.html')
source = driver.find_elements_by_xpath('//*[@id="box3"]')
target = driver.find_elements_by_xpath('//*[@id="box103"]')
action = ActionChains(driver)
action.drag_and_drop(source, target).perform()
```
**I also tried, like this:**
```
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
chromedriver = '/usr/local/bin/chromedriver'
driver = webdriver.Chrome(chromedriver)
driver.get('http://www.dhtmlgoodies.com/scripts/drag-drop-custom/demo-drag-drop-3.html')
source = driver.find_elements_by_xpath('//*[@id="box3"]')
target = driver.find_elements_by_xpath('//*[@id="box103"]')
ActionChains(driver).click_and_hold(source).move_to_element(target).release(target).perform()
```
**Always coming out "AttributeError: move\_to requires a WebElement"**
```
Traceback (most recent call last):
File "drag_and_drop_test.py", line 13, in <module>
ActionChains(driver).click_and_hold(source).move_to_element(target).release(target).perform()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/selenium/webdriver/common/action_chains.py", line 121, in click_and_hold
self.move_to_element(on_element)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/selenium/webdriver/common/action_chains.py", line 273, in move_to_element
self.w3c_actions.pointer_action.move_to(to_element)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/selenium/webdriver/common/actions/pointer_actions.py", line 42, in move_to
raise AttributeError("move_to requires a WebElement")
AttributeError: move_to requires a WebElement
```
|
2020/01/05
|
[
"https://Stackoverflow.com/questions/59600235",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9729098/"
] |
`find_elements_by_xpath` returns a list of `WebElement`s, `drag_and_drop` (and the other methods) accept a single `WebElement`. Use `find_element_by_xpath`
```
source = driver.find_element_by_xpath('//*[@id="box3"]')
target = driver.find_element_by_xpath('//*[@id="box103"]')
```
|
As @guy said, `find_elements_by_xpath` returns a list of `WebElement`s. You can use the `find_element_by_xpath` method to get a single web element, or select a specific element from the list returned by `find_elements_by_xpath`. For example, if you know you want to select the 2nd element of the returned list for the target, you can try this:
```
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
chromedriver = '/usr/local/bin/chromedriver'
driver = webdriver.Chrome(chromedriver)
driver.get('http://www.dhtmlgoodies.com/scripts/drag-drop-custom/demo-drag-drop-3.html')
source = driver.find_elements_by_xpath('//*[@id="box3"]')[0]
target = driver.find_elements_by_xpath('//*[@id="box103"]')[1]
action = ActionChains(driver)
action.drag_and_drop(source, target).perform()
```
I can see we are selecting elements by id, and ids are unique, so there can be only one match. So you can also do it like this:
```
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
chromedriver = '/usr/local/bin/chromedriver'
driver = webdriver.Chrome(chromedriver)
driver.get('http://www.dhtmlgoodies.com/scripts/drag-drop-custom/demo-drag-drop-3.html')
source = driver.find_element_by_id('box3')
target = driver.find_element_by_id('box103')
action = ActionChains(driver)
action.drag_and_drop(source, target).perform()
```
I like to use `find_element_by_id` because it looks cleaner to me than XPath.
| 8,024
|
19,939,365
|
I'm trying to install a module called Scrapy. I installed it using
```
pip install Scrapy
```
I see the 'scrapy' folder in my /usr/local/lib/python2.7/site-packages, but when I try to import it in a Python program, it says there is no module by that name. Any ideas as to why this might be happening?
EDIT: Here is the output of the pip command:
```
Downloading/unpacking Scrapy
Downloading Scrapy-0.20.0.tar.gz (745kB): 745kB downloaded
Running setup.py egg_info for package Scrapy
no previously-included directories found matching 'docs/build'
Requirement already satisfied (use --upgrade to upgrade): Twisted>=10.0.0 in /usr/local/lib/python2.7/site-packages (from Scrapy)
Requirement already satisfied (use --upgrade to upgrade): w3lib>=1.2 in /usr/local/lib/python2.7/site-packages (from Scrapy)
Requirement already satisfied (use --upgrade to upgrade): queuelib in /usr/local/lib/python2.7/site-packages (from Scrapy)
Requirement already satisfied (use --upgrade to upgrade): lxml in /usr/local/lib/python2.7/site-packages (from Scrapy)
Requirement already satisfied (use --upgrade to upgrade): pyOpenSSL in /usr/local/lib/python2.7/site-packages (from Scrapy)
Requirement already satisfied (use --upgrade to upgrade): cssselect>=0.9 in /usr/local/lib/python2.7/site-packages (from Scrapy)
Requirement already satisfied (use --upgrade to upgrade): zope.interface>=3.6.0 in /usr/local/lib/python2.7/site-packages (from Twisted>=10.0.0->Scrapy)
Requirement already satisfied (use --upgrade to upgrade): six>=1.4.1 in /usr/local/lib/python2.7/site-packages (from w3lib>=1.2->Scrapy)
Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/local/lib/python2.7/site-packages/setuptools-1.1.6-py2.7.egg (from zope.interface>=3.6.0->Twisted>=10.0.0->Scrapy)
Installing collected packages: Scrapy
Running setup.py install for Scrapy
changing mode of build/scripts-2.7/scrapy from 644 to 755
no previously-included directories found matching 'docs/build'
changing mode of /usr/local/bin/scrapy to 755
Successfully installed Scrapy
Cleaning up...
```
When I run /usr/local/bin/scrapy I get the usage for the command and the available commands. I noticed that I have a python2.7 and python2.7-32 in my /usr/local/bin, and I remember installing the 32 bit version because of a problem with Mavericks.
Here is the output of `python /usr/local/bin/scrapy`:
```
Traceback (most recent call last):
  File "/usr/local/bin/scrapy", line 3, in <module>
    from scrapy.cmdline import execute
ImportError: No module named scrapy.cmdline
```
And `head /usr/local/bin/scrapy`:
```
#!/usr/local/opt/python/bin/python2.7
from scrapy.cmdline import execute
execute()
```
|
2013/11/12
|
[
"https://Stackoverflow.com/questions/19939365",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/191726/"
] |
Are you using Homebrew or MacPorts or something? As @J.F.Sebastian said, it sounds like you are having issues mixing the default python that comes with OS X, and one that is installed via a package manager... Try `/usr/local/opt/python/bin/python2.7 -m scrapy` and see if that throws an `ImportError`.
If that works, then you may want to consider making *that* python executable your default. Something like `alias python2.7=/usr/local/opt/python/bin/python2.7` and then always use `python2.7` instead of the default `python`. You can likewise just point `python` to the `/urs/local...` bit, but then you won't have easy access to the system (OS X-supplied) python if you ever needed it for some reason.
|
EDIT: You can force pip to install to an alternate location. The details are here: [Install a Python package into a different directory using pip?](https://stackoverflow.com/questions/2915471/install-a-python-package-into-a-different-directory-using-pip). If you do indeed have extra Python folders on your system, maybe you can try directing scrapy to those, even if just for a temporary solution.
Can you post the output of the pip command? Perhaps it is failing somewhere?
Also, is it possible you have two versions of Python on your machine? Pip only installs to one location, but perhaps the version of Python on your path is different.
Finally, sometimes package names given to pip are not exactly the same as the name used to import. Check the documentation of the package. I took a quick look and the import should be lowercase:
```
import scrapy
```
| 8,029
|
64,483,669
|
I am trying to make a multi-container docker app using `docker-compose`.
**Here's what I am trying to accomplish:** I have a Python 3 app that takes a list of lists of numbers as input from an API call (`fastAPI` with a gunicorn server) and passes the numbers to a function (an ML model, actually) that returns a number, which is then sent back (in JSON, of course) as the result of that API call. That part is working absolutely fine. Problems started when I introduced a postgres container to store the inputs I receive into a postgres table; I have yet to add the part where I also access this postgres database from my local pgadmin4 app.
**Here's what I have done till now:** I am using a "docker-compose.yml" file to set up both of these containers, and here it is:
```
version: '3.8'
services:
postgres:
image: postgres:12.4
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres_password
- POSTGRES_DATABASE=postgres
docker_fastapi:
# use the Dockerfile in the current directory.
build: .
ports:
# 3000 is what I send API calls to
- "3000:3000"
# this is postgres's port
- "5432:5432"
environment:
# these are the environment variables that I am using inside psycop2 to make connection.
- POSTGRES_HOST=postgres
- POSTGRES_PORT=5432
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres_password
- POSTGRES_DATABASE=postgres
```
Here's how I am using those environment variables in `psycopg2`:
```
import os
from psycopg2 import connect
# making database connection using environement variables.
connection = connect(host=os.environ['POSTGRES_HOST'], port=os.environ['POSTGRES_PORT'],
user=os.environ['POSTGRES_USER'], password=os.environ['POSTGRES_PASSWORD'],
database=os.environ['POSTGRES_DATABASE']
)
```
here's the Dockerfile:
```
FROM tiangolo/uvicorn-gunicorn:python3.8-slim
# slim = debian-based. Not using alpine because it has poor python3 support.
LABEL maintainer="Sebastian Ramirez <tiangolo@gmail.com>"
RUN apt-get update
RUN apt-get install -y libpq-dev gcc
# copy and install from requirements.txt file
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt
# remove all the dependency files to reduce the final image size
RUN apt-get autoremove -y gcc
# copying all the code files to the container's file system
COPY ./api /app/api
WORKDIR /app/api
EXPOSE 3000
ENTRYPOINT ["uvicorn"]
CMD ["api.main:app", "--host", "0.0.0.0", "--port", "3000"]
```
And here's the error it generates for an API call I send:
```
root@naveen-hp:/home/naveen/Videos/ML-Model-serving-with-fastapi-and-Docker# # docker-compose up
Starting ml-model-serving-with-fastapi-and-docker_docker_fastapi_1 ... done
Starting ml-model-serving-with-fastapi-and-docker_postgres_1 ... done
Attaching to ml-model-serving-with-fastapi-and-docker_postgres_1, ml-model-serving-with-fastapi-and-docker_docker_fastapi_1
postgres_1 |
postgres_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
postgres_1 |
postgres_1 | 2020-10-22 13:17:14.080 UTC [1] LOG: starting PostgreSQL 12.4 (Debian 12.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
postgres_1 | 2020-10-22 13:17:14.080 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2020-10-22 13:17:14.080 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2020-10-22 13:17:14.092 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2020-10-22 13:17:14.120 UTC [24] LOG: database system was shut down at 2020-10-22 12:48:50 UTC
postgres_1 | 2020-10-22 13:17:14.130 UTC [1] LOG: database system is ready to accept connections
docker_fastapi_1 | INFO: Started server process [1]
docker_fastapi_1 | INFO: Waiting for application startup.
docker_fastapi_1 | INFO: Application startup complete.
docker_fastapi_1 | INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
docker_fastapi_1 | INFO: 172.18.0.1:56094 - "POST /predict HTTP/1.1" 500 Internal Server Error
docker_fastapi_1 | ERROR: Exception in ASGI application
docker_fastapi_1 | Traceback (most recent call last):
docker_fastapi_1 | File "/usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 391, in run_asgi
docker_fastapi_1 | result = await app(self.scope, self.receive, self.send)
docker_fastapi_1 | File "/usr/local/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
docker_fastapi_1 | return await self.app(scope, receive, send)
docker_fastapi_1 | File "/usr/local/lib/python3.8/site-packages/fastapi/applications.py", line 179, in __call__
docker_fastapi_1 | await super().__call__(scope, receive, send)
docker_fastapi_1 | File "/usr/local/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__
docker_fastapi_1 | await self.middleware_stack(scope, receive, send)
docker_fastapi_1 | File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
docker_fastapi_1 | raise exc from None
docker_fastapi_1 | File "/usr/local/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
docker_fastapi_1 | await self.app(scope, receive, _send)
docker_fastapi_1 | File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
docker_fastapi_1 | raise exc from None
docker_fastapi_1 | File "/usr/local/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__
docker_fastapi_1 | await self.app(scope, receive, sender)
docker_fastapi_1 | File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 566, in __call__
docker_fastapi_1 | await route.handle(scope, receive, send)
docker_fastapi_1 | File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 227, in handle
docker_fastapi_1 | await self.app(scope, receive, send)
docker_fastapi_1 | File "/usr/local/lib/python3.8/site-packages/starlette/routing.py", line 41, in app
docker_fastapi_1 | response = await func(request)
docker_fastapi_1 | File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 182, in app
docker_fastapi_1 | raw_response = await run_endpoint_function(
docker_fastapi_1 | File "/usr/local/lib/python3.8/site-packages/fastapi/routing.py", line 135, in run_endpoint_function
docker_fastapi_1 | return await run_in_threadpool(dependant.call, **values)
docker_fastapi_1 | File "/usr/local/lib/python3.8/site-packages/starlette/concurrency.py", line 34, in run_in_threadpool
docker_fastapi_1 | return await loop.run_in_executor(None, func, *args)
docker_fastapi_1 | File "/usr/local/lib/python3.8/concurrent/futures/thread.py", line 57, in run
docker_fastapi_1 | result = self.fn(*self.args, **self.kwargs)
docker_fastapi_1 | File "/app/api/main.py", line 83, in predict
docker_fastapi_1 | insert_into_db(X)
docker_fastapi_1 | File "/app/api/main.py", line 38, in insert_into_db
docker_fastapi_1 | cursor.execute(f"INSERT INTO public.\"API_Test\""
docker_fastapi_1 | IndexError: index 1 is out of bounds for axis 0 with size 1
```
Here's how I am sending API calls:
```
curl -X POST "http://0.0.0.0:3000/predict" -H "accept: application/json" -H "Content-Type: application/json" -d "{\"input_data\":[[
1.354e+01, 1.436e+01, 8.746e+01, 5.663e+02, 9.779e-02, 8.129e-02,
6.664e-02, 4.781e-02, 1.885e-01, 5.766e-02, 2.699e-01, 7.886e-01,
2.058e+00, 2.356e+01, 8.462e-03, 1.460e-02, 2.387e-02, 1.315e-02,
1.980e-02, 2.300e-03, 1.511e+01, 1.926e+01, 9.970e+01, 7.112e+02,
1.440e-01, 1.773e-01, 2.390e-01, 1.288e-01, 2.977e-01, 7.259e-02]]}"
```
This works just as expected when I build it against a postgres instance on AWS RDS without this second postgres container, specifying the credentials directly inside `psycopg2.connect()` without environment variables or docker-compose, and building directly with the Dockerfile shown above. So my code to insert the received data into postgres is presumably fine, and the problems started when I introduced the second container. What causes errors like these, and how do I fix this?
|
2020/10/22
|
[
"https://Stackoverflow.com/questions/64483669",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11814996/"
] |
The problem is your labels.
They have the same ids as your input fields.
Since document.getElementById("date") only finds the first occurrence of the desired id, your labels are returned.
To solve this you can change your labels to
```
<label for="date">Date: </label>
```
```html
<html>
<head>
<title>Expense Tracker</title>
<style type="text/css">
table
{
display: none;
}
</style>
</head>
<body>
<h1>Expense Tracker</h1>
<form action="">
<label for="date">Date: </label>
<input type="date" id="date">
<br>
<label for="desc">Description: </label>
<input type="text" id="desc">
<br>
<label for="amount">Amount: </label>
<input type="text" id="amount">
<br>
<input type="button" id="submit" value="Submit">
<br>
</form>
<table id="table">
<tr>
<th>Date</th>
<th>Description</th>
<th>Amount</th>
</tr>
</table>
<script type="text/javascript">
document.getElementById("submit").onclick=function()
{
document.getElementById("table").style.display="block";
var table = document.getElementById("table");
var row = table.insertRow(-1);
var date = row.insertCell(0);
var desc = row.insertCell(1);
var amt = row.insertCell(2);
date.innerHTML = document.getElementById("date").value;
desc.innerHTML = document.getElementById("desc").value;
amt.innerHTML = document.getElementById("amount").value;
return false;
}
</script>
</body>
</html>
```
|
In your HTML file, each `<label>` and `<input>` pair has the same id, which causes the problem.
For example, for the last `input`, the label has id `amount` and the input tag also has id `amount`.
So `document.getElementById("amount")` returns the `<label>` tag first, which has no value.
To solve this, change the ids of the labels so there are no duplicates.
```js
document.getElementById("submit").onclick = function () {
document.getElementById("table").style.display = "block";
var table = document.getElementById("table");
var row = table.insertRow(-1);
var date = row.insertCell(0);
var desc = row.insertCell(1);
var amt = row.insertCell(2);
date.innerHTML = document.getElementById("date").value;
desc.innerHTML = document.getElementById("desc").value;
amt.innerHTML = document.getElementById("amount").value;
return false;
}
```
```css
table {
display: none;
}
```
```html
<h1>Expense Tracker</h1>
<form action="">
<label for="date">Date: </label>
<input type="date" id="date">
<br>
<label for="desc">Description: </label>
<input type="text" id="desc">
<br>
<label for="amount">Amount: </label>
<input type="text" id="amount">
<br>
<input type="button" id="submit" value="Submit">
<br>
</form>
<table id="table">
<tr>
<th>Date</th>
<th>Description</th>
<th>Amount</th>
</tr>
</table>
```
| 8,038
|
5,253,358
|
This is the first time I have used Python.
I downloaded the file ActivePython-2.7.1.4-win32-x86
and installed it on my computer; I'm using Win7.
When I try to run a Python program, it appears and disappears very quickly; I don't have enough time to see anything on the screen. I just downloaded the file and double-clicked on it.
How do I launch this file? I know that it is a long file for a first Python tutorial.
|
2011/03/09
|
[
"https://Stackoverflow.com/questions/5253358",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/618111/"
] |
Add the line
```
input()
```
to the end of the program, with the correct indentation. The issue is that after the data is printed to the console the program finishes, so the console goes away. `input` tells the program to wait for input, so the console won't be closed when it finishes printing.
I hope you're not using that program to learn Python; it's pretty complicated!
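For instance, a trivial sketch (on Python 2, which the question is using, `raw_input()` is the safer choice, since `input()` evaluates what is typed):
```
print("program output...")
raw_input("Press Enter to exit")  # Python 2; on Python 3 use input(...) instead
```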
|
Just a bit more on this.
You have a script `myscript.py` in a folder `C:\myscripts`. This is how to set up Windows 7 so that you can type `> myscript` into a CMD window and the script will run.
1) Set your `PATH` variable to include the Python Interpreter.
Control Panel > System and Security > System > Advanced Settings > Environment Variables. You can set either the System Variables or the User Variables. Scroll down till you find `PATH`, select it, click `Edit`. The Path appears selected in a new dialog. I always copy it into Notepad to edit it, though all you need to do is add `;C:\Python27` to the end of the list. Save this.
2) Set your `PATH` variable to include `C:\myscripts`
3) Set your `PATHEXT` variable to include `;.PY`. (This is the bit that saves you from typing `myscript.py`)
This may now just work. Try opening a command window and typing `myscript`
But it may not. Windows can still mess you about. I had installed and then uninstalled a Python package and when I typed `myscript` Windows opened a box asking me which program to use. I browsed for `C:\python27\python.exe` and clicked that. Windows opened another command window ran the script and closed it before I could see what my script had done! To fix this when Windows opens its dialog select your Python and *click the "Always do this" checkbox at the bottom.* Then it doesn't open and close another window and things work as they should. Or they did for me.
Added: Above does not say how to pass arguments to your script. For this see answer [Windows fails to pass arguments to python script](https://stackoverflow.com/questions/6109582/windows-fails-to-pass-the-args-to-a-python-script)
| 8,039
|
35,600,152
|
I am deploying a Django project on apache2 using mod_wsgi, but the problem is that the server doesn't serve pages and hangs for 10 minutes before giving an error:
```
End of script output before headers
```
This is my **`site-available/000-default.conf`**:
```sh
ServerAdmin webmaster@localhost
DocumentRoot /home/artfact/arTfact_webSite/
Alias /static /home/artfact/arTfact_webSite/static
<Directory /home/artfact/arTfact_webSite/static>
Order allow,deny
Allow from all
Require all granted
</Directory>
<Directory /home/artfact/arTfact_webSite>
Order allow,deny
Allow from all
<Files wsgi.py>
Require all granted
</Files>
</Directory>
WSGIDaemonProcess artfact_site processes=5 threads=25 python-path=/home/artfact/anaconda/lib/python2.7/site-packages/:/home/artfact/arTfact_webSite
WSGIProcessGroup artfact_site
WSGIScriptAlias / /home/artfact/arTfact_webSite/arTfact_webSite/wsgi.py
```
**settings.py**
```
"""
Django settings for arTfact_webSite project.
Generated by 'django-admin startproject' using Django 1.8.5.
For more information on this file, see
https://docs.djangoproject.com/en/1.8/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.8/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = xxxx
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'website',
'blog',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.security.SecurityMiddleware',
)
ROOT_URLCONF = 'arTfact_webSite.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'arTfact_webSite.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.8/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.8/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'Europe/Paris'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.8/howto/static-files/
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR,'media')
```
**wsgi.py**
```py
"""
WSGI config for arTfact_webSite project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/1.8/howto/deployment/wsgi/
"""
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "arTfact_webSite.settings")
application = get_wsgi_application()
```
**Project structure**
```sh
arTfact_webSite/
├── arTfact_webSite
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── settings.py
│ ├── settings.pyc
│ ├── urls.py
│ ├── urls.pyc
│ ├── wsgi.py
│ └── wsgi.pyc
├── blog
├── static
├── media
└── website
├── admin.py
├── admin.pyc
├── forms.py
├── forms.pyc
├── general_analyser.py
├── general_analyser.pyc
├── __init__.py
├── __init__.pyc
├── migrations
│ ├── __init__.py
│ └── __init__.pyc
├── models.py
├── models.pyc
├── send_mail.py
├── send_mail.pyc
├── static
│ └── website
├── templates
│ └── website
├── tests.py
├── tests.pyc
├── urls.py
├── urls.pyc
├── views.py
└── views.pyc
```
In the **arTfact\_webSite/urls.py**
```py
urlpatterns = [
url(r'^/*', include('website.urls')),
]+static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
```
In the **website/urls.py**
```py
urlpatterns = [
url(r'^$', views.index, name='index'),
]
```
am I doing something wrong here?
|
2016/02/24
|
[
"https://Stackoverflow.com/questions/35600152",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4759209/"
] |
It seems you have an **'a'** in your *wsgi.py* file between the lines
```
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "arTfact_webSite.settings")
a
application = get_wsgi_application()
```
no sure if this is in your actual file as well.
|
Try running the command `apachectl configtest`.
This should help you isolate what is broken in your apache configuration. See this link for more information:
<https://httpd.apache.org/docs/2.4/programs/apachectl.html>
If it reports 'Syntax OK', then you know that it's a configuration **detail** problem rather than a configuration syntax problem.
Otherwise, posting your apache2 logs would be helpful.
| 8,043
|
33,713,513
|
I want to use Drupal for building a Genealogy application. The difficulty, I see, is in allowing users to upload a gedcom file and for it to be parsed and then from that data, various Drupal nodes would be created. Nodes in Drupal are content items. So, I'd have individuals and families as content types and each would have various events as fields that would come from the gedcom file.
The gedcom file is a generic format for genealogy information. Different desktop applications take the data and convert it to a proprietary format. What I am not sure about how to accomplish is to give the end user a form to upload their gedcom file and then have it create nodes, aka content items, with Drupal. I can find open source code to parse a gedcom file, using python, or perl, or other languages. So, I could use one of these libraries to create from the gedcom file output in JSON, XML, CSV, etc.
Drupal is written in PHP, and another detail of the challenge is that I don't want to ask the end user to find the file created in step one (where step one is parsing and converting the gedcom file) and upload it into Drupal. I'd like to somehow make this happen in one step as far as the end user sees. Somehow I would need a way to trigger Drupal to import the data after it is converted into JSON, XML or CSV.
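To make the first step concrete, here is a minimal, deliberately simplified sketch of the parse-and-convert I have in mind (GEDCOM lines look like `<level> [@id@] TAG [value]`; the file name `family.ged` is just an example, and real files need much more care):
```
import json

def gedcom_to_records(path):
    """Group a GEDCOM file into its level-0 records (deliberately simplified)."""
    records, current = [], None
    with open(path) as fh:
        for raw in fh:
            parts = raw.rstrip("\r\n").split(" ", 2)
            if not parts[0].isdigit():
                continue  # skip blank or malformed lines
            level = int(parts[0])
            if level == 0:
                if current is not None:
                    records.append(current)
                current = {"id": parts[1] if len(parts) > 1 else "", "fields": []}
            elif current is not None:
                current["fields"].append({
                    "level": level,
                    "tag": parts[1] if len(parts) > 1 else "",
                    "value": parts[2] if len(parts) > 2 else "",
                })
    if current is not None:
        records.append(current)
    return records

print(json.dumps(gedcom_to_records("family.ged"), indent=2))
```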
|
2015/11/14
|
[
"https://Stackoverflow.com/questions/33713513",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/784304/"
] |
The calls `Magic(in - 1);` are nested. If the number is even, it is printed immediately and then `Magic(in - 1);` is called. Only when `n` reaches zero do the calls return and print the odd numbers in reverse order. The first odd number is printed by the deepest `Magic()` call:
```
Magic(10)
|print 10
|Magic(9)
| |Magic(8)
| | print 8
| | ...
| | Magic(1)
| | Magic(0)
| | return;
| | print 1
| | return
| | ...
| | return
| |print 9
| |return
|return
```
|
This is caused by the recursion of the function: the calls return in the reverse of the order they were made. If you want to print the odd numbers in decreasing order after the even numbers, you need to save them in a variable (an array) that is also passed to the magic function.
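A minimal Python sketch of that idea (the original `Magic` code isn't shown, so its structure is assumed):
```
def magic(n, odds):
    if n == 0:
        return
    if n % 2 == 0:
        print(n)        # even numbers are printed on the way down
    else:
        odds.append(n)  # odd numbers are saved for later
    magic(n - 1, odds)

odds = []
magic(10, odds)
for n in odds:  # collected from 9 down to 1, so already in decreasing order
    print(n)
```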
| 8,046
|
597,289
|
I'm trying to jury-rig the [Amazon S3 python library](http://developer.amazonwebservices.com/connect/entry.jspa?externalID=134) to allow chunked handling of large files. Right now it does a "self.body = http\_response.read()", so if you have a 3G file you're going to read the entire thing into memory before getting any control over it.
My current approach is to try to keep the interface for the library the same but provide a callback after reading each chunk of data. Something like the following:
```
data = []
while True:
chunk = http_response.read(CHUNKSIZE)
if not chunk:
break
if callback:
callback(chunk)
data.append(chunk)
```
Now I need to do something like:
```
self.body = ''.join(data)
```
Is *join* the right way to do this or is there another (better) way of putting all the chunks together?
|
2009/02/28
|
[
"https://Stackoverflow.com/questions/597289",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13055/"
] |
In python3, `bytes` objects are distinct from `str`, but I don't know any reason why there would be anything wrong with this.
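A quick illustration:
```
chunks = [b"chunk one ", b"chunk two"]
body = b"".join(chunks)  # joining bytes chunks needs a bytes separator
print(body)              # b'chunk one chunk two'
```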
|
`join` seems fine if you really do need to put the entire string together, but then you just wind up storing the whole thing in RAM anyway. In a situation like this, I would try to see if there's a way to process each part of the string and then discard the processed part, so you only need to hold a fixed number of bytes in memory at a time. That's usually the point of the callback approach. (If you can only process part of a chunk at a time, use a buffer as a queue to store the unprocessed data.)
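A minimal sketch of the process-and-discard idea (the running checksum is purely illustrative, and `http_response` and `CHUNKSIZE` come from the question):
```
import hashlib

digest = hashlib.md5()  # stand-in for whatever incremental work you need
while True:
    chunk = http_response.read(CHUNKSIZE)
    if not chunk:
        break
    digest.update(chunk)  # process the chunk, then let it be garbage-collected
print(digest.hexdigest())  # only fixed-size state was ever held in memory
```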
| 8,047
|
22,890,598
|
I have a function which calculates the Jaccard index for two parsed strings. The function is working OK and its code is below:
```
def jack(a,b):
x=a.split()
y=b.split()
k=float(len(list(set(x)&set(y))))/float(len(list(set(x) | set(y))))
return k
```
However, when I want to apply the function to any two elements of a list, an error appears. My list is called "a" and it looks like this: [["Coca Cola"], ["Coca Sc"]]. The error message is:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-51-0d7031267380> in <module>()
----> 1 jack(a[2],a[3])
<ipython-input-27-256123b04a44> in jack(a, b)
1 def jack(a,b):
----> 2 x=a.split()
3 y=b.split()
4 k=float(len(list(set(x)&set(y))))/float(len(list(set(x) | set(y))))
5 return k
AttributeError: 'list' object has no attribute 'split'
```
I know it's because a[2] is a list too, but I would like to find a way to deal with this to get the expected output. Maybe I can modify my function or the way I pass the input.
|
2014/04/06
|
[
"https://Stackoverflow.com/questions/22890598",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2825079/"
] |
Since you have one element lists and you are passing the lists as the parameters whereas your function expects strings, I would recommend you to invoke your function like this
```
jack(a[2][0], a[3][0])
```
Also, you don't have to convert the `set` to a `list` to find the length.
```
return float(len(set(x) & set(y))) / float(len(set(x) | set(y)))
```
should be enough here.
|
That is because your variable `a` is a nested list. You should either flatten `a` or pass the arguments as:
`jack(a[2][0],a[3][0])`
### Or, you could flatten your list as:
`a = [i[0] for i in a]`
then you can easily do:
`jack(a[0],a[1])`
| 8,053
|
21,269,702
|
I’m using wxPython to write an app that will run under OS X, Windows, and Linux. I’m trying to implement the standard “Close Window” menu item, but I’m not sure how to find out which window is frontmost. WX has a [`GetActiveWindow` function](http://wxpython.org/Phoenix/docs/html/functions.html#GetActiveWindow), but apparently this only works under Windows and GTK. Is there a built-in way to do this on OS X? (And if not, why not?)
|
2014/01/21
|
[
"https://Stackoverflow.com/questions/21269702",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/371228/"
] |
Okay, I finally managed to get this program working; I've summarized it below. I hope this might help someone else stuck on ex17.
First, I removed the MAX\_DATA and MAX\_ROWS constants and changed the structs like so:
```
struct Address {
int id;
int set;
char *name;
char *email;
};
struct Database {
int max_data;
int max_rows;
struct Address **rows;
};
struct Connection {
FILE *file;
struct Database *db;
};
```
I assign `max_data` and `max_rows` to the new variables in the struct and then write them to the file.
```
conn->db->max_data = max_data;
conn->db->max_rows = max_rows;
int rc = fwrite(&conn->db->max_data, sizeof(int), 1, conn->file);
rc = fwrite(&conn->db->max_rows, sizeof(int), 1, conn->file);
```
Now I can run my program and replace `MAX_ROWS` & `MAX_DATA` with `conn->db->max_rows` & `conn->db->max_data`.
|
One way is to change your arrays into pointers. Then you could write an alloc\_db function which would use the max\_row and max\_data values to allocate the needed memory.
```
struct Address {
int id;
int set;
char* name;
char* email;
};
struct Database {
struct Address* rows;
unsigned int max_row;
unsigned int max_data;
};
struct Connection {
FILE *file;
struct Database *db;
};
```
| 8,054
|
9,560,616
|
I am using ArcGIS focal statistics tool to add spatial autocorrelation to a random raster to model error in DEMs. The input DEM has a 1.5m pixel size and the semivariogram exhibits a sill around 2000m. I want to make sure to model the extent of the autocorrelation in the input in my model.
Unfortunately, ArcGIS requires that the input kernel be in ASCII format, where the first line defines the size and the subsequent lines define the weights.
Example:
```
5 5
1 1 1 1 1
1 2 2 2 1
1 2 3 2 1
1 2 2 2 1
1 1 1 1 1
```
I need to generate a 1333x1333 kernel with inverse distance weighting and immediately went to Python to get this done. Is it possible to generate a matrix in numpy and assign values by ring? Does a better programmatic tool within numpy exist to generate a plain-text matrix?
This is similar to [this question](https://stackoverflow.com/questions/7598264/generate-a-patterned-numpy-matrix), but I need to have a fixed central value and descending rings, as per the example above.
Note: I am a student, but this not a homework assignment...those ended years ago. This is a part of a larger research project that I am working on and any help (even just a nudge in the right direction) would be appreciated. The focus of this work is not programming kernels, but exploring errors in DEMs.
|
2012/03/05
|
[
"https://Stackoverflow.com/questions/9560616",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/839375/"
] |
I'm not sure if there is a built-in way, but it should not be hard to roll your own:
```
>>> def kernel_thing(N):
... import numpy as np
... n = N // 2 + 1
... a = np.zeros((N, N), dtype=int)
... for i in xrange(n):
... a[i:N-i, i:N-i] += 1
... return a
...
>>> def kernel_to_string(a):
... return '{} {}\n'.format(a.shape[0], a.shape[1]) + '\n'.join(' '.join(str(element) for element in row) for row in a)
...
>>> print kernel_to_string(kernel_thing(5))
5 5
1 1 1 1 1
1 2 2 2 1
1 2 3 2 1
1 2 2 2 1
1 1 1 1 1
>>> print kernel_to_string(kernel_thing(6))
6 6
1 1 1 1 1 1
1 2 2 2 2 1
1 2 3 3 2 1
1 2 3 3 2 1
1 2 2 2 2 1
1 1 1 1 1 1
>>> print kernel_to_string(kernel_thing(17))
17 17
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1
1 2 3 3 3 3 3 3 3 3 3 3 3 3 3 2 1
1 2 3 4 4 4 4 4 4 4 4 4 4 4 3 2 1
1 2 3 4 5 5 5 5 5 5 5 5 5 4 3 2 1
1 2 3 4 5 6 6 6 6 6 6 6 5 4 3 2 1
1 2 3 4 5 6 7 7 7 7 7 6 5 4 3 2 1
1 2 3 4 5 6 7 8 8 8 7 6 5 4 3 2 1
1 2 3 4 5 6 7 8 9 8 7 6 5 4 3 2 1
1 2 3 4 5 6 7 8 8 8 7 6 5 4 3 2 1
1 2 3 4 5 6 7 7 7 7 7 6 5 4 3 2 1
1 2 3 4 5 6 6 6 6 6 6 6 5 4 3 2 1
1 2 3 4 5 5 5 5 5 5 5 5 5 4 3 2 1
1 2 3 4 4 4 4 4 4 4 4 4 4 4 3 2 1
1 2 3 3 3 3 3 3 3 3 3 3 3 3 3 2 1
1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
```
|
[Hmmph. @wim beat me, but I'd already written the following, so I'll post it anyway.] Short version:
```
import numpy
N = 5
# get grid coords
xx, yy = numpy.mgrid[0:N,0:N]
# get the distance weights
kernel = 1 + N//2 - numpy.maximum(abs(xx-N//2), abs(yy-N//2))
with open('kernel.out','w') as fp:
# header
fp.write("{} {}\n".format(N, N))
# integer matrix output
numpy.savetxt(fp, kernel, fmt="%d")
```
which produces
```
~/coding$ python kernel.py
~/coding$ cat kernel.out
5 5
1 1 1 1 1
1 2 2 2 1
1 2 3 2 1
1 2 2 2 1
1 1 1 1 1
```
---
Verbose explanation of the magic: the first thing we're going to need are the indices of each entry in the matrix, and for that we can use [mgrid](http://docs.scipy.org/doc/numpy/reference/generated/numpy.mgrid.html):
```
>>> import numpy
>>> N = 5
>>> xx, yy = numpy.mgrid[0:N,0:N]
>>> xx
array([[0, 0, 0, 0, 0],
[1, 1, 1, 1, 1],
[2, 2, 2, 2, 2],
[3, 3, 3, 3, 3],
[4, 4, 4, 4, 4]])
>>> yy
array([[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]])
```
So, taken pairwise, these are the x and y coordinates of each element of a 5x5 array. The centre will be at N//2,N//2 (where // is truncating division), so we can subtract that to get the distances, and take the absolute value because we don't care about the sign:
```
>>> abs(xx-N//2)
array([[2, 2, 2, 2, 2],
[1, 1, 1, 1, 1],
[0, 0, 0, 0, 0],
[1, 1, 1, 1, 1],
[2, 2, 2, 2, 2]])
>>> abs(yy-N//2)
array([[2, 1, 0, 1, 2],
[2, 1, 0, 1, 2],
[2, 1, 0, 1, 2],
[2, 1, 0, 1, 2],
[2, 1, 0, 1, 2]])
```
Now looking at the original grid, it looks like you want the maximum value of the two:
```
>>> numpy.maximum(abs(xx-N//2), abs(yy-N//2))
array([[2, 2, 2, 2, 2],
[2, 1, 1, 1, 2],
[2, 1, 0, 1, 2],
[2, 1, 1, 1, 2],
[2, 2, 2, 2, 2]])
```
which looks good but goes the wrong way. We can invert, though:
```
>>> N//2 - numpy.maximum(abs(xx-N//2), abs(yy-N//2))
array([[0, 0, 0, 0, 0],
[0, 1, 1, 1, 0],
[0, 1, 2, 1, 0],
[0, 1, 1, 1, 0],
[0, 0, 0, 0, 0]])
```
and you want 1-indexing, it looks like, so:
```
>>> 1 + N//2 - numpy.maximum(abs(xx-N//2), abs(yy-N//2))
array([[1, 1, 1, 1, 1],
[1, 2, 2, 2, 1],
[1, 2, 3, 2, 1],
[1, 2, 2, 2, 1],
[1, 1, 1, 1, 1]])
```
and there we have it. Everything else is boring IO.
| 8,056
|
54,619,732
|
I am developing a model for a multi-class classification problem (4 classes) using Keras with the TensorFlow backend. The values of `y_test` have a 2D format:
```
0 1 0 0
0 0 1 0
0 0 1 0
```
This is the function that I use to calculate a balanced accuracy:
```
def my_metric(targ, predict):
val_predict = predict
val_targ = tf.math.argmax(targ, axis=1)
return metrics.balanced_accuracy_score(val_targ, val_predict)
```
And this is the model:
```
hidden_neurons = 50
timestamps = 20
nb_features = 18
model = Sequential()
model.add(LSTM(
units=hidden_neurons,
return_sequences=True,
input_shape=(timestamps,nb_features),
dropout=0.15
#recurrent_dropout=0.2
)
)
model.add(TimeDistributed(Dense(units=round(timestamps/2),activation='sigmoid')))
model.add(Dense(units=hidden_neurons,
activation='sigmoid'))
model.add(Flatten())
model.add(Dense(units=nb_classes,
activation='softmax'))
model.compile(loss="categorical_crossentropy",
metrics = [my_metric],
optimizer='adadelta')
```
When I run this code, I get this error:
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
     30 model.compile(loss="categorical_crossentropy",
     31               metrics = [my_metric], #'accuracy',
---> 32               optimizer='adadelta')

~/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, **kwargs)
    449                 output_metrics = nested_metrics[i]
    450                 output_weighted_metrics = nested_weighted_metrics[i]
--> 451                 handle_metrics(output_metrics)
    452                 handle_metrics(output_weighted_metrics, weights=weights)
    453

~/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in handle_metrics(metrics, weights)
    418                     metric_result = weighted_metric_fn(y_true, y_pred,
    419                                                        weights=weights,
--> 420                                                        mask=masks[i])
    421
    422                 # Append to self.metrics_names, self.metric_tensors,

~/anaconda3/lib/python3.6/site-packages/keras/engine/training_utils.py in weighted(y_true, y_pred, weights, mask)
    402     """
    403     # score_array has ndim >= 2
--> 404     score_array = fn(y_true, y_pred)
    405     if mask is not None:
    406         # Cast the mask to floatX to avoid float64 upcasting in Theano

<ipython-input> in my_metric(targ, predict)
     22     val_predict = predict
     23     val_targ = tf.math.argmax(targ, axis=1)
---> 24     return metrics.balanced_accuracy_score(val_targ, val_predict)
     25     #return 5
     26

~/anaconda3/lib/python3.6/site-packages/sklearn/metrics/classification.py in balanced_accuracy_score(y_true, y_pred, sample_weight, adjusted)
   1431
   1432     """
-> 1433     C = confusion_matrix(y_true, y_pred, sample_weight=sample_weight)
   1434     with np.errstate(divide='ignore', invalid='ignore'):
   1435         per_class = np.diag(C) / C.sum(axis=1)

~/anaconda3/lib/python3.6/site-packages/sklearn/metrics/classification.py in confusion_matrix(y_true, y_pred, labels, sample_weight)
    251
    252     """
--> 253     y_type, y_true, y_pred = _check_targets(y_true, y_pred)
    254     if y_type not in ("binary", "multiclass"):
    255         raise ValueError("%s is not supported" % y_type)

~/anaconda3/lib/python3.6/site-packages/sklearn/metrics/classification.py in _check_targets(y_true, y_pred)
     69     y_pred : array or indicator matrix
     70     """
---> 71     check_consistent_length(y_true, y_pred)
     72     type_true = type_of_target(y_true)
     73     type_pred = type_of_target(y_pred)

~/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py in check_consistent_length(*arrays)
    229     """
    230
--> 231     lengths = [_num_samples(X) for X in arrays if X is not None]
    232     uniques = np.unique(lengths)
    233     if len(uniques) > 1:

~/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py in <listcomp>(.0)
    229     """
    230
--> 231     lengths = [_num_samples(X) for X in arrays if X is not None]
    232     uniques = np.unique(lengths)
    233     if len(uniques) > 1:

~/anaconda3/lib/python3.6/site-packages/sklearn/utils/validation.py in _num_samples(x)
    146             return x.shape[0]
    147         else:
--> 148             return len(x)
    149     else:
    150         return len(x)

TypeError: object of type 'Tensor' has no len()
```
|
2019/02/10
|
[
"https://Stackoverflow.com/questions/54619732",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9585135/"
] |
You cannot call a sklearn function on a Keras tensor. You'll need to implement the functionality yourself using Keras' backend functions, or TensorFlow functions if you are using the TF backend.
The `balanced_accuracy_score` is defined [as the average of the recall](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.balanced_accuracy_score.html) obtained in each column. Check [this link](https://gist.github.com/dgrahn/f68447e6cc83989c51617571396020f9) for implementations of precision and recall. As for the `balanced_accuracy_score`, you can implement it as follows:
```
import keras.backend as K
def balanced_recall(y_true, y_pred):
"""
Computes the average per-column recall metric
for a multi-class classification problem
"""
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)), axis=0)
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)), axis=0)
recall = true_positives / (possible_positives + K.epsilon())
balanced_recall = K.mean(recall)
return balanced_recall
```
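Then the custom metric is passed to `compile` like any built-in one, e.g. (a sketch of the relevant call):

```python
model.compile(loss="categorical_crossentropy",
              metrics=[balanced_recall],  # Keras calls this with (y_true, y_pred) tensors
              optimizer='adadelta')
```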
|
try :
`pip install --upgrade tensorflow`
| 8,057
|
37,463,506
|
I am trying to open a Word document with Python on Windows, but I am unfamiliar with Windows.
My code is as follows.
```
import docx as dc
doc = dc.Document(r'C:\Users\justin.white\Desktop\01100-Allergan-UD1314-SUMMARY OF WORK.docx')
```
Through another post, I learned that I had to put the r in front of my string to convert it to a raw string or it would interpret the \U as an escape sequence.
The error I get is
```
PackageNotFoundError: Package not found at 'C:\Users\justin.white\Desktop\01100-Allergan-UD1314-SUMMARY OF WORK.docx'
```
I'm unsure of why it cannot find my document, 01100-Allergan-UD1314-SUMMARY OF WORK.docx. The pathway is correct as I copied it directly from the file system.
Any help is appreciated thanks.
|
2016/05/26
|
[
"https://Stackoverflow.com/questions/37463506",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4673518/"
] |
try this
```
from StringIO import StringIO  # Python 2; on Python 3 use io.BytesIO instead
from docx import Document
file = r'H:\myfolder\wordfile.docx'
# a .docx is a zip archive, so open it in binary mode
with open(file, 'rb') as f:
    source_stream = StringIO(f.read())
document = Document(source_stream)
source_stream.close()
```
<http://python-docx.readthedocs.io/en/latest/user/documents.html>
Also, regarding debugging the file-not-found error, simplify your directory names and file names. Rename the file to 'file' instead of referring to a long path with spaces, etc.
|
If you want to open the document in Microsoft Word try using `os.startfile()`.
In your example it would be:
```
os.startfile(r'C:\Users\justin.white\Desktop\01100-Allergan-UD1314-SUMMARY OF WORK.docx')
```
This will open the document in word on your computer.
| 8,058
|
32,734,437
|
I got a file with text in the form:
```
a:
b(0.1),
c(0.33),
d:
e(0.21),
f(0.41),
g(0.5),
k(0.8),
h:
y(0.9),
```
And I want get the following form:
```
a: b(0.1), c(0.33)
d: e(0.21), f(0.41), g(0.5), k(0.8)
h: y(0.9)
```
In Python, I have tried:
```
for line in menu:
for i in line:
if i == ":":
```
but I do not know how to print the text before and after each ':' (up to the next ':') as one line,
and also how to delete the ',' at the end of each line.
|
2015/09/23
|
[
"https://Stackoverflow.com/questions/32734437",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5316423/"
] |
```
import re
one_line = ''.join(menu).replace('\n', ' ')
print re.sub(', ([a-z]+:)', r'\n\1', one_line)[:-1]
```
You might have to tweak the `one_line` to match your input better.
|
I am not exactly sure if you want to print the stuff or actually manipulate the file. But in the case of just printing:
```
from __future__ import print_function
from itertools import tee, islice, chain, izip
def previous_and_next(some_iterable):
prevs, items, nexts = tee(some_iterable, 3)
prevs = chain([None], prevs)
nexts = chain(islice(nexts, 1, None), [None])
return izip(prevs, items, nexts)
with open('test.txt') as f:
for i, (prev, line, next) in enumerate(previous_and_next(f)):
if ":" in line and i != 0: #Add a newline before every ":" except the first.
print()
if not next or ":" in next: #Check if the next line starts a new key (contains a colon); if so, strip the trailing comma. "not next" is needed because when there is no next line, None is returned.
print(line.rstrip()[:-1], end=" ")
else: #Otherwise just strip the newline and don't print any newline.
print(line.rstrip(), end=" ")
```
Using [this helper function](https://stackoverflow.com/a/1012089/667648).
| 8,059
|
63,811,316
|
I am running celery worker(version 4.4) on windows machine, when I run the worker with `-P eventlet` option it throws Attribute error.
Error logs are as follows:-
```
pipenv run celery worker -A src.celery_app -l info -P eventlet --without-mingle --without-heartbeat --without-gossip -Q queue1 -n worker1
Traceback (most recent call last):
File "d:\python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "d:\python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\G-US01.Test\.virtualenvs\celery-z5n-38Vt\Scripts\celery.exe\__main__.py", line 7, in <module>
File "c:\users\g-us01.test\.virtualenvs\celery-z5n-38vt\lib\site-packages\celery\__main__.py", line 14, in main
maybe_patch_concurrency()
File "c:\users\g-us01.test\.virtualenvs\celery-z5n-38vt\lib\site-packages\celery\__init__.py", line 152, in maybe_patch_concurrency
patcher()
File "c:\users\g-us01.test\.virtualenvs\celery-z5n-38vt\lib\site-packages\celery\__init__.py", line 109, in _patch_eventlet
eventlet.monkey_patch()
File "c:\users\g-us01.test\.virtualenvs\celery-z5n-38vt\lib\site-packages\eventlet\patcher.py", line 334, in monkey_patch
fix_threading_active()
File "c:\users\g-us01.test\.virtualenvs\celery-z5n-38vt\lib\site-packages\eventlet\patcher.py", line 331, in fix_threading_active
_os.register_at_fork(
AttributeError: module 'os' has no attribute 'register_at_fork'
```
I have installed eventlet in virtual environment, the pipfile contents are as follows:-
```
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true
[dev-packages]
rope = "*"
autopep8 = "*"
[packages]
eventlet = "*"
psutil = "*"
celery = "*"
pythonnet = "*"
redis = "*"
gevent = "*"
[requires]
python_version = "3.7"
```
Please let me know where I am going wrong.
|
2020/09/09
|
[
"https://Stackoverflow.com/questions/63811316",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5605353/"
] |
`os.register_at_fork` is a new function available since `Python 3.7`; it is only available on Unix systems ([Source from Python doc](https://docs.python.org/3.8/library/os.html#os.register_at_fork)), and Eventlet uses it to patch the `threading` library.
There is an issue opened in Eventlet Github:
<https://github.com/eventlet/eventlet/issues/644>
So I don't think `Celery -A app worker -l info -P eventlet` is any longer a good alternative for Windows.
Personally I ran:
```
Celery -A app worker -l info
```
And it worked with `Windows 10`, `Python 3.7` and `Celery 4.4.7`.
|
One reason you are facing this is that Celery works on a pre-fork model.
So if the underlying OS does not support forking, you will have a tough time running Celery. As far as I know, this model does not exist for the Windows kernel.
You can still use Cygwin if you want to make it work on Windows, or use the Windows Subsystem for Linux.
Sources:-
1. [How to run celery on windows?](https://stackoverflow.com/questions/37255548/how-to-run-celery-on-windows)
2. <https://docs.celeryproject.org/en/stable/faq.html#does-celery-support-windows>
| 8,061
|
18,233,399
|
I have a FAT32 partition image file dump, for example created with dd. How can I parse this file with Python and extract a desired file from this partition?
|
2013/08/14
|
[
"https://Stackoverflow.com/questions/18233399",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2460058/"
] |
As far as reading a FAT32 filesystem image in Python goes, the [Wikipedia page](http://en.wikipedia.org/wiki/FAT32) has all the detail you need to write a read-only implementation.
[Construct](http://construct.readthedocs.org/en/latest/) may be of some use. Looks like they have an example for FAT16 (<https://github.com/construct/construct/blob/master/construct/formats/filesystem/fat16.py>) which you could try extending.
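As a starting point for the read-only route, here is a minimal sketch that pulls a few BIOS Parameter Block fields out of the boot sector (the offsets follow the standard FAT layout described on that page; the file name is a placeholder):

```python
import struct

def read_bpb(image_path):
    """Parse a few BIOS Parameter Block fields from a raw FAT image."""
    with open(image_path, "rb") as f:
        boot = f.read(512)  # the boot sector is the first 512 bytes
    bytes_per_sector = struct.unpack_from("<H", boot, 11)[0]
    sectors_per_cluster = struct.unpack_from("<B", boot, 13)[0]
    reserved_sectors = struct.unpack_from("<H", boot, 14)[0]
    num_fats = struct.unpack_from("<B", boot, 16)[0]
    return bytes_per_sector, sectors_per_cluster, reserved_sectors, num_fats

print(read_bpb("partition.img"))
```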
|
Just found out this nice [lib7zip bindings](https://github.com/topia/pylib7zip) that can read RAW FAT images (and [much more](https://7zip.bugaco.com/7zip/MANUAL/general/formats.htm)).
Example usage:
```py
# pip install git+https://github.com/topia/pylib7zip
from zlib import crc32  # needed below for the in-memory CRC computation
from lib7zip import Archive, formats
archive = Archive("fd.ima", forcetype="FAT")
# iterate over archive contents
for f in archive:
if f.is_dir:
continue
print("; %12s %s %s" % ( f.size, f.mtime.strftime("%H:%M.%S %Y-%m-%d"), f.path))
f_crc = f.crc
if not f_crc:
# extract in memory and compute crc32
f_crc = -1
try:
f_contents = f.contents
except:
# possible extraction error
continue
if len(f_contents) > 0:
f_crc = crc32(f_contents)
# end if
print("%s %08X " % ( f.path, f_crc ) )
```
An alternative is [pyfatfs](https://pypi.org/project/pyfatfs/) (untested by me).
| 8,066
|
35,569,042
|
I apologize if this is a silly question, but I have been trying to teach myself how to use BeautifulSoup so that I can create a few projects.
I was following this link as a tutorial: <https://www.youtube.com/watch?v=5GzVNi0oTxQ>
After following the exact same code as him, this is the error that I get:
```
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/request.py", line 1240, in do_open
h.request(req.get_method(), req.selector, req.data, headers)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1083, in request
self._send_request(method, url, body, headers)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1128, in _send_request
self.endheaders(body)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1079, in endheaders
self._send_output(message_body)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 911, in _send_output
self.send(msg)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 854, in send
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1237, in connect
server_hostname=server_hostname)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/ssl.py", line 376, in wrap_socket
_context=self)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/ssl.py", line 747, in __init__
self.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/ssl.py", line 983, in do_handshake
self._sslobj.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/ssl.py", line 628, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)
```
During handling of the above exception, another exception occurred:
```
Traceback (most recent call last):
File "WorldCup.py", line 3, in <module>
x = urllib.request.urlopen('https://www.google.com')
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/request.py", line 162, in urlopen
return opener.open(url, data, timeout)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/request.py", line 465, in open
response = self._open(req, data)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/request.py", line 483, in _open
'_open', req)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/request.py", line 443, in _call_chain
result = func(*args)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/request.py", line 1283, in https_open
context=self._context, check_hostname=self._check_hostname)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/urllib/request.py", line 1242, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)>
```
Can someone help me figure out how to fix this?
|
2016/02/23
|
[
"https://Stackoverflow.com/questions/35569042",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4790055/"
] |
On Debian 9 I had to:
```
$ sudo update-ca-certificates --fresh
$ export SSL_CERT_DIR=/etc/ssl/certs
```
I'm not sure why, but this environment variable was never set.
|
This has changed in recent versions of the ssl library. The SSLContext was moved to its own property. This is the equivalent of Jia's answer in Python 3.8:
```
import ssl
ssl.SSLContext.verify_mode = ssl.VerifyMode.CERT_OPTIONAL
```
| 8,067
|
50,640,716
|
I am using MACOS 10.12.6
I was trying to uninstall python to reinstall it, and I foolishly typed these commands into my terminals.
```
sudo rm -rf /Users/<myusername>/anaconda2/lib/python2.7
sudo rm -rf /Users/<myusername>/anaconda2/lib/python27.zip
sudo rm -rf /Users/<myusername>/anaconda2/lib/python2.7/plat-darwin
sudo rm -rf /Users/<myusername>/anaconda2/lib/plat-mac
sudo rm -rf /Users/<myusername>/anaconda2/lib/plat-mac/lib-scriptpackages
```
now my Python won't work. I get these errors:
```
>Could not find platform independent libraries <prefix>
>Could not find platform dependent libraries <exec_prefix>
>Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
```
And when I try to run python I get things such as
```
>ModuleNotFoundError: No module named 'pandas'
```
I currently cannot do anything which requires python
I came to understand much later that what I did was deleting an important part of python files from my computer.
Is there any way I can reinstall python or is formatting my computer the only option if I want to use Python on this computer?
|
2018/06/01
|
[
"https://Stackoverflow.com/questions/50640716",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9880455/"
] |
Since you used Anaconda on your Mac, you should be able to just reinstall Python 2.7. If you still have the install package, Anaconda2-5.2.0-MacOSX-x86\_64.pkg, just double-click it and follow the directions. If you don't have this package, download it from [here](https://www.anaconda.com/download/#macos) and double-click it when the download completes.
|
You only deleted Anaconda, not the System Python.
Therefore, you probably only need to edit your PATH variable to remove references to those folders.
Check your `~/.bashrc`
| 8,077
|
62,142,223
|
I've installed the SimpleITK package from source on Python 3. When I run the provided registration example:
```
#!/usr/bin/env python
#=========================================================================
#
# Copyright NumFOCUS
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0.txt
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#=========================================================================
from __future__ import print_function
import SimpleITK as sitk
import sys
import os
def command_iteration(method) :
print("{0:3} = {1:10.5f} : {2}".format(method.GetOptimizerIteration(),
method.GetMetricValue(),
method.GetOptimizerPosition()))
if len ( sys.argv ) < 4:
print( "Usage: {0} <fixedImageFilter> <movingImageFile>
<outputTransformFile>".format(sys.argv[0]))
sys.exit ( 1 )
fixed = sitk.ReadImage(sys.argv[1], sitk.sitkFloat32)
moving = sitk.ReadImage(sys.argv[2], sitk.sitkFloat32)
R = sitk.ImageRegistrationMethod()
R.SetMetricAsMeanSquares()
R.SetOptimizerAsRegularStepGradientDescent(4.0, .01, 200 )
R.SetInitialTransform(sitk.TranslationTransform(fixed.GetDimension()))
R.SetInterpolator(sitk.sitkLinear)
R.AddCommand( sitk.sitkIterationEvent, lambda: command_iteration(R) )
outTx = R.Execute(fixed, moving)
print("-------")
print(outTx)
print("Optimizer stop condition:
{0}".format(R.GetOptimizerStopConditionDescription()))
print(" Iteration: {0}".format(R.GetOptimizerIteration()))
print(" Metric value: {0}".format(R.GetMetricValue()))
sitk.WriteTransform(outTx, sys.argv[3])
if ( not "SITK_NOSHOW" in os.environ ):
resampler = sitk.ResampleImageFilter()
resampler.SetReferenceImage(fixed);
resampler.SetInterpolator(sitk.sitkLinear)
resampler.SetDefaultPixelValue(100)
resampler.SetTransform(outTx)
out = resampler.Execute(moving)
simg1 = sitk.Cast(sitk.RescaleIntensity(fixed), sitk.sitkUInt8)
simg2 = sitk.Cast(sitk.RescaleIntensity(out), sitk.sitkUInt8)
cimg = sitk.Compose(simg1, simg2, simg1//2.+simg2//2.)
sitk.Show( cimg, "ImageRegistration1 Composition" )
```
The execution with `python ImageRegistrationMethod1.py image_ref.tif image_moving.tif res.tif` seems to work well until it comes to writing the `res.tif` image, which triggers the following error:
```
TIFFReadDirectory: Warning, Unknown field with tag 50838 (0xc696)
encountered.
TIFFReadDirectory: Warning, Unknown field with tag 50839 (0xc697)
encountered.
TIFFReadDirectory: Warning, Unknown field with tag 50838 (0xc696)
encountered.
TIFFReadDirectory: Warning, Unknown field with tag 50839 (0xc697)
encountered.
-------
itk::simple::Transform
TranslationTransform (0x7fb2a54ec0f0)
RTTI typeinfo: itk::TranslationTransform<double, 3u>
Reference Count: 2
Modified Time: 2196
Debug: Off
Object Name:
Observers:
none
Offset: [14.774, 10.57, 18.0612]
Optimizer stop condition: RegularStepGradientDescentOptimizerv4: Step
too small after 22 iterations. Current step (0.0078125) is less than
minimum step (0.01).
Iteration: 23
Metric value: 2631.5128202930223
Traceback (most recent call last):
File "ImageRegistrationMethod1.py", line 57, in <module>
sitk.WriteTransform(outTx, sys.argv[3])
File "/usr/local/lib/python3.7/site-
packages/SimpleITK/SimpleITK.py", line 5298, in
WriteTransform
return _SimpleITK.WriteTransform(transform, filename)
RuntimeError: Exception thrown in SimpleITK WriteTransform:
itk::ERROR: TransformFileWriterTemplate(0x7faa4fcfce70): Could not
create Transform IO
object for writing file /Users/anass/Desktop/res.tif
Tried to create one of the following:
HDF5TransformIOTemplate
HDF5TransformIOTemplate
MatlabTransformIOTemplate
MatlabTransformIOTemplate
TxtTransformIOTemplate
TxtTransformIOTemplate
You probably failed to set a file suffix, or
set the suffix to an unsupported type.
```
I really don't know why I'm getting this error since the code was built from source. Any help please?
|
2020/06/01
|
[
"https://Stackoverflow.com/questions/62142223",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11107590/"
] |
The result of running the ImageRegistrationMethod1 example is a transform. SimpleITK supports a number of file formats for transforms, including a text file (.txt), a Matlab file (.mat) and an HDF5 transform (.hdf5). That does not include a .tif file, which is an image file, not a transform.
You can read more about it on the Transform section of the SimpleITK IO page:
<https://simpleitk.readthedocs.io/en/master/IO.html#transformations>
In the case of this example, the transform produced is a 3-d translation. So if you've chosen a .txt file output, you can see the X, Y and Z translations on the Parameters line.
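So, writing the transform with a supported suffix instead of `.tif` would look like this (sketch):

```python
# a .txt suffix selects the text transform IO; .mat and .hdf5 also work
sitk.WriteTransform(outTx, "res.txt")
```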
|
If you want to write a DICOM image as output instead, try this one:
```
writer = sitk.ImageFileWriter()  # assumed setup; `writer` was otherwise undefined
writer.SetFileName(os.path.join('transformed.dcm'))
writer.Execute(cimg)
```
| 8,078
|
73,277,276
|
I know how to add a function to a python dict:
```
def burn(theName):
return theName + ' is burning'
kitchen = {'name': 'The Kitchen', 'burn_it': burn}
print(kitchen['burn_it'](kitchen['name']))
### output: "the Kitchen is burning"
```
but is there any way to reference the dictionary's own 'name' value without having to name the dict itself specifically? To refer to the dictionary as itself?
With other languages in mind I was thinking there might be something like
```
print(kitchen['burn_it'](__self__['name']))
```
or
```
print(kitchen['burn_it'](__this__['name']))
```
where the function could access the 'name' key of the dictionary it was inside.
I have googled quite a bit but I keep finding people who want to do this:
```
kitchen = {'name': 'Kitchen', 'fullname': 'The ' + ['name']}
```
where they're trying to access the dictionary key before they've finished initialising it.
TIA
|
2022/08/08
|
[
"https://Stackoverflow.com/questions/73277276",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1914833/"
] |
You can extend `dict` object with your custom class, like this:
```py
class MyDict(dict):
def __init__(self, *args, **kwargs):
self["burn_it"] = self.burn
super().__init__(*args, **kwargs)
def burn(self):
return self["name"] + " is burning"
kitchen = MyDict({'name': 'The Kitchen'})
print(kitchen['burn_it']()) # The Kitchen is burning
print(kitchen.burn()) # The Kitchen is burning
```
|
You cannot know which object references the function.
As a simple example, imagine the following:
```
def burn(theName):
return theName + ' is burning'
kitchen = {'name': 'The Kitchen', 'burn_it': burn}
garage = {'name': 'The Garage', 'burn_it': burn}
```
`burn` is referenced both in `kitchen` and `garage`, how could it "know" which dictionary is referencing it?
```
id(kitchen['burn_it']) == id(garage['burn_it'])
# True
```
What you can do (if you don't want a custom object) is to use a function:
```
def action(d):
return d['burn_it'](d['name'])
action(kitchen)
# 'The Kitchen is burning'
action(garage)
# 'The Garage is burning'
```
with a custom object:
```
class CustomDict(dict):
def __init__(self, *arg, **kwargs):
super().__init__(*arg, **kwargs)
def action(self):
return self['burn_it'](self['name'])
kitchen = CustomDict({'name': 'The Kitchen', 'burn_it': burn})
kitchen.action()
# 'The Kitchen is burning'
```
| 8,079
|
56,693,576
|
I am trying to access, outside the for loop, a variable defined inside an if statement in the for loop, but I am getting an 'UnboundLocalError'.
I have tried assigning `lambdaPriceUsWest2 = None` as suggested here:
[Python Get variable outside the loop](https://stackoverflow.com/questions/25406399/python-get-variable-outside-the-loop)
Also, tried specifying `global lambdaPriceUsWest2` inside if statement with `lambdaPriceUsWest2 = None` before the code snippet.
```
for x in range(len(response['PriceList'])):
priceList=json.loads(response['PriceList'][x])
if priceList['product']['sku'] == 'DU9X9ZR8C8DYH3Y9':
lambdaPriceUsWest2= priceListpriceList['product']['sku']['USD']
if priceList['product']['sku'] == 'CVE47QZ9RSF8DTEM':
lambdaPriceUsEast2= priceListpriceList['product']['sku']['USD']
break
logger.debug(lambdaPriceUsWest2)
```
Expected Result:
0.055512 (similar value)
Actual Result :
Error:
```
"errorMessage": "local variable 'lambdaPriceUsWest2' referenced before assignment",
"errorType": "UnboundLocalError"
```
|
2019/06/20
|
[
"https://Stackoverflow.com/questions/56693576",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6921304/"
] |
The best way is to initialize that variable before the for loop. For example:
```
lambdaPriceUsWest2 = ""
```
|
Just define a variable outside the loop and update its value:
```
local_val =''
for x in range(len(response['PriceList'])):
priceList=json.loads(response['PriceList'][x])
if priceList['product']['sku'] == 'DU9X9ZR8C8DYH3Y9':
lambdaPriceUsWest2= priceListpriceList['product']['sku']['USD']
local_val = lambdaPriceUsWest2
if priceList['product']['sku'] == 'CVE47QZ9RSF8DTEM':
lambdaPriceUsEast2= priceListpriceList['product']['sku']['USD']
break
logger.debug(local_val)
```
| 8,080
|
59,125,889
|
```
npm install expo-cli --global
```
I got this following error:
```
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! envsub@3.1.0 postinstall: `test -d .git && cp gitHookPrePush.sh .git/hooks/pre-push || true`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the envsub@3.1.0 postinstall script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\User\AppData\Roaming\npm-cache\_logs\2019-12-01T12_11_45_118Z-debug.log
```
node and npm versions:
```
node --version
v12.13.1
npm --version
6.12.1
```
I am trying to install expo-cli on Windows 10, according to its official site, with `npm install expo-cli --global`, and I got the error above. The relevant part of the debug log:
```
43056 verbose Windows_NT 10.0.18362
43057 verbose argv "C:\Program Files\nodejs\node.exe" "C:\Program Files\nodejs\node_modules\npm\bin\npm-cli.js" "install" "expo-cli" "--global"
43058 verbose node v12.13.1
43059 verbose npm v6.12.1
43060 error code ELIFECYCLE
43061 error errno 1
43062 error envsub@3.1.0 postinstall: `test -d .git && cp gitHookPrePush.sh .git/hooks/pre-push || true`
43062 error Exit status 1
43063 error Failed at the envsub@3.1.0 postinstall script.
43063 error This is probably not a problem with npm. There is likely additional logging output above.
43064 verbose exit [ 1, true ]
```
I am using Python version:
```
python --version
Python 3.8.0
```
What is your suggestion?
|
2019/12/01
|
[
"https://Stackoverflow.com/questions/59125889",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5982462/"
] |
Just try running `npm install expo-cli --global` on Git Bash. It worked for me.
|
[I fixed this problem](https://stackoverflow.com/questions/59124830/reactnative-code-elifecycle-error-when-installing-expo-cli/59126514#59126514) :
```
1- Download and install Git SCM
2- Download Visual Studio Community HERE and install a Custom Installation, selecting ONLY the following packages: VISUAL C++, PYTHON TOOLS FOR VISUAL STUDIO and MICROSOFT WEB DEVELOPER TOOLS
3- Download and install Python 2.7.x
4- Register a Environment Variable with name: GYP_MSVS_VERSION with this value: 2015
```
After these installations, I think this part is important:
>
> **postinstall** script of **envsub** depends on built-in **unix shell** commands. So any shell compatible with unix shell should works, like **Git BASH**
>
>
>
So run `npm install expo-cli --global` after above installation on `Git BASH`
| 8,082
|
50,751,484
|
I trained a TensorFlow model on a GPU cluster and saved the model using
```
saver = tf.train.Saver()
saver.save(sess, config.model_file, global_step=global_step)
```
and now I am trying to restore the model with
```
saver = tf.train.import_meta_graph('model-1000.meta')
saver.restore(sess,tf.train.latest_checkpoint(save_path))
```
for evaluation, on a different system. The issue is that `saver.restore`
yields the following error:
```
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1664, in <module>
main()
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1658, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/Applications/PyCharm.app/Contents/helpers/pydev/pydevd.py", line 1068, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Applications/PyCharm.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/jonpdeaton/Developer/BraTS18-Project/segmentation/evaluate.py", line 205, in <module>
main()
File "/Users/jonpdeaton/Developer/BraTS18-Project/segmentation/evaluate.py", line 162, in main
restore_and_evaluate(save_path, model_file, output_dir)
File "/Users/jonpdeaton/Developer/BraTS18-Project/segmentation/evaluate.py", line 127, in restore_and_evaluate
saver.restore(sess, tf.train.latest_checkpoint(save_path))
File "/Users/jonpdeaton/anaconda3/envs/BraTS/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 1857, in latest_checkpoint
if file_io.get_matching_files(v2_path) or file_io.get_matching_files(
File "/Users/jonpdeaton/anaconda3/envs/BraTS/lib/python3.6/site-packages/tensorflow/python/lib/io/file_io.py", line 337, in get_matching_files
for single_filename in filename
File "/Users/jonpdeaton/anaconda3/envs/BraTS/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 519, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: /afs/cs.stanford.edu/u/jdeaton/dfs/unet; No such file or directory
```
It seems as though there are some paths that were stored in the model or `checkpoint` file form the system that it was trained on, that are no longer valid on the system that I am doing evaluation on. How do I restore a model (for evaluation) on a different machine after having copied the `model-X.meta`, `model-X.index` and `checkpoint` files?
|
2018/06/07
|
[
"https://Stackoverflow.com/questions/50751484",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6263317/"
] |
By default, the `Saver` object will write the absolute model checkpoint paths into the `checkpoint` file. So the path returned by `tf.train.latest_checkpoint(save_path)` is the absolute path on your old machine.
Temporary solution:
1. Pass the actual model file path directly to the `restore` method rather than the result of `tf.train.latest_checkpoint`.
2. Manually edit the `checkpoint` file, which is a simple text file.
Long term solution:
```
saver = tf.train.Saver(save_relative_paths=True)
```
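For the temporary fix, passing the checkpoint prefix directly might look like this (`model-1000` is just the prefix from the question; adjust to your files):

```python
import os
# bypass the stale absolute path recorded in the `checkpoint` file
saver.restore(sess, os.path.join(save_path, "model-1000"))
```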
|
Open up the checkpoint file with your favorite text editor and simply change the absolute paths found therein to just filenames.
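For reference, the `checkpoint` file is plain text of roughly this shape (the paths shown are illustrative), so the fix is to shorten the absolute paths to bare prefixes like `model-1000`:

```
model_checkpoint_path: "/old/machine/path/model-1000"
all_model_checkpoint_paths: "/old/machine/path/model-1000"
```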
| 8,083
|
25,201,504
|
I'm trying to minimize a function that returns a vector of values, and here is the error:
>
> setting an array element with a sequence
>
>
>
Code:
```
P = np.matrix([[0.3, 0.1, 0.2], [0.01, 0.4, 0.2], [0.0001, 0.3, 0.5]])
Ps = np.array([10,14,5])
def objective(x):
x = np.array([x])
res = np.square(Ps - np.dot(x, P))
return res
def main():
x = np.array([10, 11, 15])
print minimize(objective, x, method='Nelder-Mead')
```
At these values of P, Ps, and x, the function returns [[ 47.45143225 16.81 44.89 ]].
Thank you for any advice
**UPD (full traceback)**
```
Traceback (most recent call last):
File "<ipython-input-125-9649a65940b0>", line 1, in <module>
runfile('C:/Users/Roark/Documents/Python Scripts/optimize.py', wdir='C:/Users/Roark/Documents/Python Scripts')
File "C:\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 585, in runfile
execfile(filename, namespace)
File "C:/Users/Roark/Documents/Python Scripts/optimize.py", line 28, in <module>
main()
File "C:/Users/Roark/Documents/Python Scripts/optimize.py", line 24, in main
print minimize(objective, x, method='Nelder-Mead')
File "C:\Anaconda\lib\site-packages\scipy\optimize\_minimize.py", line 413, in minimize
return _minimize_neldermead(fun, x0, args, callback, **options)
File "C:\Anaconda\lib\site-packages\scipy\optimize\optimize.py", line 438, in _minimize_neldermead
fsim[0] = func(x0)
ValueError: setting an array element with a sequence.
```
**UPD2: function should be minimized (Ps is a vector)**

|
2014/08/08
|
[
"https://Stackoverflow.com/questions/25201504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2824962/"
] |
Your objective function needs to return a scalar value, not a vector. You probably want to return the *sum* of squared errors rather than the vector of squared errors:
```
def objective(x):
res = ((Ps - np.dot(x, P)) ** 2).sum()
return res
```
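With that change, the original Nelder-Mead call runs to completion; a self-contained sketch of the fixed setup:

```python
import numpy as np
from scipy.optimize import minimize

P = np.array([[0.3, 0.1, 0.2], [0.01, 0.4, 0.2], [0.0001, 0.3, 0.5]])
Ps = np.array([10, 14, 5])

def objective(x):
    return ((Ps - np.dot(x, P)) ** 2).sum()  # scalar sum of squared errors

res = minimize(objective, np.array([10, 11, 15]), method='Nelder-Mead')
print(res.x)
```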
|
Use [`least_squares`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html). This will require to modify the objective a bit to return differences instead of squared differences:
```py
import numpy as np
from scipy.optimize import least_squares
P = np.matrix([[0.3, 0.1, 0.2], [0.01, 0.4, 0.2], [0.0001, 0.3, 0.5]])
Ps = np.array([10,14,5])
def objective(x):
x = np.array([x])
res = Ps - np.dot(x, P)
return np.asarray(res).flatten()
def main():
x = np.array([10, 11, 15])
print(least_squares(objective, x))
```
Result:
```
active_mask: array([0., 0., 0.])
cost: 5.458917464129402e-28
fun: array([1.59872116e-14, 2.84217094e-14, 5.32907052e-15])
grad: array([-8.70414856e-15, -1.25943700e-14, -1.11926469e-14])
jac: array([[-3.00000002e-01, -1.00000007e-02, -1.00003682e-04],
[-1.00000001e-01, -3.99999999e-01, -3.00000001e-01],
[-1.99999998e-01, -1.99999999e-01, -5.00000000e-01]])
message: '`gtol` termination condition is satisfied.'
nfev: 4
njev: 4
optimality: 1.2594369966691647e-14
status: 1
success: True
x: array([ 31.95419775, 41.56815698, -19.40894189])
```
| 8,084
|
31,580,319
|
I use the Ansible module `fetch` to download a large file, say 2GB. Then I got the following error message. Ansible seems unable to deal with large files.
```
fatal: [x.x.x.x] => failed to parse:
SUDO-SUCCESS-ucnhswvujwylacnodwyyictqtmrpabxp
Traceback (most recent call last):
File "/home/xxx/.ansible/tmp/ansible-tmp-1437624638.74-184884633599028/slurp", line 1167, in <module>
main()
File "/home/xxx/.ansible/tmp/ansible-tmp-1437624638.74-184884633599028/slurp", line 67, in main
data = base64.b64encode(file(source).read())
File "/usr/lib/python2.7/base64.py", line 53, in b64encode
encoded = binascii.b2a_base64(s)[:-1]
MemoryError
```
|
2015/07/23
|
[
"https://Stackoverflow.com/questions/31580319",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2605599/"
] |
<https://github.com/ansible/ansible/issues/11702>
This is an Ansible bug which has been solved in newer versions.
|
Looks like the remote server you're trying to fetch from is running out of memory during the base64 encoding process. Perhaps try the synchronize module instead (which will use rsync); fetch isn't really designed to work with large files.
| 8,086
|
15,811,082
|
I am developing some python packages and I do want to perform proper testing before releasing them to PyPi.
This would require running the unittests across
* different python versions: 2.5, 2.6, 2.7, 3.2
* different operating systems: OS X, Debian, Ubuntu and Windows
Right now I am using pytest
Question: how can I implement this easily, preferably making the results publicly available and integrated with GitHub, so anyone who pushes will know the results?
Note: I am already aware of <https://travis-ci.org/>, but this seems to be missing the cross-platform part, which is essential in this case.
Another option I was considering was to use Jenkins, but I don't know how to provide the matrix testing on it.
|
2013/04/04
|
[
"https://Stackoverflow.com/questions/15811082",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/99834/"
] |
I have used Jenkins, and I would recommend it. It has a plethora of plugins, and is very configurable.
I have used it for running projects over windows/linux/mac/mobile platforms, for sanity, unit, component, and regression tests.
It can support chaining of projects and tests, fingerprinting of items to be monitored as they progress your testing environment and also you can set up users and keep track of changes.
You can use it for production and for testing at the same time, hooking it up to your git repository, any change you make is automatically run through all the gauntlets you want.
|
You can use [`tox`](http://codespeak.net/tox/index.html) to automate setting up virtual environments and running your tests across Python versions:
```
[tox]
envlist = py25,py26,py27,py32
[testenv]
commands=py.test
```
Tox supports Python versions 2.4 and up, as well as Jython and PyPy.
If you want to look at a real-world project that uses `tox`, take a look at the [`zope.configuration` `tox.ini` configuration](https://github.com/zopefoundation/zope.configuration/blob/master/tox.ini). The package includes excellent [documentation on how to run the `tox` tests](http://docs.zope.org/zope.configuration/hacking.html#running-tests-on-multiple-python-versions-via-tox). These tests are automatically run by the [Zope nightly test builders](http://docs.zope.org/zopetoolkit/process/buildbots.html#the-nightly-builds).
Configuring tox to [run under Jenkins](http://tox.readthedocs.org/en/latest/example/jenkins.html) is trivial and fully documented.
| 8,089
|
51,919,720
|
I've been unable to use Pyenv to install Python on macOS (10.13.6) and have exhausted advice about common build problems.
pyenv-doctor reports: **OpenSSL development header is not installed.** Reinstallation of OpenSSL, as suggested in various related GitHub issues, has not worked, nor have various flag settings, e.g. (in various combinations):
```
export CFLAGS="-I$(brew --prefix openssl)/include"
export CPPFLAGS="-I$(brew --prefix openssl)/include"
export LDFLAGS="-L$(brew --prefix openssl)/lib"
export PKG_CONFIG_PATH="/usr/local/opt/openssl/lib/pkgconfig/"
export PATH="/usr/local/opt/openssl@1.1/bin:$PATH"
```
(Tried these in command line put as well.)
(Tried both OpenSSL 1.02p and 1.1, via Homebrew)
Tried
```
brew install readline xz
```
and
```
$ CFLAGS="-I$(xcrun --show-sdk-path)/usr/include" pyenv install 3.6.6
```
and
```
$ CFLAGS="-I$(brew --prefix openssl)/include -I$(xcrun --show-sdk-path)/usr/include" LDFLAGS="-L$(brew --prefix openssl)/lib" pyenv install 3.6.6
```
and
```
xcode-select --install
(or via downloadable command line tools installer for reinstallation)
```
No luck.
```
brew link --force openssl
```
is disallowed (error message says to use flags).
Also tried:
```
$(brew --prefix)/opt/openssl/bin/openssl
```
and tried the OpenSSL/macOS advice here:
```
https://solitum.net/openssl-os-x-el-capitan-and-brew/
```
$PATH shows:
```
/usr/local/opt/openssl/bin:/Users/tc/google-cloud-sdk/bin:/Users/tc/Code/git/flutter/bin:/usr/local/sbin:/usr/local/heroku/bin:/Library/Frameworks/Python.framework/Versions/2.7/bin:/Users/tc/google-cloud-sdk/bin:/Users/tc/Code/git/flutter/bin:/usr/local/sbin:/usr/local/heroku/bin:/Library/Frameworks/Python.framework/Versions/2.7/bin:/Users/tc/google-cloud-sdk/bin:/Users/tc/.nvm/versions/node/v8.11.3/bin:/Users/tomclaburn/Code/git/flutter/bin:/usr/local/sbin:/usr/local/heroku/bin:/Library/Frameworks/Python.framework/Versions/2.7/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin:/Applications/Postgres.app/Contents/Versions/latest/bin:/usr/local/mongodb/bin:/usr/local/opt/openssl/bin/openssl:/usr/local/mongodb/bin:/usr/local/mongodb/bin
```
and .bash\_profile contains:
```
if [ -d "${PYENV_ROOT}" ]; then
export PATH="${PYENV_ROOT}/bin:${PATH}"
eval "$(pyenv init -)"
#eval "$(pyenv virtualenv-init -)"
fi
```
I suspect there's a missing/incorrect path or link but I've been unable to determine what it might be. Any advice would be welcome.
Pyenv error output:
BUILD FAILED (OS X 10.13.6 using python-build 20180424)
...
Last 10 log lines:
```
checking size of long... 0
checking size of long long... 0
checking size of void *... 0
checking size of short... 0
checking size of float... 0
checking size of double... 0
checking size of fpos_t... 0
checking size of size_t... configure: error: in `/var/folders/jb/h01vxbqs6z93h_238q61d48h0000gn/T/python-build.20180819081705.3009/Python-3.6.6':
configure: error: cannot compute sizeof (size_t)
```
pyenv-doctor error output:
```
checking for gcc... clang
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether clang accepts -g... yes
checking for clang option to accept ISO C89... none needed
checking for rl_gnu_readline_p in -lreadline... yes
checking for readline/readline.h... no
checking for SSL_library_init in -lssl... yes
checking how to run the C preprocessor... clang -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... no
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... no
checking for string.h... no
checking for memory.h... no
checking for strings.h... no
checking for inttypes.h... no
checking for stdint.h... no
checking for unistd.h... yes
checking openssl/ssl.h usability... no
checking openssl/ssl.h presence... no
checking for openssl/ssl.h... no
configure: error: OpenSSL development header is not installed.
```
|
2018/08/19
|
[
"https://Stackoverflow.com/questions/51919720",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7322742/"
] |
If this is the same issue as me, it's because there's headers in your path that shouldn't be there. Run `brew doctor` and you would see it complain. To fix it you can do:
```
mkdir /tmp/includes
brew doctor 2>&1 | grep "/usr/local/include" | awk '{$1=$1;print}' | xargs -I _ mv _ /tmp/includes
```
|
After applying Kit's answer; I had to do the following to overcome the fact that I also installed `openssl` with homebrew:
```
CFLAGS="-I$(brew --prefix openssl)/include" \
LDFLAGS="-L$(brew --prefix openssl)/lib" \
pyenv doctor
```
That got me working.
Also found this [reference](https://github.com/pyenv/pyenv/wiki/Common-build-problems) useful for common build problems.
| 8,090
|
46,492,510
|
I'm new to Python and I'm trying to create a list of lists from a text file. The task seems easy, but I don't know why it's not working with my code.
I have the following lines in my text file:
```
word1,word2,word3,word4
word2,word3,word1
word4,word5,word6
```
I want to get the following output:
```
[['word1','word2','word3','word4'],['word2','word3','word1'],['word4','word5','word6']]
```
The following is the code I tried:
```
mylist =[]
list =[]
with open("file.txt") as f:
for line in f:
mylist = line.strip().split(',')
list.append(mylist)
```
|
2017/09/29
|
[
"https://Stackoverflow.com/questions/46492510",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4699562/"
] |
You can iterate like this:
```
f = [i.strip('\n').split(',') for i in open('file.txt')]
```
|
Your code works fine. If it is creating issues on your system, you can also do this in one line:
```
with open("file.txt") as f:
print([i.strip().split(',') for i in f])
```
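Alternatively, the standard-library `csv` module handles the splitting for you (a sketch):

```python
import csv

with open("file.txt", newline="") as f:
    rows = list(csv.reader(f))
print(rows)  # [['word1', 'word2', 'word3', 'word4'], ...]
```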
| 8,091
|
61,327,413
|
I have a Python client and a Python server, and the commands work perfectly. Now I want to build the interface, which needs a variable (string) from the server file, and I encountered a problem.
**my client.py file**
```
import socket
s=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((socket.gethostname(),1234))
s.send(bytes("command1","utf-8"))
```
**my server.py file:**
```
import socket
while (1):
s=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((socket.gethostname(),1234))
s.listen(5)
clientsocket, address=s.accept()
msg=clientsocket.recv(1024)
message=msg.decode("utf-8") # I need to pass this to the other file
print(message)
```
**the file in which I need to import the message string from the server.py file:**
```
from tkinter import *
from server import message
def function():
#do some stuff
Menu_Win = Tk()
photo = PhotoImage(file=r"path_to_local_file.png")
Background_Main = Canvas(Menu_Win, width=1980, height=1080, bg="white")
Background_Main.pack()
Background_Main.create_image(0,80, image=photo, anchor='nw')
if message=="command1": #this would be the variable from the server file
function()
Menu_Win.mainloop()
```
I tried to use **from server import message**, but it gives me this error: **OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted**. I found out that it gives me this error only when I am continuously running the server.py file, and I think that when the second file imports the server, it imports the socket too.
UPDATE1: deleted
UPDATE2: deleted
UPDATE3:
I have found myself a solution, by using the threading library and running the echoing server on one thread and the rest of the code (GUI) on another:
```
import socket
from tkinter import*
import threading
message="dummy variable"
def test_button():
print(message)
root=Tk()
Canvas(root, width=300, height=100).pack()
Button(root,text=("Server"), command=test_button).pack()
def get_command():
while 1:
global message
s=socket.socket()
s.bind((socket.gethostname(),1234))
s.listen(5)
clientsocket, address=s.accept()
msg=clientsocket.recv(1024)
message=msg.decode("utf-8")
t = threading.Thread(target=get_command)
t.start()
def my_mainloop(): #to actually see if the command is updated in real time
print(message)
root.after(1000, my_mainloop)
root.after(1000, my_mainloop)
root.mainloop()
```
Thank you for all the support :)
|
2020/04/20
|
[
"https://Stackoverflow.com/questions/61327413",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12305440/"
] |
Here's your exact same code using a file system object instead to do the folder work inside the loop. I didn't test it, but it illustrates what I am talking about in my comment above. You should be able to get it working using this:
```
Sub Unzip()
Dim oApplication As Object
Dim MyFolder As String
Dim MyFile As String
Dim ZipFile As Variant
Dim ExtractTo As Variant
' create the fso
Dim fso as Object
Set fso = CreateObject("Scripting.FileSystemObject")
Application.ScreenUpdating = False
'Cell B2 is the folder path which contains all zip file
MyFolder = Range("B2")
MyFile = Dir(MyFolder & "\*.zip")
ZipFile = Range("C2")
ExtractTo = Range("B3")
Do While MyFile <> ""
'Cell C2 is updated with a zip file name via loop function
Range("C2") = MyFolder & "\" & MyFile
' use the fso to check for and create the folder
' this way you dont have to use the DIR function again, which was messing things up
If Not fso.FolderExists(Range("B3")) Then
fso.CreateFolder(Range("B3"))
End If
Set oApplication = CreateObject("Shell.Application")
oApplication.Namespace(ExtractTo).CopyHere oApplication.Namespace(ZipFile).Items
DoEvents
MyFile = Dir
Loop
Application.ScreenUpdating = True
End Sub
```
You may also benefit (speed-wise, depending on how many zip files there are) from moving this line outside of the loop and putting it at the top, where the fso object is created.
```
Set oApplication = CreateObject("Shell.Application")
```
|
Your code is failing because you are using `Dir` within the loop to check the existence of the folder to extract to. Instead, move that piece of code to outside the loop:
```
Sub Unzip()
    Dim oApplication As Object
    Dim MyFolder As String
    Dim MyFile As String
    Dim ExtractTo As Variant

    Application.ScreenUpdating = False

    'Cell B3 is the folder to extract to; create it once, outside the loop
    If Len(Dir(Range("B3"), vbDirectory)) = 0 Then
        MkDir Range("B3")
    End If

    'Cell B2 is the folder path which contains all zip files
    MyFolder = Range("B2")
    If Right(MyFolder, 1) <> "\" Then MyFolder = MyFolder & "\"
    MyFile = Dir(MyFolder, vbNormal)
    ExtractTo = Range("B3")

    Do While MyFile <> ""
        'Cell C2 is updated with a zip file name via the loop
        If Right(MyFile, 3) = "zip" Then
            Range("C2") = MyFolder & MyFile
            Set oApplication = CreateObject("Shell.Application")
            oApplication.Namespace(ExtractTo).CopyHere oApplication.Namespace(MyFolder & MyFile).Items
            DoEvents
        End If
        MyFile = Dir
    Loop

    Application.ScreenUpdating = True
End Sub
```
Regards,
| 8,092
|
29,859,173
|
I am following the example to deploy sample python application to bluemix
[BLUEMIX-PYTHON-FLASK-SAMPLE](https://github.com/IBM-Bluemix/bluemix-python-flask-sample)
Created project successfully
Cloned repository successfully
Configured pipeline successfully
Deploy to BLUEMIX failed.
I checked the error in deployment log, seem to complain about `memory limit exceeded, Server error, status code:400, error code 100005`.
How do I add the statement in the sample app to output the memory requirement so that I will know how much it is needed?
Thanks in advance for any help.
|
2015/04/24
|
[
"https://Stackoverflow.com/questions/29859173",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2639529/"
] |
The memory limit is controlled by the memory value in the manifest.yml file in the root of the project. You don't need to have this manifest.yml file present, as Bluemix will define defaults for you. In this case the memory allocation would be 1 GB, since that is the default, which is really too much for a sample app like this.
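For reference, a minimal manifest.yml along these lines (the application name is a placeholder, not taken from the repository):
```yaml
applications:
- name: my-python-flask-sample   # placeholder name
  memory: 128M
```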
I have just committed a change to the github project [bluemix-python-flask-sample](https://github.com/IBM-Bluemix/bluemix-python-flask-sample) which adds the [manifest.yml](https://github.com/IBM-Bluemix/bluemix-python-flask-sample/blob/master/manifest.yml) file with a memory value of 128M. You should pull the new changes or just use the 'Deploy to Bluemix' button either on the github page or here:
[](https://bluemix.net/deploy?repository=https://github.com/IBM-Bluemix/bluemix-python-flask-sample)
|
You probably have exceeded the max app limit on your Bluemix account.
Log in to your Bluemix account and check whether the full app memory limit is utilized. If you have reached your limit, you might have to remove one or more apps you are not using, depending on how much memory is needed.
In the python-flask-sample example guide, the following is mentioned:
`cf push your-app-name -m 128M`
You can also lower the memory for your app by running the following.
`cf scale your-app-name -m 128M`
So 128 M will be allocated to this app while deploying on Bluemix, so ideally freeing 128 M should be good enough to get rid of the error you are seeing. This memory limit can be changed as needed.
Additionally, this limit is only during the free trial phase. Once you enter your credit card there is no limit.
| 8,093
|
18,950,409
|
I have a two dimensional associative array (dictionary). I'd like to iterate over the first dimension using a for loop, and extract the second dimension's dictionary at each iteration.
For example:
```
#!/usr/bin/python
doubleDict = dict()
doubleDict['one'] = dict()
doubleDict['one']['type'] = 'animal'
doubleDict['one']['name'] = 'joe'
doubleDict['one']['species'] = 'monkey'
doubleDict['two'] = dict()
doubleDict['two']['type'] = 'plant'
doubleDict['two']['name'] = 'moe'
doubleDict['two']['species'] = 'oak'
for thing in doubleDict:
    print thing
    print thing['type']
    print thing['name']
    print thing['species']
```
My desired output:
```
{'type': 'plant', 'name': 'moe', 'species': 'oak'}
plant
moe
oak
```
My actual output:
```
two
Traceback (most recent call last):
File "./test.py", line 16, in <module>
print thing['type']
TypeError: string indices must be integers, not str
```
What am I missing?
PS I'm aware I can do a `for k,v in doubleDict`, but I'm *really* trying to avoid having to do a long `if k == 'type': ... elif k == 'name': ...` statement. I'm looking to be able to call `thing['type']` directly.
|
2013/09/23
|
[
"https://Stackoverflow.com/questions/18950409",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1174102/"
] |
A for-loop over a `dict` iterates over its keys, not its values.
To iterate over the values, do:
```
for thing in doubleDict.itervalues():
    print thing
    print thing['type']
    print thing['name']
    print thing['species']
```
I used your exact same code, but added the `.itervalues()` at the end which means: "I want to iterate over the values".
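If you also need the key alongside each inner dictionary, `iteritems()` yields both at once; a small sketch continuing the example above (Python 2 style, like the rest of this question):
```
for key, thing in doubleDict.iteritems():
    print key, thing['name']
```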
|
When you iterate through a dictionary, you iterate through its keys, not its values. To get the nested values, you have to do:
```
for thing in doubleDict:
    print doubleDict[thing]
    print doubleDict[thing]['type']
    print doubleDict[thing]['name']
    print doubleDict[thing]['species']
```
| 8,095
|
48,000,225
|
I have two dataframes as follows:
`leader`:
```none
0 11
1 8
2 5
3 9
4 8
5 6
[6065 rows x 2 columns]
```
`DatasetLabel`:
```none
0 1 .... 7 8 9 10 11 12
0 A J .... 1 2 5 NaN NaN NaN
1 B K .... 3 4 NaN NaN NaN NaN
[4095 rows x 14 columns]
```
In the `DatasetLabel` DataFrame, columns 0 to 6 contain information about the data, and columns 7 to 12 are indexes that refer to the first column of the `leader` DataFrame.
I want to create a dataset where, instead of the indexes in the `DatasetLabel` DataFrame, I have the value of each index from the `leader` DataFrame, i.e. `leader.iloc[index, 1]`.
How can I do this using Python features?
The output should look like:
`DatasetLabel`:
```none
0 1 .... 7 8 9 10 11 12
0 A J .... 8 5 6 NaN NaN NaN
1 B K .... 9 8 NaN NaN NaN NaN
```
I have come up with the following, but I get an error:
```py
for column in DatasetLabel.ix[:, 8:13]:
DatasetLabel[DatasetLabel[column].notnull()] = leader.iloc[DatasetLabel[DatasetLabel[column].notnull()][column].values, 1]
```
Error:
```none
ValueError: Must have equal len keys and value when setting with an iterable
```
|
2017/12/28
|
[
"https://Stackoverflow.com/questions/48000225",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3806649/"
] |
You can use `apply` to index into `leader` and exchange values with `DatasetLabel`, although it's not very pretty.
One issue is that Pandas won't let us index with `NaN`. Converting to `str` provides a workaround. But that creates a second issue, namely, column `9` is of type `float` (because `NaN` is `float`), so `5` becomes `5.0`. Once it's a string, that's `"5.0"`, which will fail to match the index values in `leader`. We can remove the `.0`, and then this solution will work - but it's a bit of a hack.
With `DatasetLabel` as:
```
Unnamed:0 0 1 7 8 9 10 11 12
0 0 A J 1 2 5.0 NaN NaN NaN
1 1 B K 3 4 NaN NaN NaN NaN
```
And `leader` as:
```
0 1
0 0 11
1 1 8
2 2 5
3 3 9
4 4 8
5 5 6
```
Then:
```
cols = ["7","8","9","10","11","12"]
updated = DatasetLabel[cols].apply(
lambda x: leader.loc[x.astype(str).str.split(".").str[0], 1].values, axis=1)
updated
7 8 9 10 11 12
0 8.0 5.0 6.0 NaN NaN NaN
1 9.0 8.0 NaN NaN NaN NaN
```
Now we can `concat` the unmodified columns (which we'll call `original`) with `updated`:
```
original_cols = DatasetLabel.columns[~DatasetLabel.columns.isin(cols)]
original = DatasetLabel[original_cols]
pd.concat([original, updated], axis=1)
```
Output:
```
Unnamed:0 0 1 7 8 9 10 11 12
0 0 A J 8.0 5.0 6.0 NaN NaN NaN
1 1 B K 9.0 8.0 NaN NaN NaN NaN
```
Note: It may be clearer to use `concat` here, but here's another, cleaner way of merging `original` and `updated`, using `assign`:
```
DatasetLabel.assign(**updated)
```
|
The [source code](https://github.com/pandas-dev/pandas/blob/v1.5.2/pandas/core/indexing.py#L1828-L1882) shows that this error occurs when you try to broadcast a list-like object (numpy array, list, set, tuple etc.) to multiple columns or rows but didn't specify the index correctly. Of course, list-like objects don't have custom indices like pandas objects, so it usually causes this error.
Solutions to common cases:
1. **You want to assign the same values across multiple columns at once.** In other words, you want to change the values of certain columns using a list-like object whose (a) length doesn't match the number of columns or rows and (b) dtype doesn't match the dtype of the columns they are being assigned to.1 An illustration may make it clearer. If you try to make the transformation below:
[](https://i.stack.imgur.com/kAyhC.png)
using a code similar to the one below, this error occurs:
```py
df = pd.DataFrame({'A': [1, 5, 9], 'B': [2, 6, 10], 'C': [3, 7, 11], 'D': [4, 8, 12]})
df.loc[:2, ['C','D']] = [100, 200.2, 300]
```
**Solution:** Duplicate the list/array/tuple, transpose it (either using `T` or `zip()`) and assign to the relevant rows/columns.2
```py
df.loc[:2, ['C','D']] = np.tile([100, 200.2, 300], (len(['C','D']), 1)).T
# if you don't fancy numpy, use zip() on a list
# df.loc[:2, ['C','D']] = list(zip(*[[100, 200.2, 300]]*len(['C','D'])))
```
2. **You want to assign the same values to multiple rows at once.** If you try to make the following transformation
[](https://i.stack.imgur.com/p56EI.png)
using a code similar to the following:
```py
df = pd.DataFrame({'A': [1, 5, 9], 'B': [2, 6, 10], 'C': [3, 7, 11], 'D': [4, 8, 12]})
df.loc[[0, 1], ['A', 'B', 'C']] = [100, 200.2]
```
**Solution:** To make it work as expected, we must convert the list/array into a Series with the correct index:
```py
df.loc[[0, 1], ['A', 'B', 'C']] = pd.Series([100, 200.2], index=[0, 1])
```
A common sub-case is if the row indices come from using a boolean mask. N.B. This is the case in the OP. In that case, just use the mask to filter `df.index`:
```py
msk = df.index < 2
df.loc[msk, ['A', 'B', 'C']] = [100, 200.2] # <--- error
df.loc[msk, ['A', 'B', 'C']] = pd.Series([100, 200.2], index=df.index[msk]) # <--- OK
```
3. **You want to store the same list in some rows of a column.** An illustration of this case is:
[](https://i.stack.imgur.com/Jk8zJ.png)
**Solution:** Explicitly construct a Series with the correct indices.
```py
# for the case on the left in the image above
df['D'] = pd.Series([[100, 200.2]]*len(df), index=df.index)
# latter case
df.loc[[1], 'D'] = pd.Series([[100, 200.2]], index=df.index[[1]])
```
---
1: Here, we tried to assign a list containing a float to int dtype columns, which contributed to this error being raised. If we tried to assign a list of ints (so that the dtypes match), we'd get a different error: `ValueError: shape mismatch: value array of shape (2,) could not be broadcast to indexing result of shape (2,3)` which can also be solved by the same method as above.
2: An error related to this one `ValueError: Must have equal len keys and value when setting with an ndarray` occurs if the object being assigned is a numpy array and there's a shape mismatch. That one is often solved either using `np.tile` or simply transposing the array.
| 8,103
|
7,007,400
|
I have a small Python application which uses pyttsx for some text-to-speech.
How it works:
it simply says whatever is on the clipboard.
The program works as expected inside Eclipse, but when run from cmd.exe it only works partly if the text on the clipboard is too large (a few paragraphs). Why?
When run from cmd, it prints statements, but the actual 'talking' doesn't work (if the clipboard text is too large).
Here is the part of the program which actually does the talking. As can be seen, the 'talking' is handled inside a thread.
```
def saythread(queue, text, pauselocation, startingPoint):
    saythread.pauselocation = pauselocation
    saythread.pause = 0
    saythread.engine = pyttsx.init()
    saythread.pausequeue1 = False

    def onWord(name, location, length):
        saythread.pausequeue1 = queue.get(False)
        saythread.pause = location
        saythread.pauselocation.append(location)
        if saythread.pausequeue1 == True:
            saythread.engine.stop()

    def onFinishUtterance(name, completed):
        if completed == True:
            os._exit(0)

    def engineRun():
        if len(saythread.pauselocation) == 1:
            rate = saythread.engine.getProperty('rate')
            print rate
            saythread.engine.setProperty('rate', rate - 30)
        textMod = text[startingPoint:]
        saythread.engine.say(text[startingPoint:])
        token = saythread.engine.connect("started-word", onWord)
        saythread.engine.connect("finished-utterance", onFinishUtterance)
        saythread.engine.startLoop(True)

    engineRun()
    if saythread.pausequeue1 == False:
        os._exit(1)


def runNewThread(wordsToSay, startingPoint):
    global queue, pauselocation
    e1 = (queue, wordsToSay, pauselocation, startingPoint)
    t1 = threading.Thread(target=saythread, args=e1)
    t1.start()

#wordsToSay = CLIPBOARD CONTENTS
runNewThread(wordsToSay, 0)
```
Thanks
Edit: I have checked that the Python version used is the same, 2.7. The command used to run the program in cmd: `python d:\python\play\speech\speechplay.py`
|
2011/08/10
|
[
"https://Stackoverflow.com/questions/7007400",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/161179/"
] |
Check that the problem is not in the code that reads the text from the clipboard.
You should check if your eclipse setup specifies custom environment variables for the project which do not exist outside Eclipse. Especially:
* PYTHONPATH (and also additional projects on which your program could depend in your setup)
* PATH
Use
```
import os
print os.environ['PATH']
print os.environ['PYTHONPATH']
```
at the beginning of your program to compare both settings.
Misc stylistic advices:
* don't use `os._exit`, prefer `sys.exit` (you should only use `os._exit` in a child process after a call to `os.fork`, which is not available on Windows)
* I think a `threading.Event` would be more appropriate than a `queue.Queue`
* I'd use a subclass approach for the thread with methods rather than a function with inner functions
For example:
```
import threading
import sys
import pyttsx

class SayThread(threading.Thread):

    def __init__(self, queue, text, pauselocation, startingPoint, debug=False):
        threading.Thread.__init__(self)
        self.queue = queue
        self.text = text
        self.pauselocation = pauselocation
        self.startingPoint = startingPoint
        self.pause = 0
        self.engine = pyttsx.init(debug=debug)
        self.pausequeue1 = False

    def run(self):
        if len(self.pauselocation) == 1:
            rate = self.engine.getProperty('rate')
            print rate
            self.engine.setProperty('rate', rate - 30)
        textMod = self.text[self.startingPoint:]
        self.engine.say(self.text[self.startingPoint:])
        self.engine.connect("started-word", self.onWord)
        self.engine.connect("finished-utterance", self.onFinishUtterance)
        self.engine.startLoop(True)
        if self.pausequeue1 == False:
            sys.exit(1)

    def onWord(self, name, location, length):
        self.pausequeue1 = self.queue.get(False)
        self.pause = location
        self.pauselocation.append(location)
        if self.pausequeue1 == True:
            self.engine.stop()

    def onFinishUtterance(self, name, completed):
        if completed == True:
            sys.exit(0)


def runNewThread(wordsToSay, startingPoint):
    global queue, pauselocation
    t1 = SayThread(queue, wordsToSay,
                   pauselocation, startingPoint)
    t1.start()

#wordsToSay = CLIPBOARD CONTENTS
runNewThread(wordsToSay, 0)
```
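As a small illustration of the `threading.Event` suggestion above (the names are illustrative, not from the original program):
```
import threading

pause_event = threading.Event()

def on_word():
    # the speech callback polls the event instead of a queue
    if pause_event.is_set():
        print "pause requested"

pause_event.set()  # another thread (e.g. the GUI) requests a pause
on_word()
```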
|
In fact, eclipse itself uses a commandline command to start it's apps.
You should check what command eclipse is giving to start the program. It might be a bit verbose, but you can start from there and test what is necessary and what isn't.
You can find out the commandline eclipse uses by running the program and then selecting the output in the debug window. Right-click it, select properties and you're done.
If you don't have a debug window you can open it window/show view/(other possibly)/debug.
| 8,104
|
32,678,690
|
How do I install pip for Python 3.4 when my Pi has both Python 3.2 and Python 3.4?
When I used `sudo install python3-pip`,
it installed pip only for Python 3.2,
but I want to install pip for Python 3.4.
|
2015/09/20
|
[
"https://Stackoverflow.com/questions/32678690",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5089211/"
] |
Python 3.4 has `pip` included, see [*What's New in Python 3.4*](https://docs.python.org/3/whatsnew/3.4.html#whatsnew-pep-453).
Just execute:
```
python3.4 -m ensurepip
```
to install it if it is missing for you. See the [`ensurepip` module documentation](https://docs.python.org/3/library/ensurepip.html) for further details.
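After that, you can invoke pip for that specific interpreter via `-m pip`; the package name here is just an example:
```
python3.4 -m pip install requests
```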
|
You can go to your python 3.4 directory scripts and run it's pip in:
`../python3.4/scripts`
| 8,107
|
7,921,973
|
I'm writing an installer using py2exe which needs to run as admin to have permission to perform various file operations. I've modified some sample code from the user\_access\_controls directory that comes with py2exe to create the setup file. Creating/running the generated exe works fine when I run it on my own computer. However, when I try to run the exe on a computer that doesn't have Python installed, I get an error saying that the imported modules (shutil and os in this case) do not exist. It was my impression that py2exe automatically wraps all the file dependencies into the exe, but I guess that this is not the case. py2exe does generate a zip file called library that contains all the Python modules, but apparently they are not used by the generated exe. Basically my question is: how do I get the imports to be included in the exe generated by py2exe? Perhaps modifications need to be made to my setup.py file; the code for this is as follows:
```
from distutils.core import setup
import py2exe
# The targets to build
# create a target that says nothing about UAC - On Python 2.6+, this
# should be identical to "asInvoker" below. However, for 2.5 and
# earlier it will force the app into compatibility mode (as no
# manifest will exist at all in the target.)
t1 = dict(script="findpath.py",
dest_base="findpath",
uac_info="requireAdministrator")
console = [t1]
# hack to make windows copies of them all too, but
# with '_w' on the tail of the executable.
windows = [{'script': "findpath.py",
'uac_info': "requireAdministrator",
},]
setup(
version = "0.5.0",
description = "py2exe user-access-control",
name = "py2exe samples",
# targets to build
windows = windows,
console = console,
)
```
|
2011/10/27
|
[
"https://Stackoverflow.com/questions/7921973",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/971550/"
] |
Try to set `options={'py2exe': {'bundle_files': 1}},` and `zipfile = None` in the setup section. py2exe will then make a single .exe file without external dependencies. Example:
```
from distutils.core import setup
import py2exe
setup(
    console=['watt.py'],
    options={'py2exe': {'bundle_files': 1}},
    zipfile = None
)
```
|
I rewrote your setup script for you. This will work:
```
from distutils.core import setup
import py2exe
# The targets to build
# create a target that says nothing about UAC - On Python 2.6+, this
# should be identical to "asInvoker" below. However, for 2.5 and
# earlier it will force the app into compatibility mode (as no
# manifest will exist at all in the target.)
t1 = dict(script="findpath.py",
          dest_base="findpath",
          uac_info="requireAdministrator")

console = [t1]

# hack to make windows copies of them all too, but
# with '_w' on the tail of the executable.
windows = [{'script': "findpath.py",
            'uac_info': "requireAdministrator",
            },]

setup(
    version = "0.5.0",
    description = "py2exe user-access-control",
    name = "py2exe samples",
    # targets to build
    windows = windows,
    console = console,
    # the options key is what you failed to include; it instructs py2exe
    # to include these modules explicitly
    options={"py2exe":
             {"includes": ["sip", "os", "shutil"]}
             }
)
```
| 8,109
|
71,853,039
|
In short, how do I get this:
[](https://i.stack.imgur.com/JBBws.jpg)
From this:
```py
def fiblike(ls, n):
    store = []
    for i in range(n):
        a = ls.pop(0)
        ls.append(sum(ls) + a)
        store.append(a)
    return store
```
With all the indentation guide and code highlighting.
I have written hundreds of Python scripts and I need to convert all of them to images...
I have seen this:
```py
import Image
import ImageDraw
import ImageFont

def getSize(txt, font):
    testImg = Image.new('RGB', (1, 1))
    testDraw = ImageDraw.Draw(testImg)
    return testDraw.textsize(txt, font)

if __name__ == '__main__':
    fontname = "Arial.ttf"
    fontsize = 11
    text = "example@gmail.com"
    colorText = "black"
    colorOutline = "red"
    colorBackground = "white"

    font = ImageFont.truetype(fontname, fontsize)
    width, height = getSize(text, font)

    img = Image.new('RGB', (width+4, height+4), colorBackground)
    d = ImageDraw.Draw(img)
    d.text((2, height/2), text, fill=colorText, font=font)
    d.rectangle((0, 0, width+3, height+3), outline=colorOutline)

    img.save("D:/image.png")
```
from [here](https://www.codegrepper.com/code-examples/python/how+to+convert+text+file+to+image+in+python)
But it does not do code highlighting and I want either a `numpy` or `cv2` based solution.
How can I do it?
|
2022/04/13
|
[
"https://Stackoverflow.com/questions/71853039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
* Get the pair token balances of the contract:

  > `web3.eth.contract(address=token_address, abi=abi).functions.balanceOf(contract_address).call()`

* Then get the current price of each token in USDT by calling the `slot0` function on the tokenA/USDT and tokenB/USDT pools:

  > `slot0 = contract.functions.slot0().call()`
  >
  > `sqrtPriceCurrent = slot0[0] / (1 << 96)`
  >
  > `priceCurrent = sqrtPriceCurrent ** 2`
  >
  > `decimal_diff = USDT_decimal - TOKEN_A_decimal`
  >
  > `token_price = 10**(-decimal_diff) / priceCurrent if token0_address == USDT_address else priceCurrent / (10**decimal_diff)`

* Finally, TVL = sum(token_balance * token_price)

**Remember:** check the price from a big pool.
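Putting the three steps together, a rough sketch with web3.py (the endpoint, addresses, ABIs and decimals are all placeholders, not values from this answer):
```py
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-rpc-endpoint"))  # placeholder endpoint

def token_balance(token_address, token_abi, pool_address):
    # step 1: token balance held by the pool contract
    token = w3.eth.contract(address=token_address, abi=token_abi)
    return token.functions.balanceOf(pool_address).call()

def price_from_slot0(pool_contract, decimal_diff):
    # step 2: spot price derived from the pool's slot0
    sqrt_price = pool_contract.functions.slot0().call()[0] / (1 << 96)
    return sqrt_price ** 2 / 10 ** decimal_diff

# step 3: TVL = sum(balance * price) over both tokens of the pair
```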
|
No offense but you are following a hard way, which needs to use `TickBitmap` to get the next initialized tick (Remember not all ticks are initialized unless necessary.)
Alternatively the easy way to get a pool's TVL is to query Uniswap V3's [subgraph](https://thegraph.com/hosted-service/subgraph/ianlapham/uniswap-v3-subgraph?selected=playground): like
```
{
pool(id: "0x4e68ccd3e89f51c3074ca5072bbac773960dfa36") {
id
token0 {symbol}
totalValueLockedToken0
token1 {symbol}
totalValueLockedToken1
}
}
```
(for some reason it doesn't show result if you put checksum address)
or
```
{
pools(first: 5) {
id
token0 {symbol}
totalValueLockedToken0
token1 {symbol}
totalValueLockedToken1
}
}
```
| 8,110
|
60,325,327
|
I wrote an app in python3.7.5 that connects to RabbitMQ:
========================================================
### Using Ubuntu as the docker-machine
I am running rabbitmq with docker:
`docker run --name rabbitmq -p 5671:5671 -p 5672:5672 -p 15672:15672 --hostname rabbitmq rabbitmq:3.6.6-management`
TEST:
-----
* My Python app connects to it via 127.0.0.1:5672
* Expected: connects and works
* Actual: connects and works
### I put the app inside Docker, then build and run
```
--build-arg ENVIRONMENT_NAME=develop
-t pdf-svc-image:latest .
&& docker run
-P
--env ENVIRONMENT_NAME=local
--name html-to-pdf
-v /home/mickey/dev/core/components/pdf-svc/:/html-to-pdf
--privileged
--network host
pdf-svc-image:latest bash
```
(This command line was created by PyCharm)
### When running this code (inside the Docker container), I get an exception
```
return await aio_pika.connect_robust(
    "amqp://guest:guest@{host}".format(host=consts.MESSAGE_QUEUE_HOST)
)
```
* [Errno 111] Connect call failed ('127.0.0.1', 5672)
* [Errno 99] Cannot assign requested address
Help?
|
2020/02/20
|
[
"https://Stackoverflow.com/questions/60325327",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1125913/"
] |
According to <https://docs.docker.com/network/host/>,
>
> Note: Given that the container does not have its own IP-address when using host mode networking, port-mapping does not take effect, and the -p, --publish, -P, and --publish-all option are ignored, producing a warning instead:
>
>
>
I am not sure this is your case. You could log in to the container and run `ping`/`nslookup` to check the network connection.
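For example (the container name comes from the run command in the question):
```
docker exec -it html-to-pdf bash
# then, inside the container:
ping 127.0.0.1
nslookup rabbitmq
```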
|
RabbitMQ container
```
docker run --name rabbitmq \
-p 5671:5671 -p 5672:5672 -p 15672:15672 \
--hostname rabbitmq \
--network host \ # <-- Add this line, now both container see each other
rabbitmq:3.6.6-management
```
App container
```
docker run \
-P \
--env ENVIRONMENT_NAME=local \
--name html-to-pdf \
-v /home/mickey/dev/core/components/pdf-svc/:/html-to-pdf \
--privileged \
--network host \
pdf-svc-image:latest bash
```
Then, in your code, you need to set the host variable to `rabbitmq`, not `127.0.0.1`.
| 8,111
|
66,169,625
|
I have two CSV files:
**File 1**
```
Id, 1st, 2nd
1, first, row
2, second, row
```
**File 2**
```
Id, 1st, 2nd
1, first, row
2, second, line
3, third, row
```
I am just starting in python and need to write some code, which can do the diff on these files based on primary columns and in this case first column "Id". Output file should be a delta file which should identify the rows that have changed in the second file:
**Output delta file**
```
2, second, line
3, third, row
```
|
2021/02/12
|
[
"https://Stackoverflow.com/questions/66169625",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15196604/"
] |
I suggest you load both CSV files as pandas DataFrames, and then use an outer `merge` with an indicator to know which rows changed in the second file. Then, you use `query` to get only the rows that changed in the second file, and you drop the indicator column ('\_merge').
```py
import pandas as pd
df1 = pd.read_csv("FILENAME_1.csv")
df2 = pd.read_csv("FILENAME_2.csv")
merged = pd.merge(df1, df2, how="outer", indicator=True)
diff = merged.query("_merge == 'right_only'").drop("_merge", axis="columns")
```
For further details on finding differences in Pandas DataFrames, read [this](https://stackoverflow.com/questions/48647534/python-pandas-find-difference-between-two-data-frames) other question.
|
I'd also use pandas, as Enrico suggested, for anything more complex than your example. But if you want to do it in pure Python, you can convert your rows into sets and compute a set difference:
```py
import csv
from io import StringIO
data1 = """Id, 1st, 2nd
1, first, row
2, second, row"""
data2 = """Id, 1st, 2nd
1, first, row
2, second, line
3, third, row"""
s1 = {tuple(row) for row in csv.reader(StringIO(data1))}
s2 = {tuple(row) for row in csv.reader(StringIO(data2))}

print(s2 - s1)
# {('2', ' second', ' line'), ('3', ' third', ' row')}
```
Note that in your example you are not actually diffing based on your primary column only, but on the entire row. If you really want to only consider the `Id` column, you can do:
```py
d1 = {row[0]:row[1:] for row in csv.reader(StringIO(data1))}
d2 = {row[0]:row[1:] for row in csv.reader(StringIO(data2))}
diff = { k : d2[k] for k in set(d2) - set(d1)}
print(diff)
{'3': [' third', ' row']}
```
| 8,112
|
60,532,107
|
Trying to find out the correct number of parallel processes to run with [python multiprocessing](https://docs.python.org/3.6/library/multiprocessing.html).
Scripts below are run on an 8-core, 32 GB (Ubuntu 18.04) machine. (There were only system processes and basic user processes running while the below was tested.)
Tested `multiprocessing.Pool` and `apply_async` with the following:
```
from multiprocessing import current_process, Pool, cpu_count
from datetime import datetime
import time

num_processes = 1 # vary this

print(f"Starting at {datetime.now()}")
start = time.perf_counter()
print(f"# CPUs = {cpu_count()}") # 8
num_procs = 5 * cpu_count() # 40

def cpu_heavy_fn():
    s = time.perf_counter()
    print(f"{datetime.now()}: {current_process().name}")
    x = 1
    for i in range(1, int(1e7)):
        x = x * i
        x = x / i
    t_taken = round(time.perf_counter() - s, 2)
    return t_taken, current_process().name

pool = Pool(processes=num_processes)
multiple_results = [pool.apply_async(cpu_heavy_fn, ()) for i in range(num_procs)]
results = [res.get() for res in multiple_results]
for r in results:
    print(r[0], r[1])

print(f"Done at {datetime.now()}")
print(f"Time taken = {time.perf_counter() - start}s")
```
Here are the results:
```
num_processes total_time_taken
1 28.25
2 14.28
3 10.2
4 7.35
5 7.89
6 8.03
7 8.41
8 8.72
9 8.75
16 8.7
40 9.53
```
The following make sense to me:
* Running one process at a time takes about 0.7 seconds for each process, so running 40 should take about 28s, which agrees with what we observe above.
* Running 2 processes at a time should halve the time and this is observed above (~14s).
* Running 4 processes at a time should further halve the time and this is observed above (~7s).
* Increasing parallelism to more than the number of cores (8) should degrade performance (due to CPU contention) and this is observed (sort of).
What doesn't make sense is:
* Why does running 8 in parallel not twice as fast as running 4 in parallel i.e. why is it not ~3.5s?
* Why is running 5 to 8 in parallel at a time worse than running 4 at a time? There are 8 cores, but still why is the overall run time worse? (When running 8 in parallel, `htop` showed all CPUs at near 100% utilization. When running 4 in parallel, only 4 of them were at 100% which makes sense.)
|
2020/03/04
|
[
"https://Stackoverflow.com/questions/60532107",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1333610/"
] |
>
> **Q** : *"**Why** is running 5 to 8 in parallel at a time **worse than running 4** at a time?"*
>
>
>
Well, there are several reasons, and we will start with a static, most easily observable one:
The **silicon design** (for which a few hardware tricks were used) **does not scale** beyond 4.
So the last count of *processors* for which [Amdahl's Law](https://stackoverflow.com/revisions/18374629/3) explains and promises a speedup from adding `+1` is 4; any further +1 will not scale performance the way observed in the { 2, 3, 4 } cases:
This `lstopo` CPU-topology map helps to start to decode **WHY** ( here for 4-cores, but the logic is the same as for your 8-core silicon - run `lstopo` on your device to see more details in vivo ) :
```
┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Machine (31876MB) │
│ │
│ ┌────────────────────────────────────────────────────────────┐ ┌───────────────────────────┐ │
│ │ Package P#0 │ ├┤╶─┬─────┼┤╶───────┤ PCI 10ae:1F44 │ │
│ │ │ │ │ │ │
│ │ ┌────────────────────────────────────────────────────────┐ │ │ │ ┌────────────┐ ┌───────┐ │ │
│ │ │ L3 (8192KB) │ │ │ │ │ renderD128 │ │ card0 │ │ │
│ │ └────────────────────────────────────────────────────────┘ │ │ │ └────────────┘ └───────┘ │ │
│ │ │ │ │ │ │
│ │ ┌──────────────────────────┐ ┌──────────────────────────┐ │ │ │ ┌────────────┐ │ │
│ │ │ L2 (2048KB) │ │ L2 (2048KB) │ │ │ │ │ controlD64 │ │ │
│ │ └──────────────────────────┘ └──────────────────────────┘ │ │ │ └────────────┘ │ │
│ │ │ │ └───────────────────────────┘ │
│ │ ┌──────────────────────────┐ ┌──────────────────────────┐ │ │ │
│ │ │ L1i (64KB) │ │ L1i (64KB) │ │ │ ┌───────────────┐ │
│ │ └──────────────────────────┘ └──────────────────────────┘ │ ├─────┼┤╶───────┤ PCI 10bc:8268 │ │
│ │ │ │ │ │ │
│ │ ┌────────────┐┌────────────┐ ┌────────────┐┌────────────┐ │ │ │ ┌────────┐ │ │
│ │ │ L1d (16KB) ││ L1d (16KB) │ │ L1d (16KB) ││ L1d (16KB) │ │ │ │ │ enp2s0 │ │ │
│ │ └────────────┘└────────────┘ └────────────┘└────────────┘ │ │ │ └────────┘ │ │
│ │ │ │ └───────────────┘ │
│ │ ┌────────────┐┌────────────┐ ┌────────────┐┌────────────┐ │ │ │
│ │ │ Core P#0 ││ Core P#1 │ │ Core P#2 ││ Core P#3 │ │ │ ┌──────────────────┐ │
│ │ │ ││ │ │ ││ │ │ ├─────┤ PCI 1002:4790 │ │
│ │ │ ┌────────┐ ││ ┌────────┐ │ │ ┌────────┐ ││ ┌────────┐ │ │ │ │ │ │
│ │ │ │ PU P#0 │ ││ │ PU P#1 │ │ │ │ PU P#2 │ ││ │ PU P#3 │ │ │ │ │ ┌─────┐ ┌─────┐ │ │
│ │ │ └────────┘ ││ └────────┘ │ │ └────────┘ ││ └────────┘ │ │ │ │ │ sr0 │ │ sda │ │ │
│ │ └────────────┘└────────────┘ └────────────┘└────────────┘ │ │ │ └─────┘ └─────┘ │ │
│ └────────────────────────────────────────────────────────────┘ │ └──────────────────┘ │
│ │ │
│ │ ┌───────────────┐ │
│ └─────┤ PCI 1002:479c │ │
│ └───────────────┘ │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
A closer look, like the one from a call to the `hwloc` tool **`lstopo-no-graphics -.ascii`**, shows **where mutual processing independence ends**: here at the level of the *shared `L1`-instruction-cache* (the `L3` cache is shared too, yet it sits at the top of the hierarchy and at such a size that it matters only for large-problem solvers, not our case).
---
Next comes a less observable reason *WHY it gets even worse* with 8 processes:
----------------------------------------------------------------------
>
> **Q** : *"Why does running 8 in parallel not twice as fast as running 4 in parallel i.e. why is it not **`~3.5s`**?"*
>
>
>
Because of **thermal management**.
[](https://i.stack.imgur.com/xCqqv.jpg)
The more work is loaded onto the CPU cores, the more heat is produced by driving electrons at **`~3.5+ GHz`** through the silicon maze. Thermal constraints are what prevent any further performance boost in CPU computing power: the laws of physics, as we know them, simply do not permit growth beyond some material-defined limits.
**So what comes next?**
The CPU design has circumvented not the physics (that is impossible) but us, the users, by promising a CPU chip running at **`~3.5+ GHz`**. In fact, the CPU can use this clock rate only for short amounts of time, until the dissipated heat brings the silicon close to its thermal limits. Then the CPU will either **reduce its own clock rate** as an overheating defence (which reduces performance, doesn't it?), or **some CPU micro-architectures may hop** (move a flow of processing) onto another, free, and thus cooler CPU core (which keeps the promise of a higher clock rate *there*, at least for some small amount of time, yet also reduces performance, as the hop does not occur in zero time and does not happen at zero cost: cache losses, re-fetches, etc.).
This picture shows a snapshot of the case of core-hopping - cores `0-19` got too hot and are under the Thermal Throttling cap, while cores **`20-39`** can ( at least for now ) run at full speed:
[](https://i.stack.imgur.com/nqJt4.png)
---
The Result?
-----------
Both effects come down to thermal constraints (dipping the CPU into a pool of liquid nitrogen was demonstrated for a "popular" magazine show, yet it is not a reasonable option for any sustainable computing, as the mechanical stress of going from a deep-frozen state into a **`6+ GHz`** clock-rate, steam-forming super-heater cracks the body of the CPU and results in CPU death from cracks and mechanical fatigue within but a few workload episodes: a no-go zone, due to **negative ROI** for any serious project).
Good cooling and right-sizing of the pool of workers, based on in-vivo pre-testing, is the only sure bet here.
Another architecture:
[](https://i.stack.imgur.com/7DtTQ.png)
|
Most likely cause is that you are running the program on a CPU that uses [simultaneous multithreading (SMT)](https://en.wikipedia.org/wiki/Simultaneous_multithreading), better known as [hyper-threading](https://en.wikipedia.org/wiki/Hyper-threading) on Intel units. To cite after wiki, *for each processor core that is physically present, the operating system addresses two virtual (logical) cores and shares the workload between them when possible.* That's what's happening here.
Your OS says 8 cores, but in truth it's 4 cores with SMT. The task is clearly CPU-bound, so any increase beyond the **physical** number of cores does not bring any benefit, only the overhead cost of multiprocessing. That's why you see an almost linear increase in performance until you reach the (physical!) maximum number of cores (4), and then a decrease when the cores need to be shared for this very CPU-intensive task.
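You can verify the physical core count from Python, assuming the `psutil` package is installed (the printed values reflect the machine in the question):
```
import psutil

print(psutil.cpu_count(logical=True))   # 8 logical CPUs (with SMT)
print(psutil.cpu_count(logical=False))  # 4 physical cores
```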
| 8,113
|
73,171,968
|
I'm trying to make a form where JavaScript handles the validation. After JavaScript confirms that the user followed the rules correctly, the JavaScript file collects the data typed by the user and sends it to Python (with the help of Ajax). I then want Python to recognize the data and finally redirect to a new page.
Now I am having an issue: after Python recognizes the shared data, `return redirect()` does not work. Strangely, even when I tried `return redirect(url_for())`, I got the same result. **The browser does not load the new page.**
***Inside templates folder >>***
**educateForm.html**
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.5.0/css/all.css" integrity="sha384-B4dIYHKNBt8Bc12p+WXckhzcICo0wtJAoU8YZTY5qE0Id1GSseTk6S+L3BlXeVIU" crossorigin="anonymous">
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC" crossorigin="anonymous">
<!--Icon link-->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap-icons@1.5.0/font/bootstrap-icons.css">
<title>Educate Form</title>
</head>
<body>
<div class="container-fluid padding" id="fullForm1">
<div class="row padding">
<div class="col-md-12 col-lg-12">
<div class="centerForm" style="text-align:left;">
<h2>Contact information: </h2>
<!--Form-->
<form name='registration' action="/educateForm" method="post"><!--method="post" onsubmit="return formValidation()"-->
<!--onsubmit="return false"-->
<div class="row">
<div class="col-md-5">
<label for="FName" class="form-label">First name</label>
<input type="text" name="FName" placeholder="Mark" id="FName" class="form-control" required="required" autocomplete="off"/>
</div>
<div class="col-md-5">
<label for="LName" class="form-label">Last name</label>
<input type="text" name="LName" placeholder="Smith" id="LName" class="form-control" required="required" autocomplete="off"/>
</div>
</div>
<div class="row">
<div class="col-md-5">
<label for="email" class="form-label">Email address</label>
<input type="email" name="email" id="email" placeholder="mark@smith.com" class="form-control" aria-describedby="emailHelp" required="required" autocomplete="off">
</div>
</div>
<!--DISPLAY NONE - STYLE-->
<div class="row" style="display: none;">
<div class="col-md-10">
<label for="skillsLabel" class="form-label">What are your skills?</label>
<textarea class="form-control" name="skillsText" id="skillsText" style="height: 100px" required="required" autocomplete="off"></textarea>
</div>
</div>
<div id="buttonContainer">
<input id="orderButton" type="submit" name="submit" class="btn btn-primary" value="NEXT" onClick="return formValidation();" />
<input id="resetButton" type="reset" name="reset" class="btn btn-outline-secondary" value="Clear Form" onClick="return confirmreset()" />
</div>
</form>
</div>
</div>
</div>
</div>
<!--Pagination-->
<ul class="pagination justify-content-center" id="pagination">
<li class="page-item disabled">
<a class="page-link">Previous</a>
</li>
<li class="page-item active"><a class="page-link" href="/educateForm">1</a></li>
<li class="page-item"><a class="page-link" href="/educateForm2">2</a></li>
<li class="page-item">
<a class="page-link" href="/educateForm2">Next</a>
</li>
</ul>
<!--<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.5/jquery.min.js"></script>-->
<!--<script src="/educate"></script>-->
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/js/bootstrap.bundle.min.js" integrity="sha384-MrcW6ZMFYlzcLA8Nl+NtUVF0sA7MsXsP1UyJoMp4YLEuNSfAP+JcXn/tWtIaxVXM" crossorigin="anonymous"></script>
<!--Doesn't recognizes AJAX ⏬-->
<!--<script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>-->
<script src="https://code.jquery.com/jquery-3.6.0.min.js" integrity="sha256-/xUj+3OJU5yExlq6GSYGSHk7tPXikynS7ogEvDej/m4=" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/popper.js@1.12.9/dist/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous"></script>
<script src="/educate"></script>
</body>
</html>
```
**educateForm2.html**
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.5.0/css/all.css" integrity="sha384-B4dIYHKNBt8Bc12p+WXckhzcICo0wtJAoU8YZTY5qE0Id1GSseTk6S+L3BlXeVIU" crossorigin="anonymous">
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC" crossorigin="anonymous">
<!--Icon link-->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap-icons@1.5.0/font/bootstrap-icons.css">
<title>Educate 2</title>
</head>
<body>
<div class="container-fluid padding">
<div class="row welcome text-center">
<div id="properties" class="col-12">
<h1 class="display-4">Page 2</h1>
</div>
</div>
</div>
<!--Pagination-->
<ul class="pagination justify-content-center" id="pagination">
<li class="page-item">
<a class="page-link" href="educateForm">Previous</a>
</li>
<li class="page-item"><a class="page-link" href="educateForm">1</a></li>
<li class="page-item active"><a class="page-link" href="educateForm2">2</a></li>
<li class="page-item disabled">
<a class="page-link">Next</a>
</li>
</ul>
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.0.2/dist/js/bootstrap.bundle.min.js" integrity="sha384-MrcW6ZMFYlzcLA8Nl+NtUVF0sA7MsXsP1UyJoMp4YLEuNSfAP+JcXn/tWtIaxVXM" crossorigin="anonymous"></script>
<!--Doesn't recognizes AJAX ⏬-->
<!--<script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script>-->
<script src="https://code.jquery.com/jquery-3.6.0.min.js" integrity="sha256-/xUj+3OJU5yExlq6GSYGSHk7tPXikynS7ogEvDej/m4=" crossorigin="anonymous"></script>
<script src="https://cdn.jsdelivr.net/npm/popper.js@1.12.9/dist/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous"></script>
</body>
</html>
```
**educate.js**
```
"use strict";
function formValidation() {
var emailRegex = /^[A-Za-z0-9._]*\@[A-Za-z]*\.[A-Za-z]{2,5}$/; // Expression for validating email
var fname = document.registration.FName.value;
var lname = document.registration.LName.value;
var email = document.registration.email.value;
if (fname == "") {
alert('Enter the first name!');
document.registration.FName.focus();
return false;
}
if (lname == "") {
document.registration.LName.focus();
alert('Enter the last name!');
return false;
}
if (email == "") {
document.registration.email.focus();
alert('Enter the email!');
return false;
}
if (!emailRegex.test(email)) {
alert('Re-enter the valid email in this format: [abc@abc.com]');
document.registration.email.focus();
return false;
}
if (fname != '' && lname != '' && email != '') // condition for check mandatory all fields
{
let confirmation = "Once you submit this form, you can't go back \nAre you sure you want to leave this page?";
if (confirm(confirmation) == true) {
const dict_values = {fname, lname, email} //Pass the javascript variables to a dictionary.
const s = JSON.stringify(dict_values); // Stringify converts a JavaScript object or value to a JSON string
console.log(s); // Prints the variables to console window, which are in the JSON format
window.alert(s);
//Passing the data to Python (into "/educateForm" page) ⏬
$.ajax({
url:"/educateForm",
type:"POST",
contentType: "application/json",
data: JSON.stringify(s)});
//Display 2nd page without sharing data with Python⏬
//var display = window.open("/educateForm2", "_self", "pagewin");
//window.location.href = "/educateForm2";
}
}
}
function setUpPage(){
formValidation();
}
window.addEventListener("load", setUpPage, false);
```
***Outside templates folder >>***
**app.py**
```
import json
import os
from flask import Flask, flash, redirect, render_template, request, session
from flask_session import Session
from tempfile import mkdtemp
from werkzeug.security import check_password_hash, generate_password_hash
from flask import jsonify # NEW
from flask import url_for
# Configure application
app = Flask(__name__)

# Ensure templates are auto-reloaded
app.config["TEMPLATES_AUTO_RELOAD"] = True

# Configure session to use filesystem (instead of signed cookies)
app.config["SESSION_PERMANENT"] = False
app.config["SESSION_TYPE"] = "filesystem"
Session(app)

@app.after_request
def after_request(response):
    """Ensure responses aren't cached"""
    response.headers["Cache-Control"] = "no-cache, no-store, must-revalidate"
    response.headers["Expires"] = 0
    response.headers["Pragma"] = "no-cache"
    return response

@app.route("/educateForm", methods=["GET", "POST"])
def educateForm():
    """Show Educate Form (part 1)"""
    if request.method == "POST":
        output = request.get_json()
        print(output)  # This is the output that was stored in the JSON within the browser
        print(type(output))
        result = json.loads(output)  # this converts the json output to a python dictionary
        print(result)  # Printing the new dictionary
        print(type(result))  # this shows the json converted as a python dictionary

        # PROBLEM: Neither of both options worked ⏬
        #return redirect(url_for('educateForm2'))
        return redirect("/educateForm2")
    else:  # GET
        # Redirect user to educateForm.html
        return render_template("educateForm.html")

@app.route("/educateForm2", methods=["GET"])
def educateForm2():
    """Show Educate Form (part 2)"""
    if request.method == "GET":
        # Redirect user to educateForm2.html
        return render_template("educateForm2.html")

@app.route("/educate")
def educate():
    """Show educate.js"""
    if request.method == "GET":
        # Redirect user to educate.js
        return render_template("educate.js")

if __name__ == "__main__":
    app.run()
```
|
2022/07/29
|
[
"https://Stackoverflow.com/questions/73171968",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19575161/"
] |
A very simple, and performant way of checking if all pixels are the same, would be to use PIL's `getextrema()` which tells you the brightest and darkest pixel in an image. So you would just test if they are the same and that would work if testing they were both zero, or any other number. It will be performant because it is implemented in C.
```
lo, hi = msk.getextrema()  # renamed to avoid shadowing the built-in min/max
if lo == hi:
    ...
```
---
If you wanted to use Numpy, in a very similar vein, you could use its `np.ptp()` which tells you the *"peak-to-peak"* difference between brightest and darkest pixel:
```
import numpy as np
# Make Numpy array from image "msk"
na = np.array(msk)
diff = np.ptp(na)
if diff == 0:
    ...
```
Or, you could test if true that all elements equal the first:
```
result = np.all(na == na[0])
```
|
1. Convert the image to a 3D numpy array:
[How to convert a jpg to a numpy array (ru.stackoverflow.com)](https://ru.stackoverflow.com/questions/1145128/%D0%9A%D0%B0%D0%BA-%D0%BF%D1%80%D0%B5%D0%BE%D0%B1%D1%80%D0%B0%D0%B7%D0%BE%D0%B2%D0%B0%D1%82%D1%8C-jpg-%D0%B2-%D0%BC%D0%B0%D1%81%D1%81%D0%B8%D0%B2-numpy)
2. Check if all elements of the array are the same:
[How to check whether all elements in a numpy array are equal (ru.stackoverflow.com)](https://ru.stackoverflow.com/questions/1096559/%D0%9A%D0%B0%D0%BA-%D0%BF%D1%80%D0%BE%D0%B2%D0%B5%D1%80%D0%B8%D1%82%D1%8C-%D0%B2%D1%81%D0%B5-%D0%BB%D0%B8-%D1%8D%D0%BB%D0%B5%D0%BC%D0%B5%D0%BD%D1%82%D1%8B-%D0%B2-%D0%BC%D0%B0%D1%81%D1%81%D0%B8%D0%B2%D0%B5-numpy-%D0%BE%D0%B4%D0%B8%D0%BD%D0%B0%D0%BA%D0%BE%D0%B2%D1%8B%D0%B5-python)
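A minimal sketch combining both steps (the filename is a placeholder):
```py
import numpy as np
from PIL import Image

arr = np.array(Image.open("mask.png"))  # 1) image -> numpy array
all_same = np.all(arr == arr.flat[0])   # 2) check all elements are equal
print(all_same)
```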
| 8,114
|
69,607,510
|
```
import csv
import mysql.connector as mysql

marathons = []

with open("marathon_results.csv") as file:
    data = csv.reader(file)
    next(data)
    for rij in data:
        year = rij[0],
        winner = rij[1],
        gender = rij[2],
        country = rij[3],
        time = rij[4],
        marathon = rij[5],
        marathons.append((year, winner, gender, country, time, marathon))

conn = mysql.connect(
    host="localhost",
    user="root",
    password=""
)

c = conn.cursor()

create_database_query = 'CREATE DATABASE IF NOT EXISTS marathon_file'
c.execute(create_database_query)
c.execute('USE marathon_file')

c.execute("""CREATE TABLE IF NOT EXISTS winners(
    year INT(100),
    winner VARCHAR(255),
    gender VARCHAR(255),
    country VARCHAR(255),
    time TIME,
    marathon VARCHAR(255)
    )
""")

print('CSV-bestand in de MySQL-database aan het laden...')

insert_query = "INSERT INTO winners(year, winner, gender, country, time, marathon) VALUES (%s, %s, %s, %s, %s, &s);"

c.executemany(insert_query, marathons)
c.commit()

print('Bestand succesvol geladen!')
```
So I have this code above, trying to load a certain .csv file from my venv into MySQL. I made a list from the data, skipped the first line (since those were headers), and tried to import it into MySQL. But I keep getting the following error:
```
CSV-bestand in de MySQL-database aan het laden...
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mysql/connector/conversion.py in to_mysql(self, value)
179 try:
--> 180 return getattr(self, "_{0}_to_mysql".format(type_name))(value)
181 except AttributeError:
AttributeError: 'MySQLConverter' object has no attribute '_tuple_to_mysql'
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mysql/connector/cursor.py in _process_params(self, params)
430
--> 431 res = [to_mysql(i) for i in res]
432 res = [escape(i) for i in res]
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mysql/connector/cursor.py in <listcomp>(.0)
430
--> 431 res = [to_mysql(i) for i in res]
432 res = [escape(i) for i in res]
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mysql/connector/conversion.py in to_mysql(self, value)
181 except AttributeError:
--> 182 raise TypeError("Python '{0}' cannot be converted to a "
183 "MySQL type".format(type_name))
TypeError: Python 'tuple' cannot be converted to a MySQL type
During handling of the above exception, another exception occurred:
ProgrammingError Traceback (most recent call last)
/var/folders/yc/mz4bq04s7wngrglphldwpwfc0000gn/T/ipykernel_17482/929148642.py in <module>
38 insert_query = "INSERT INTO winners(year, winner, gender, country, time, marathon) VALUES (%s, %s, %s, %s, %s, &s);"
39
---> 40 c.executemany(insert_query, marathons)
41 c.commit()
42
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mysql/connector/cursor.py in executemany(self, operation, seq_params)
665 self._rowcount = 0
666 return None
--> 667 stmt = self._batch_insert(operation, seq_params)
668 if stmt is not None:
669 self._executed = stmt
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mysql/connector/cursor.py in _batch_insert(self, operation, seq_params)
607 tmp, self._process_params_dict(params))
608 else:
--> 609 psub = _ParamSubstitutor(self._process_params(params))
610 tmp = RE_PY_PARAM.sub(psub, tmp)
611 if psub.remaining != 0:
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/mysql/connector/cursor.py in _process_params(self, params)
433 res = [quote(i) for i in res]
434 except Exception as err:
--> 435 raise errors.ProgrammingError(
436 "Failed processing format-parameters; %s" % err)
437 else:
ProgrammingError: Failed processing format-parameters; Python 'tuple' cannot be converted to a MySQL type
```
I probably missed some parentheses or brackets, or am I missing something else? Thanks.
|
2021/10/17
|
[
"https://Stackoverflow.com/questions/69607510",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17025019/"
] |
The problem is in these lines:
```py
year = rij[0],
winner = rij[1],
gender = rij[2],
country = rij[3],
time = rij[4],
marathon = rij[5],
```
The trailing commas cause `year`, `winner`, `gender` and so on to be created as 1-tuples. It's the same as writing
```py
year = (rij[0],)
winner = (rij[1],)
# and so on...
```
Delete the trailing commas and try again.
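With the commas removed, each variable is a plain string again; a sketch of the corrected lines:
```py
year = rij[0]
winner = rij[1]
gender = rij[2]
# ...and so on for the remaining columns
```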
|
Your SQL command had a `&` instead of a `%`.
I additionally simplified the data loop:
```
import csv
import mysql.connector as mysql

marathons = []

with open("test2.csv") as file:
    data = csv.reader(file)
    next(data)
    marathons = [tuple(row) for row in data]

conn = mysql.connect(
    host="localhost",
    user="root",
    password=""
)

c = conn.cursor()

create_database_query = 'CREATE DATABASE IF NOT EXISTS marathon_file'
c.execute(create_database_query)
c.execute('USE marathon_file')

c.execute("""CREATE TABLE IF NOT EXISTS winners(
    year INT(100),
    winner VARCHAR(255),
    gender VARCHAR(255),
    country VARCHAR(255),
    time TIME,
    marathon VARCHAR(255)
    )
""")

print('CSV-bestand in de MySQL-database aan het laden...')

insert_query = "INSERT INTO winners(year, winner, gender, country, time, marathon) VALUES (%s, %s, %s, %s, %s, %s);"

c.executemany(insert_query, marathons)
conn.commit()

print('Bestand succesvol geladen!')
```
| 8,115
|
46,053,097
|
I have created an API using Python + Flask. When I try to hit the API using Postman or Chrome, it works fine and I am able to reach the API.
On the other hand, when I try to use Python:
```
import requests
requests.get("http://localhost:5050/")
```
I get 407. I guess that our environment's proxy is not allowing me to hit localhost, but due to the LAN settings in IE/Chrome the browser requests went through.
I did try to set proxies and auth in requests, and now I start getting 502 (Bad Gateway). On the API side I can't see any request coming through. What can I do to troubleshoot this?
|
2017/09/05
|
[
"https://Stackoverflow.com/questions/46053097",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6128923/"
] |
According to the [requests module documentation](http://docs.python-requests.org/en/master/user/advanced/#proxies) you can either provide proxy details through the environment variable **HTTP\_PROXY** (in case you use a Linux distribution):
```
$ export HTTP_PROXY="http://corporate-proxy:port"
$ python
>>> import requests
>>> requests.get('http://localhost:5050/')
```
Or provide **proxies** keyword argument to get method directly:
```
import requests
proxies = {
'http': 'http://corporate-proxy:port',
}
requests.get('http://localhost:5050/', proxies=proxies)
```
|
Try
```
import requests
from flask import Flask
from flask_cors import CORS, cross_origin

app = Flask(__name__)
cors = CORS(app, resources={r"/*": {"origins": "*"}})

requests.get("http://localhost:5050/")
```
| 8,116
|
7,641,592
|
I [recently asked a question](https://stackoverflow.com/questions/7626848/maximum-python-object-which-can-be-passed-to-write) regarding how to save large python objects to file. I had previously run into problems converting massive Python dictionaries into string and writing them to file via `write()`. Now I am using pickle. Although it works, the files are incredibly large (> 5 GB). I have little experience in the field of such large files. I wanted to know if it would be faster, or even possible, to zip this pickle file prior to storing it to memory.
|
2011/10/03
|
[
"https://Stackoverflow.com/questions/7641592",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/654789/"
] |
You can compress the data with [bzip2](http://docs.python.org/library/bz2.html):
```
from __future__ import with_statement # Only for Python 2.5
import bz2,json,contextlib
hugeData = {'key': {'x': 1, 'y':2}}
with contextlib.closing(bz2.BZ2File('data.json.bz2', 'wb')) as f:
    json.dump(hugeData, f)
```
Load it like this:
```
from __future__ import with_statement # Only for Python 2.5
import bz2,json,contextlib
with contextlib.closing(bz2.BZ2File('data.json.bz2', 'rb')) as f:
    hugeData = json.load(f)
```
You can also compress the data using [zlib](http://docs.python.org/library/zlib.html) or [gzip](http://docs.python.org/library/gzip.html) with pretty much the same interface. However, both zlib and gzip's compression rates will be lower than the one achieved with bzip2 (or lzma).
|
>
> faster, or even possible, to zip this pickle file prior to [writing]
>
>
>
Of course it's possible, but there's no reason to try to make an explicit zipped copy in memory (it might not fit!) before writing it, when you can *automatically cause it to be zipped as it is written, with built-in standard library functionality* ;)
See <http://docs.python.org/library/gzip.html> . Basically, you create a special kind of stream with
```
gzip.GzipFile("output file name", "wb")
```
and then use it exactly like an ordinary `file` created with `open(...)` (or `file(...)` for that matter).
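For the pickle use case from the question, a minimal sketch of that approach (reusing the `hugeData` dict from the answer above):
```
import gzip
import pickle

hugeData = {'key': {'x': 1, 'y': 2}}

# Write the pickle through a gzip stream; compression happens on the fly
with gzip.GzipFile('data.pkl.gz', 'wb') as f:
    pickle.dump(hugeData, f)

# Read it back the same way
with gzip.GzipFile('data.pkl.gz', 'rb') as f:
    hugeData = pickle.load(f)
```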
| 8,117
|
21,068,471
|
Running the following python script through web site works fine and (as expected) stops the playback of MPD:
```
#!/usr/bin/env python
import subprocess
subprocess.call(["mpc", "stop"])
print ("Content-type: text/plain;charset=utf-8\n\n")
print("Hello")
```
This script however causes an error (playback starts as expected):
```
#!/usr/bin/env python
print("Content-type: text/plain;charset=utf-8\n\n")
print ("Hello")
import subprocess
subprocess.call(["mpc", "play"])
```
The error is:
```
malformed header from script. Bad header=Scorpions - Eye of the tiger -: play.py, referer: http://...
```
Apparently whatever the playback command prints is taken as the header. When run in a terminal, the output looks fine. Why could that be?
|
2014/01/11
|
[
"https://Stackoverflow.com/questions/21068471",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/143211/"
] |
1. You're running your script in some sort of CGI-like environment. I would strongly suggest using a light web framework like Flask or Bottle.
2. `mpc play` is writing to stdout. You need to silence it:
```
import os
import subprocess

with open(os.devnull, 'w') as dev_null:
    subprocess.call(["mpc", "play"], stdout=dev_null)
```
3. For your HTTP headers to be valid, you need to separate them with `\r\n`, not `\n\n`. A combined sketch follows below.
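A combined sketch of the corrected script, under the assumption that it still runs as a plain CGI script (the `mpc play` call comes from the question):
```
#!/usr/bin/env python
import os
import subprocess
import sys

# Headers first, each terminated by \r\n, then a blank line
sys.stdout.write("Content-type: text/plain;charset=utf-8\r\n\r\n")
sys.stdout.write("Hello\n")

# Silence mpc so its output cannot be mistaken for headers
with open(os.devnull, 'w') as dev_null:
    subprocess.call(["mpc", "play"], stdout=dev_null)
```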
|
You need to use `\r\n` line endings.
| 8,126
|
2,700,195
|
I have some data that I would like to save to a MAT file (version 4 or 5, or any version, for that matter). The catch: I want to do this without using MATLAB libraries, since this code will not necessarily run on a machine with MATLAB. My program uses Java and C++, so any existing library in those languages that achieves this could help me out...
I did some research but did not find anything in Java/C++. However, I found that scipy on python achieves this with `mio4.py` or `mio5.py`. I thought about porting this to Java or C++, but it seems a bit beyond my time schedule.
So the question is: are there any libraries in Java or C/C++ that permit saving MAT files without using MATLAB libraries?
Thanks a lot
|
2010/04/23
|
[
"https://Stackoverflow.com/questions/2700195",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/227103/"
] |
C: [matio](http://sourceforge.net/projects/matio/)
Java: [jmatio](http://sourceforge.net/projects/jmatio/)
(I'm really tempted to, so I will, tell you to learn to google)
But really, it's not that hard to write matfiles using `fwrite` if you don't need to handle some of the more complex stuff (nested structs, classes, functions, sparse matrix, etc).
See: <http://www.mathworks.com/access/helpdesk/help/pdf_doc/matlab/matfile_format.pdf>
|
MAT files since version 7.3 are HDF5 based. I recall that they use some rather funny conventions, but you may be able to reverse engineer what you need. There are certainly HDF5 writing libraries for both Java and C++.
Along these lines, Matlab can read/write several standard formats, including HDF5. It may be easiest to write your data in "standard" HDF5 and read it into the desired data structure within Matlab.
| 8,127
|
31,073,212
|
When running `mkvirtualenv test` I get the following error:
```
File "/usr/lib/python3/dist-packages/virtualenv.py", line 2378, in <module>
main()
File "/usr/lib/python3/dist-packages/virtualenv.py", line 830, in main
symlink=options.symlink)
File "/usr/lib/python3/dist-packages/virtualenv.py", line 999, in create_environment
site_packages=site_packages, clear=clear, symlink=symlink))
File "/usr/lib/python3/dist-packages/virtualenv.py", line 1198, in install_python
mkdir(lib_dir)
File "/usr/lib/python3/dist-packages/virtualenv.py", line 451, in mkdir
os.makedirs(path)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 150, in makedirs
makedirs(head, mode)
File "/usr/lib/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 13] Permission denied: 'test'
```
Why is the 'test' virtual environment not created? I did try to `chmod -R 777` the virtualenv folder, but that did not solve it. I do have Python 2.7 and 3.4 installed on Ubuntu 15.04.
|
2015/06/26
|
[
"https://Stackoverflow.com/questions/31073212",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3294412/"
] |
You are likely getting the error because you cannot create the virtualenv folder in the current working directory.
If you do an `ls -ld .` you'll see the output of the current directory you're running the command from, e.g.:
```
➜ ~ ls -ld .
drwxr-xr-x+ 114 tfisher staff 3876 Jun 26 08:46 .
```
and if you do a `whoami`, you'll get the name of your current user.
The interesting bit in the output is typically the first portion of that `ls -ld .` command: `drwxr-xr-x+`. This means "this is a directory, with Read, Write, eXecution for the user, then Read eXecute for the group, and finally Read and eXecute for everyone else."
If you do not have `w`rite permission, you will not be able to create the files and folders that virtualenv needs.
If the current directory is one that you feel that you should personally own, e.g. `/home/musicformellons`, and you have sudo permission, you can rectify this by running:
```
sudo chown `whoami` .
```
The reason why this didn't just simply work is likely because you followed a guide that had you install a "virtualenvwrapper" using sudo permissions.
|
I ran into the same issue. What I found is: check
>
> `echo $WORKON_HOME`
>
>
>
You will find something like ***/home/user/.virtualenvs/extra\_path***.
You just need to remove this extra\_path appended after the ***.virtualenvs*** path
in your ***.bashrc***, then *source* it again and retry *mkvirtualenv*.
| 8,128
|
3,934,777
|
I have a few functions in my code where it makes a lot of sense (seems even mandatory) to use memoization.
I don't want to implement that manually for every function separately. Is there some way (for example [like in Python](http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize)) I can just use an annotation or do something else so I get this automatically on those functions where I want it?
|
2010/10/14
|
[
"https://Stackoverflow.com/questions/3934777",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/133374/"
] |
I don't think there is a language-native implementation of memoization.
But you can implement it easily, as a decorator of your method. You have to maintain a Map: the key of the Map is the parameter, the value is the result.
Here is a simple implementation, for a one-arg method:
```
Map<Integer, Integer> memoizator = new HashMap<Integer, Integer>();
public Integer memoizedMethod(Integer param) {
if (!memoizator.containsKey(param)) {
memoizator.put(param, method(param));
}
return memoizator.get(param);
}
```
|
You could use the [Function](http://guava-libraries.googlecode.com/svn/trunk/javadoc/com/google/common/base/Function.html) interface in Google's [guava](http://code.google.com/p/guava-libraries/) library to easily achieve what you're after:
```
import java.util.HashMap;
import java.util.Map;
import com.google.common.base.Function;
public class MemoizerTest {
/**
* Memoizer takes a function as input, and returns a memoized version of the same function.
*
* @param <F>
* the input type of the function
* @param <T>
* the output type of the function
* @param inputFunction
* the input function to be memoized
* @return the new memoized function
*/
public static <F, T> Function<F, T> memoize(final Function<F, T> inputFunction) {
return new Function<F, T>() {
// Holds previous results
Map<F, T> memoization = new HashMap<F, T>();
@Override
public T apply(final F input) {
// Check for previous results
if (!memoization.containsKey(input)) {
// None exists, so compute and store a new one
memoization.put(input, inputFunction.apply(input));
}
// At this point a result is guaranteed in the memoization
return memoization.get(input);
}
};
}
public static void main(final String[] args) {
// Define a function (i.e. inplement apply)
final Function<Integer, Integer> add2 = new Function<Integer, Integer>() {
@Override
public Integer apply(final Integer input) {
System.out.println("Adding 2 to: " + input);
return input + 2;
}
};
// Memoize the function
final Function<Integer, Integer> memoizedAdd2 = MemoizerTest.memoize(add2);
// Exercise the memoized function
System.out.println(memoizedAdd2.apply(1));
System.out.println(memoizedAdd2.apply(2));
System.out.println(memoizedAdd2.apply(3));
System.out.println(memoizedAdd2.apply(2));
System.out.println(memoizedAdd2.apply(4));
System.out.println(memoizedAdd2.apply(1));
}
}
```
Should print:
```
Adding 2 to: 1
3
Adding 2 to: 2
4
Adding 2 to: 3
5
4
Adding 2 to: 4
6
3
```
You can see that the 2nd time `memoizedAdd2` is called (applied) to the arguments 2 and 1, the computation in `apply` is not actually run; it just fetches the stored results.
| 8,131
|
71,972,703
|
I am trying to do a very simple Python request using `requests.get` but am getting the following error with this code:
```
url = 'https://www.tesco.com/'
status = requests.get(url)
```
The error:
```
requests.exceptions.SSLError: HTTPSConnectionPool(host='www.tesco.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')))
```
Can anyone explain to me how to fix this and more importantly what the error means?
Many Thanks
|
2022/04/22
|
[
"https://Stackoverflow.com/questions/71972703",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10574250/"
] |
Explanation
===========
The error is caused by an invalid or expired [SSL Certificate](https://www.gogetssl.com/wiki/ssl-basics/what-is-ssl-tls/)
When making a GET request to a server such as `www.tesco.com` you have two options, [http](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol) and [https](https://en.wikipedia.org/wiki/HTTPS). In the case of https, the server provides your requestor (your script) with an SSL certificate, which allows you to verify that you are connecting to a legitimate website; it also helps secure and encrypt the data transferred between your script and the server.
Solution
========
Just disable the SSL check
```py
url = 'https://www.tesco.com/'
requests.get(url, verify=False)
```
OR
--
Use Session and Disable the SSL Cert Check
```py
import requests, os
url = 'https://www.tesco.com/'
# Use Session and Disable the SSL Cert Check
session = requests.Session()
session.verify = False
session.trust_env = False
session.get(url=url)
```
[Similar post](https://stackoverflow.com/questions/15445981/how-do-i-disable-the-security-certificate-check-in-python-requests)
Extra Info 1
============
Ensure the date and time are set correctly: the requests library compares your local date and time against the range in which the SSL certificate is valid, and a skewed clock is a common cause of this error.
Extra Info 2
============
You may need to get the latest updated Root CA Certificates installed on your machine [Download Here](https://www.entrust.com/resources/certificate-solutions/tools/root-certificate-downloads)
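If you would rather keep verification enabled, requests also accepts a path to a CA bundle via the same `verify` parameter. A minimal sketch (the bundle path is a placeholder for wherever your root certificates actually live):
```py
import requests

url = 'https://www.tesco.com/'

# Point requests at an explicit CA bundle instead of disabling verification;
# the path below is a placeholder
response = requests.get(url, verify='/etc/ssl/certs/ca-certificates.crt')
print(response.status_code)
```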
|
Paraphrasing a [similar post](https://stackoverflow.com/questions/41287979/cant-access-certain-sites-requests-get-in-python-3) for your specific question.
Response 403 means forbidden, in other words, the website understands the request but doesn't allow access. It could be a security measure to prevent scraping.
As a workaround, you can add a header in your request so that the code acts as if you're accessing it using a web browser.
```
url = "https://www.tesco.com"
headers = {'user-agent': 'Safari/537.36'}
response = requests.get(url, headers=headers)
print(response)
```
You should get response 200.
'user-agent' in the headers makes it seem that you're accessing through a Safari browser.
| 8,140
|
69,751,866
|
I am getting this error while executing a simple **recursion program** in **Python**.
```
RecursionError Traceback (most recent call last)
<ipython-input-19-e831d27779c8> in <module>
4 num = 7
5
----> 6 factorial(num)
<ipython-input-19-e831d27779c8> in factorial(n)
1 def factorial(n):
----> 2 return (n * factorial(n-1))
3
4 num = 7
5
... last 1 frames repeated, from the frame below ...
<ipython-input-19-e831d27779c8> in factorial(n)
1 def factorial(n):
----> 2 return (n * factorial(n-1))
3
4 num = 7
5
RecursionError: maximum recursion depth exceeded
```
**My program is:**
```
def factorial(n):
return (n * factorial(n-1))
num = 7
factorial(num)
```
Please help. Thanks in advance!
|
2021/10/28
|
[
"https://Stackoverflow.com/questions/69751866",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15926850/"
] |
A recursive function has a simple rule to follow.
1. Create an exit condition
2. Call yourself (the function) somewhere.
Your factorial function only calls itself; it never stops under any condition (n just keeps going negative).
Eventually you hit the maximum recursion depth.
You should stop when you hit a certain point. In your example that's when `n == 1`, because 1! = 1.
```
def factorial(n):
if n == 1:
return 1
return (n * factorial(n-1))
```
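For completeness, a sketch that also covers `n == 0` (since 0! = 1 as well) and reproduces the call from the question:
```
def factorial(n):
    # Base case: both 0! and 1! are 1, so the recursion stops here
    if n <= 1:
        return 1
    return n * factorial(n - 1)

print(factorial(7))  # 5040
```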
|
You have to return another value at some point.
Example below:
```
def factorial(n):
if n == 1:
return 1
return (n * factorial(n-1))
```
Otherwise, your recursion will never stop and `n` will run off toward negative infinity.
| 8,141
|
24,023,512
|
I know this is probably not a good style, but I was wondering if it is possible to construct a class when a static method is called
```
class myClass():
def __init__(self):
self.variable = "this worked"
@staticmethod
def test_class(var=myClass().variable):
print self.variable
if "__name__" == "__main__":
myClass.test_class()
```
Right now it returns
```
NameError: name 'myClass' is not defined
```
Here is what I suspect: by default, the Python interpreter scans the class and registers each function; when it registers a function, it evaluates the function's default arguments, so the defaults have to be defined at that point?
|
2014/06/03
|
[
"https://Stackoverflow.com/questions/24023512",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3692553/"
] |
Perhaps the easiest is to turn it into a `classmethod` instead:
```
class myClass(object):
def __init__(self):
self.variable = "this worked"
@classmethod
def test_class(cls):
var = cls().variable
print var
if __name__ == "__main__":
myClass.test_class()
```
See [What is the difference between @staticmethod and @classmethod in Python?](https://stackoverflow.com/questions/136097/what-is-the-difference-between-staticmethod-and-classmethod-in-python)
It is not entirely clear from your question what the use case is; it could well be that there's a better way to do what you're actually trying to do.
|
Yes, the default value for a function argument has to be definable at the point that the function appears, and a class isn't actually finished defining until the end of the "class block." The easiest way to do what you're trying to do is:
```
@staticmethod
def test_class(var=None):
if var is None: var = myClass().variable
print var # I assume this is what you meant to write for this line; otherwise, the function makes no sense.
```
This has the one downside that you can't pass `None` to `myClass.test_class` and get it to print out. If you want to do this, try:
```
@staticmethod
def test_class(*vars):
if len(vars) == 0:
var = myClass().variable
#elif len(vars) > 1:
# Throw an error yelling at the user for calling the function wrong (optional)
else:
var = vars[0]
print var
```
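For reference, a complete runnable version of the first variant above, assembled from the question's class (Python 2 syntax, matching the question):
```
class myClass(object):
    def __init__(self):
        self.variable = "this worked"

    @staticmethod
    def test_class(var=None):
        # Defer the default to call time, when myClass is fully defined
        if var is None:
            var = myClass().variable
        print var

if __name__ == "__main__":
    myClass.test_class()  # prints "this worked"
```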
| 8,142
|
59,867,504
|
I am very new to the Python language and have a small program. It had been working, but something changed and now I can't get it to run. It's having a problem finding 'pyodbc'. I installed the 'pyodbc' package, so I don't understand the error. I am using Python 3.7.6. Thank you for your help!
**pip install pyodbc**
```
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
Requirement already satisfied: pyodbc in c:\users\c113850\appdata\roaming\python\python37\site-packages (4.0.28)
```
**Code:**
```
import requests
import pyodbc
from bs4 import BeautifulSoup
from datetime import datetime
import pytz
import time
import azure.functions
page = requests.get("https://samplepage.html")
if page.status_code == 200:
print(page.status_code)
#print(page.content)
soup = BeautifulSoup(page.content, 'html.parser')
print(soup.title)
rows = soup.find_all('tr')
# for row in rows: # Print all occurrences
# print(row.get_text())
print(rows[0])
print(rows[7])
pjmtime = rows[0].td.get_text()
print("PJM = ",pjmtime)
#dt_string = "Tue Jan 21 18:00:00 EST 2020"
dt_object = datetime.strptime(pjmtime, "%a %b %d %H:%M:%S EST %Y")
print("Timestamp =", dt_object)
eastern=pytz.timezone('US/Eastern')
date_eastern=eastern.localize(dt_object,is_dst=None)
date_utc=date_eastern.astimezone(pytz.utc)
print("UTC =", date_utc)
row = soup.find(text='PRICE').parent.parent
name = row.select('td')[0].get_text()
typed = row.select('td')[1].get_text()
weighted = row.select('td')[2].get_text()
hourly = row.select('td')[3].get_text()
server = 'db.database.windows.net'
database = '...'
username = '...'
password = '...'
driver = '{ODBC Driver 17 for SQL Server}'
cnxn = pyodbc.connect('DRIVER='+driver+';SERVER='+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password)
cursor = cnxn.cursor()
print("insert into [PJMLocationalMarginalPrice] ([Name],[Type],[WeightedAverage],[HourlyIntegrated],[TimeStamp]) values(?,?,?,?,?)",
(name,typed,weighted,hourly,date_utc))
cursor.execute("insert into [PJMLocationalMarginalPrice] ([Name],[Type],[WeightedAverage],[HourlyIntegrated],[TimeStamp]) values (?,?,?,?,?)",
(name,typed,weighted,hourly,date_utc))
cnxn.commit()
else:
print("Error: page not open")
```
**Error:**
```
Traceback (most recent call last):
File "c:/Users/C113850/PycharmProjects/Scraping101/Scraping.py", line 2, in <module>
import pyodbc
ImportError: DLL load failed: The specified module could not be found.
```
Update:
I was looking at the folders under site-packages and noticed that the 'pyodbc' folder is not there, but the 'pyodbc-4.0.28.dist-info' folder is.
[](https://i.stack.imgur.com/wXnrH.png)
|
2020/01/22
|
[
"https://Stackoverflow.com/questions/59867504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3216326/"
] |
If I'm understanding your question correctly and you're looking for how frequently two categories are both 1 in the same row (i.e. pairwise, as @M-- asked), here's how I've done it in the past. I'm sure there's a more graceful way of going about it, though :D
```
library(dplyr)
library(tidyr)
test.df <- structure(list(Type_SunflowerSeeds = c(1L, 1L, 1L, 0L, 0L, 1L,
1L, 0L, 0L, 0L), Type_SafflowerSeeds = c(0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L), Type_Nyjer = c(0L, 0L, 0L, 0L, 0L, 0L, 0L,
0L, 0L, 0L), Type_EconMix = c(1L, 1L, 0L, 1L, 1L, 0L, 1L, 0L,
0L, 0L), Type_PremMix = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L,
1L), Type_Grains = c(0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L),
Type_Nuts = c(0L, 0L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L), Type_Suet = c(1L,
0L, 0L, 0L, 0L, 1L, 0L, 0L, 0L, 0L), Type_SugarWater = c(1L,
0L, 0L, 0L, 1L, 1L, 1L, 0L, 1L, 1L), Type_FruitOrJams = c(0L,
0L, 0L, 1L, 0L, 0L, 0L, 0L, 0L, 0L), Type_Mealworms = c(0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L), Type_Corn = c(0L, 0L,
0L, 0L, 0L, 0L, 0L, 1L, 0L, 0L), Type_BarkOrPeanutButter = c(0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L), Type_Scraps = c(1L,
1L, 1L, 1L, 0L, 0L, 0L, 0L, 0L, 0L), Type_Bread = c(0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L, 0L), Type_Other = c(0L, 0L, 0L,
0L, 0L, 0L, 0L, 0L, 0L, 0L), total = c(5, 3, 3, 4, 2, 3,
3, 1, 1, 2)), row.names = c(NA, 10L), class = "data.frame")
test.df %>%
mutate(food.id = 1:n()) %>%
gather(key = "type1", value = "val", -food.id, -total) %>% #create an ID column for each row
filter(val==1) %>%
select(food.id, type1) %>% #now we have a data.frame with one column for food.id and
# one column for every food.type it is associated with
left_join( # this left join is essentially doing the same thing we did before
test.df %>%
mutate(food.id = 1:n()) %>%
gather(key = "type2", value = "val", -food.id, -total) %>%
filter(val==1) %>%
select(food.id, type2),
by = c("food.id") # now we're matching each food with all of its associated types
) %>%
mutate(type1.n = as.numeric(factor(type1)), # quick way of making sure we're not counting duplicates
# (e.g. if type1 = Type_SunflowerSeeds and type2 = Type_SafflowerSeeds, that's the same if they were switched)
type2.n = as.numeric(factor(type2))) %>%
filter(type1 > type2) %>% # this filter step takes care of the flip flopping issue
group_by(type1, type2) %>%
summarise( #finally, count the combinations/pairwise values
n.times = n()
) %>%
ungroup() %>%
arrange(desc(n.times), type1, type2)
```
With the output of:
```
type1 type2 n.times
<chr> <chr> <int>
1 Type_Scraps Type_EconMix 3
2 Type_SugarWater Type_EconMix 3
3 Type_SunflowerSeeds Type_EconMix 3
4 Type_SunflowerSeeds Type_Scraps 3
5 Type_SunflowerSeeds Type_SugarWater 3
6 Type_Scraps Type_Nuts 2
7 Type_SugarWater Type_Suet 2
8 Type_SunflowerSeeds Type_Suet 2
9 Type_FruitOrJams Type_EconMix 1
10 Type_Nuts Type_EconMix 1
11 Type_Nuts Type_FruitOrJams 1
12 Type_Scraps Type_FruitOrJams 1
13 Type_Suet Type_EconMix 1
14 Type_Suet Type_Scraps 1
15 Type_SugarWater Type_PremMix 1
16 Type_SugarWater Type_Scraps 1
17 Type_SunflowerSeeds Type_Nuts 1
```
To extend this and do a three-way combination count, you can follow this code. I've also added some additional comments to walk through what's going on:
```
# create a baseline data.frame with food.id and every food type that it matches
food.type.long.df <- test.df %>%
mutate(food.id = 1:n()) %>%
gather(key = "type1", value = "val", -food.id, -total) %>%
filter(val==1) %>%
select(food.id, type1) %>%
arrange(food.id)
# join the baseline data.frame to itself to see all possible combinations of food types
# note: this includes repeated types like type1=Type_Corn and type2=Type_Corn
# this also includes rows where the types are simply flip-flopped types
# ex. Row 2 is type1=Type_SunflowerSeeds and type2 = Type_EconMix
# but Row 6 is type1=Type_EconMix and type2 = Type_SunflowerSeeds - we don't want to count this combinations twice
food.2types.df <- food.type.long.df %>%
left_join(
select(food.type.long.df, food.id, type2 = type1),
by = "food.id"
) %>%
arrange(food.id)
# let's add the third type as well; as with before, the same issues are in this df but we'll fix the duplicates
# and flip flops later
food.3types.df <- food.2types.df %>%
left_join(
select(food.type.long.df, food.id, type3 = type1),
by = "food.id"
) %>%
arrange(food.id)
food.3types.df.fixed <- food.3types.df %>%
distinct() %>%
mutate(type1.n = as.numeric(factor(type1)), # assign each type1 a number (in alphabetical order)
type2.n = as.numeric(factor(type2)), # assign each type2 a number (in alphabetical order)
type3.n = as.numeric(factor(type3))) %>% # assign each type3 a number (in alphabetical order)
filter(type1 > type2) %>% # to remove duplicates and flip-flopped rows for types 1 and 2, use a strict inequality
filter(type2 > type3) # to remove duplicates and flip-flopped rows for types 2 and 3, use a strict inequality
food.3type.combination.count <- food.3types.df.fixed %>%
group_by(type1, type2, type3) %>% # group by all three types you want to count
summarise(
n.times = n()
) %>%
ungroup() %>%
arrange(desc(n.times), type1, type2, type3)
```
With the output:
```
type1 type2 type3 n.times
<chr> <chr> <chr> <int>
1 Type_SunflowerSeeds Type_Scraps Type_EconMix 2
2 Type_SunflowerSeeds Type_SugarWater Type_EconMix 2
3 Type_SunflowerSeeds Type_SugarWater Type_Suet 2
4 Type_Nuts Type_FruitOrJams Type_EconMix 1
5 Type_Scraps Type_FruitOrJams Type_EconMix 1
6 Type_Scraps Type_Nuts Type_EconMix 1
7 Type_Scraps Type_Nuts Type_FruitOrJams 1
8 Type_Suet Type_Scraps Type_EconMix 1
9 Type_SugarWater Type_Scraps Type_EconMix 1
10 Type_SugarWater Type_Suet Type_EconMix 1
11 Type_SugarWater Type_Suet Type_Scraps 1
12 Type_SunflowerSeeds Type_Scraps Type_Nuts 1
13 Type_SunflowerSeeds Type_Suet Type_EconMix 1
14 Type_SunflowerSeeds Type_Suet Type_Scraps 1
15 Type_SunflowerSeeds Type_SugarWater Type_Scraps 1
```
|
You can use arules, which is geared toward this kind of analysis. You can read more about some of its uses [here](https://cran.r-project.org/web/packages/arules/vignettes/arules.pdf)
So this is your data:
```
df = structure(list(Type_SunflowerSeeds = c(1L, 1L, 1L, 0L, 0L), Type_SafflowerSeeds = c(0L,
0L, 0L, 0L, 0L), Type_Nyjer = c(0L, 0L, 0L, 0L, 0L), Type_EconMix = c(1L,
1L, 0L, 1L, 1L), Type_PremMix = c(0L, 0L, 0L, 0L, 0L), Type_Grains = c(0L,
0L, 0L, 0L, 0L), Type_Nuts = c(0L, 0L, 1L, 1L, 0L), Type_Suet = c(1L,
0L, 0L, 0L, 0L), Type_SugarWater = c(1L, 0L, 0L, 0L, 1L), Type_FruitOrJams = c(0L,
0L, 0L, 1L, 0L), Type_Mealworms = c(0L, 0L, 0L, 0L, 0L), Type_Corn = c(0L,
0L, 0L, 0L, 0L), Type_BarkOrPeanutButter = c(0L, 0L, 0L, 0L,
0L), Type_Scraps = c(1L, 1L, 1L, 1L, 0L), Type_Bread = c(0L,
0L, 0L, 0L, 0L), Type_Other = c(0L, 0L, 0L, 0L, 0L), total = c(5,
3, 3, 4, 2)), row.names = c(NA, 5L), class = "data.frame")
```
We turn it into a matrix and convert it to a `transactions` object; I omit the last column because the total isn't needed:
```
library(arules)
m = as(as.matrix(df[,-ncol(df)]),"transactions")
summary(m)
#gives you a lot of information about this data
# now we get a co-occurence matrix
counts = crossTable(m)
```
To get the data frame you described, you need to use `dplyr` and `tidyr`:
```
# convert to data.frame
counts[upper.tri(counts)]=NA
diag(counts)=NA
data.frame(counts) %>%
# add rownames as item1
tibble::rownames_to_column("item1") %>%
# make it long format, like you wanted
pivot_longer(-item1,names_to="item2") %>%
# remove rows where item1 == item2
filter(!is.na(value)) %>%
# sort
arrange(desc(value))
# A tibble: 120 x 3
item1 item2 value
<chr> <chr> <int>
1 Type_Scraps Type_SunflowerSeeds 3
2 Type_Scraps Type_EconMix 3
3 Type_EconMix Type_SunflowerSeeds 2
4 Type_SugarWater Type_EconMix 2
```
The above can be simplified by using `apriori` in arules:
```
# number of combinations
N = 2
# create apriori object
rules = apriori(m,parameter=list(maxlen=N,minlen=N,conf =0.01,support=0.01))
gi <- generatingItemsets(rules)
d <- which(duplicated(gi))
rules = sort(rules[-d])
# output results
data.frame(
lhs=labels(lhs(rules)),
rhs=labels(rhs(rules)),
count=quality(rules)$count)
lhs rhs count
1 {Type_SunflowerSeeds} {Type_Scraps} 3
2 {Type_EconMix} {Type_Scraps} 3
3 {Type_SugarWater} {Type_EconMix} 2
4 {Type_Nuts} {Type_Scraps} 2
5 {Type_SunflowerSeeds} {Type_EconMix} 2
6 {Type_FruitOrJams} {Type_Nuts} 1
```
For co-occurrences of 3, simply change N above to 3.
| 8,143
|
55,483,057
|
I have the following task in one of my ansible playbook:
```
- name: Generate vault token
uri:
url: "{{vault_address}}/v1/auth/github/login"
method: POST
body: "{ \"token\": \"{{ token }}\" }"
validate_certs: no
body_format: json
register: vault_token
- name: debug vault
debug:
msg: "{{vault_address}}/v1/{{vault_path}}/db2w-flex/{{region_name}}"
- name: Create secret in vault
uri:
url: "{{vault_address}}/v1/{{vault_path}}/db2w-flex/{{region_name}}"
method: POST
body: '{ "secret_access_key": "{{ (new_access_key.stdout |from_json).AccessKey.SecretAccessKey }}", "access_key_id": "{{ (new_access_key.stdout |from_json).AccessKey.AccessKeyId }}" }'
validate_certs: no
status_code: 204
headers:
X-VAULT-TOKEN: '{{ vault_token.json.auth.client_token }}'
body_format: json
```
This task keeps failing with:
```
fatal: [localhost]: FAILED! => {"cache_control": "no-store", "changed": false, "content": "{\"errors\":[\"missing client token\"]}\n", "content_length": "36", "content_type": "application/json", "date": "Tue, 02 Apr 2019 20:16:03 GMT", "failed": true, "json": {"errors": ["missing client token"]}, "msg": "Status code was not [204]", "redirected": false, "status": 400}
```
The same task works fine on another host (a Kubernetes pod). I am running this on a new system (a Docker container). The client token is generated properly. I am able to do a POST via curl, so there are no network issues in the container. Is there any way to debug which headers are being passed, which URL is being hit, etc.? I don't know what is missing.
I am not using the Vault CLI; it's all HTTP based.
After adding -vvvv here is the log:
```
TASK [Create secret in vault] **************************************************
task path: /deployment/updateVault.yml:16
ESTABLISH LOCAL CONNECTION FOR USER: root
127.0.0.1 EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1554241833.81-120109582683210 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1554241833.81-120109582683210 )" )
127.0.0.1 PUT /tmp/tmpDgx1F_ TO /root/.ansible/tmp/ansible-tmp-1554241833.81-120109582683210/uri
127.0.0.1 EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /root/.ansible/tmp/ansible-tmp-1554241833.81-120109582683210/uri; rm -rf "/root/.ansible/tmp/ansible-tmp-1554241833.81-120109582683210/" > /dev/null 2>&1
fatal: [localhost]: FAILED! => {"cache_control": "no-store", "changed": false, "content": "{\"errors\":[\"missing client token\"]}\n", "content_length": "36", "content_type": "application/json", "date": "Tue, 02 Apr 2019 21:50:34 GMT", "failed": true, "invocation": {"module_args": {"backup": null, "body": {"access_key_id": "xxxxxxxxxxxx", "secret_access_key": "5ovB684peTr8YpnNMQBn+xxxxxxxxxxx+"}, "body_format": "json", "content": null, "creates": null, "delimiter": null, "dest": null, "directory_mode": null, "follow": false, "follow_redirects": "safe", "force": null, "force_basic_auth": false, "group": null, "headers": {"X-VAULT-TOKEN": "eee90f1c-3c9e-edaf-32e6-b6cexxxxx"}, "method": "POST", "mode": null, "owner": null, "password": null, "regexp": null, "remote_src": null, "removes": null, "return_content": false, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "status_code": ["204"], "timeout": 30, "url": "https://vserv-us.sos.ibm.com:8200/v1/generic/user/akanksha-jain/db2w-flex/us", "user": null, "validate_certs": false}, "module_name": "uri"}, "json": {"errors": ["missing client token"]}, "msg": "Status code was not [204]", "redirected": false, "status": 400}
```
I tried running a curl with all the above details from the same pod and it went fine.
|
2019/04/02
|
[
"https://Stackoverflow.com/questions/55483057",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5996587/"
] |
Add `-vvvv` to your command line to debug.
As you specified `body_format: json`, you can simplify your `body` part:
```
- name: Generate vault token
uri:
url: "{{vault_address}}/v1/auth/github/login"
method: POST
body:
token: mytoken
validate_certs: no
body_format: json
```
|
I was able to get past this issue by upgrading to Ansible version `2.7.9`; I was on `2.0.0.2`.
| 8,144
|
11,639,577
|
I installed oauth2 by just downloading tar.gz package and doing `python setup.py install`. However I'm getting this error
```
bash-3.2$ python
Python 2.7.1 (r271:86832, Jul 31 2011, 19:30:53)
[GCC 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2335.15.00)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import oauth2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named oauth2
>>>
```
The path to oauth2 is in PYTHONPATH (so that shouldn't be the issue) as I added this line to ~/.bashrc:
```
PYTHOHPATH=$PYTHONPATH:/Users/me/Downloads/oauth2-1.5.211/
```
However, when I do this:
```
bash-3.2$ cd /System/Library/Frameworks/Python.framework/Versions/2.7/
bash-3.2$ ls
Extras Headers Mac Python Resources _CodeSignature bin include lib
bash-3.2$ Python
Python 2.7.1 (r271:86882M, Nov 30 2010, 09:39:13)
[GCC 4.0.1 (Apple Inc. build 5494)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import oauth2
>>>
```
it works just fine. Any idea how I should install oauth2 to avoid ImportError from `python`?
P/S: this is the symlink for the `python` command:
```
python -> ../../System/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7
```
|
2012/07/24
|
[
"https://Stackoverflow.com/questions/11639577",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/730403/"
] |
I don't have an answer, but I have some general suggestions:
Run `python setup.py install` with the same python that you intend to use it from (in your case one is capitalised, the other is not).
I always `export` my bashrc variables to ensure they are global, but I am not sure that is your issue here.
When running scripts in the pwd, always run them with `./`. In your case run python as `./Python` to have confidence that you are running the executable you think you are running.
Check your spelling of PYTHONPATH. If you think you have it right, do `import sys; print('\n'.join(sys.path))` from within your python session and ensure that the appropriate directory is there.
|
It looks like you have two different versions of Python installed, and one of them you launched using `Python` as opposed to `python`.
Since your second example worked, it looks like you've installed oauth2 under `Python`.
| 8,145
|
34,579,327
|
I am receiving this error in Python 3.5.1.
>
> json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
>
>
>
Here is my code:
```
import json
import urllib.request
connection = urllib.request.urlopen('http://python-data.dr-chuck.net/comments_220996.json')
js = connection.read()
print(js)
info = json.loads(str(js))
```
[](https://i.stack.imgur.com/sfOl3.png)
|
2016/01/03
|
[
"https://Stackoverflow.com/questions/34579327",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4679487/"
] |
If you look at the output you receive from `print()` and also in your Traceback, you'll see the value you get back is not a string, it's a bytes object (prefixed by `b`):
```none
b'{\n "note":"This file .....
```
If you fetch the URL using a tool such as `curl -v`, you will see that the content type is
```none
Content-Type: application/json; charset=utf-8
```
So it's JSON, encoded as UTF-8, and Python is considering it a byte stream, not a simple string. In order to parse this, you need to convert it into a string first.
Change the last line of code to this:
```py
info = json.loads(js.decode("utf-8"))
```
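As a side note, since Python 3.6 `json.loads` also accepts `bytes` directly (it detects the UTF-8/16/32 encoding itself), so on newer interpreters the decode step can be skipped, or the response object can be handed straight to `json.load`:
```py
import json
import urllib.request

connection = urllib.request.urlopen('http://python-data.dr-chuck.net/comments_220996.json')
info = json.load(connection)  # works on Python 3.6+, where bytes input is accepted
```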
|
In my case, some characters like `, : " ' { } [ ]` may corrupt the JSON format, so wrap `json.loads(s)` in a `try`/`except` to validate your input.
| 8,146
|
25,937,443
|
In Python, I have three lists containing x and y coordinates. Each list contains 128 points. How can I find the closest three points in an efficient way?
This is my working python code but it isn't efficient enough:
```
def findclosest(c1, c2, c3):
mina = 999999999
for i in c1:
for j in c2:
for k in c3:
# calculate sum of distances between points
d = xy3dist(i,j,k)
if d < mina:
                    mina = d
    return mina
def xy3dist(a, b, c):
l1 = math.sqrt((a[0]-b[0]) ** 2 + (a[1]-b[1]) ** 2 )
l2 = math.sqrt((b[0]-c[0]) ** 2 + (b[1]-c[1]) ** 2 )
l3 = math.sqrt((a[0]-c[0]) ** 2 + (a[1]-c[1]) ** 2 )
return l1+l2+l3
```
Any idea how this can be done using numpy?
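For reference, a sketch of the same brute-force computation expressed with numpy broadcasting (the function name is my own; this trades the triple loop for three pairwise distance matrices):
```
import numpy as np

def findclosest_np(c1, c2, c3):
    a, b, c = np.asarray(c1), np.asarray(c2), np.asarray(c3)
    # Pairwise Euclidean distances between the point sets
    d_ab = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (n1, n2)
    d_bc = np.linalg.norm(b[:, None, :] - c[None, :, :], axis=-1)  # (n2, n3)
    d_ac = np.linalg.norm(a[:, None, :] - c[None, :, :], axis=-1)  # (n1, n3)
    # Summed distance for every (i, j, k) triple, shape (n1, n2, n3)
    total = d_ab[:, :, None] + d_bc[None, :, :] + d_ac[:, None, :]
    return total.min()
```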
|
2014/09/19
|
[
"https://Stackoverflow.com/questions/25937443",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4058928/"
] |
As written, this is problematic: you are trying to write to a vector for which you have not yet allocated memory.
Option 1 - Resize your vectors ahead of time
```
vector< vector<int> > matrix;
cout << "Filling matrix with test numbers.";
matrix.resize(4); // resize top level vector
for (int i = 0; i < 4; i++)
{
matrix[i].resize(4); // resize each of the contained vectors
for (int j = 0; j < 4; j++)
{
matrix[i][j] = 5;
}
}
```
Option 2 - Size your vector when you declare it
```
vector<vector<int>> matrix(4, vector<int>(4));
```
Option 3 - Use `push_back` to resize the vector as needed.
```
vector< vector<int> > matrix;
cout << "Filling matrix with test numbers.";
for (int i = 0; i < 4; i++)
{
vector<int> temp;
for (int j = 0; j < 4; j++)
{
temp.push_back(5);
}
matrix.push_back(temp);
}
```
|
You have not allocated any space for your 2D vector, so in your current code you are trying to access memory that does not belong to your program's address space. This results in a segmentation fault.
try:
```
vector<vector<int> > matrix(4, vector<int>(4));
```
If you want to give all elements the same value, you can try:
```
vector<vector<int> > matrix(4, vector<int>(4,5)); // all values are now 5
```
| 8,147
|
14,510,286
|
I'm currently writing an application which allows the user to extend it via a 'plugin' type architecture. They can write additional python classes based on a BaseClass object I provide, and these are loaded against various application signals. The exact number and names of the classes loaded as plugins is unknown before the application is started, but are only loaded once at startup.
During my research into the best way to tackle this I've come up with two common solutions.
**Option 1 - Roll your own using imp, pkgutil, etc.**
See for instance, [this answer](https://stackoverflow.com/questions/2267984/dynamic-class-loading-in-python-2-6-runtimewarning-parent-module-plugins-not) or [this one](https://stackoverflow.com/questions/301134/dynamic-module-import-in-python).
**Option 2 - Use a plugin manager library**
Randomly picking a couple
* [straight.plugin](https://github.com/ironfroggy/straight.plugin)
* [yapsy](http://yapsy.sourceforge.net/)
* [this approach](http://martyalchin.com/2008/jan/10/simple-plugin-framework/)
My question is: on the proviso that the application must be restarted in order to load new plugins, is there any benefit to the above methods over something inspired by [this SO answer](https://stackoverflow.com/questions/1796180/python-get-list-of-all-classes-within-current-module) and [this one](https://stackoverflow.com/a/8093671/233608), such as:
```
import inspect
import sys
import my_plugins
def predicate(c):
# filter to classes
return inspect.isclass(c)
def load_plugins():
for name, obj in inspect.getmembers(sys.modules['my_plugins'], predicate):
obj.register_signals()
```
Are there any disadvantages to this approach compared to the ones above? (other than all the plugins must be in the same file) Thanks!
**EDIT**
Comments request further information... the only additional thing I can think to add is that the plugins use the [blinker](http://discorporate.us/projects/Blinker/) library to provide signals that they subscribe to. Each plugin may subscribe to different signals of different types and hence must have its own specific "register" method.
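For context, a sketch of what one such plugin's `register_signals` might look like with blinker (the signal name is illustrative only, and `BaseClass` is the base class mentioned in the question):
```
from blinker import signal

class MyPlugin(BaseClass):
    def register_signals(self):
        # Subscribe this plugin to an application signal;
        # the signal name here is made up for the example
        signal('document-saved').connect(self.on_saved)

    def on_saved(self, sender, **kwargs):
        print('document saved by %r' % sender)
```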
|
2013/01/24
|
[
"https://Stackoverflow.com/questions/14510286",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/233608/"
] |
The [metaclass approach](http://martyalchin.com/2008/jan/10/simple-plugin-framework/) is useful for this issue in Python < 3.6 (see @quasoft's answer for Python 3.6+). It is very simple, acts automatically on any imported module, and allows complex logic to be applied to plugin registration with very little effort.
The [metaclass](https://stackoverflow.com/questions/100003/what-is-a-metaclass-in-python) approach works like the following:
1) A custom `PluginMount` metaclass is defined which maintains a list of all plugins
2) A `Plugin` class is defined which sets `PluginMount` as its metaclass
3) When a class deriving from `Plugin` (for instance `MyPlugin`) is imported, it triggers the `__init__` method on the metaclass. This registers the plugin and performs any application-specific logic and event subscription.
Alternatively, if you put the `PluginMount.__init__` logic in `PluginMount.__new__`, it is called whenever a new instance of a `Plugin`-derived class is created.
```
class PluginMount(type):
"""
A plugin mount point derived from:
http://martyalchin.com/2008/jan/10/simple-plugin-framework/
Acts as a metaclass which creates anything inheriting from Plugin
"""
def __init__(cls, name, bases, attrs):
"""Called when a Plugin derived class is imported"""
if not hasattr(cls, 'plugins'):
# Called when the metaclass is first instantiated
cls.plugins = []
else:
# Called when a plugin class is imported
cls.register_plugin(cls)
def register_plugin(cls, plugin):
"""Add the plugin to the plugin list and perform any registration logic"""
# create a plugin instance and store it
# optionally you could just store the plugin class and lazily instantiate
instance = plugin()
# save the plugin reference
cls.plugins.append(instance)
# apply plugin logic - in this case connect the plugin to blinker signals
# this must be defined in the derived class
instance.register_signals()
```
Then a base plugin class which looks like:
```
class Plugin(object):
"""A plugin which must provide a register_signals() method"""
__metaclass__ = PluginMount
```
Finally, an actual plugin class would look like the following:
```
class MyPlugin(Plugin):
def register_signals(self):
print "Class created and registering signals"
def other_plugin_stuff(self):
print "I can do other plugin stuff"
```
Plugins can be accessed from any python module that has imported `Plugin`:
```
for plugin in Plugin.plugins:
plugin.other_plugin_stuff()
```
See [the full working example](https://gist.github.com/will-hart/5899567)
|
The approach from will-hart was the most useful one to me!
Since I needed more control, I wrapped the plugin base class in a function like:
```
def get_plugin_base(name='Plugin',
cls=object,
metaclass=PluginMount):
def iter_func(self):
for mod in self._models:
yield mod
bases = not isinstance(cls, tuple) and (cls,) or cls
class_dict = dict(
_models=None,
session=None
)
class_dict['__iter__'] = iter_func
return metaclass(name, bases, class_dict)
```
and then:
```
from plugin import get_plugin_base
Plugin = get_plugin_base()
```
This allows to add additional baseclasses or switching to another metaclass.
| 8,150
|
1,933,217
|
I'm looking for a way to script a transparent forward proxy such as the ones that users point their browsers to in proxy settings.
I've discovered a distinct tradeoff in forward proxies between scriptability and robustness. For example, there are countless proxies developed in [Ruby](http://github.com/whymirror/mousehole) and [Python](http://proxies.xhaus.com/python/) that allow you to inspect each request/response and log, modify, or filter at will... however, these either fail to proxy everything needed or crash after 20 minutes of use.
On the other hand, I suspect that Squid and Apache are quite robust and stable; however, for the life of me I can't determine how to develop dynamic behavior through scripting. Ultimately I would like to set quotas and dynamically filter on those quotas. Part of me feels like mixing [mod\_proxy](http://httpd.apache.org/docs/1.3/mod/mod_proxy.html) and mod\_perl could allow interesting dynamic proxies, but it's hard to know where to begin or whether it's even possible.
Please advise.
|
2009/12/19
|
[
"https://Stackoverflow.com/questions/1933217",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/143725/"
] |
Squid and Apache both have mechanisms to call external scripts for allow/deny decisions per-request. This allows you to use either for their proxy engines, but call your external script per request for processing of arbitrary complexity. Your code only has to manage the business logic, not the heavy lifting.
In Apache, I've never used `mod_proxy` in this way, but I have used `mod_rewrite`. mod\_rewrite also allows you to proxy requests. The [`RewriteMap`](http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html#rewritemap) directive allows you to pass the decision to an external script:
>
> MapType: prg, MapSource: Unix filesystem path to valid regular file
>
>
> Here the source is a program, not a map file. To create it you can use a language of your choice, but the result has to be an executable program (either object-code or a script with the magic cookie trick '#!/path/to/interpreter' as the first line).
>
>
> This program is started once, when the Apache server is started, and then communicates with the rewriting engine via its stdin and stdout file-handles. For each map-function lookup it will receive the key to lookup as a newline-terminated string on stdin. It then has to give back the looked-up value as a newline-terminated string on stdout or the four-character string ``NULL'' if it fails (i.e., there is no corresponding value for the given key).
>
>
>
With Squid, you can get similar functionality via the [`external_acl_type`](http://www.visolve.com/squid/squid30/externalsupport.php#external_acl_type) directive:
>
> This tag defines how the external acl classes using a helper program should look up the status.
>
>
>
g'luck!
|
If you're looking for a Perl solution, then take a look at [`HTTP::Proxy`](http://search.cpan.org/dist/HTTP-Proxy/)
Not sure of any mod\_perl solutions though. [CPAN](http://search.cpan.org) does bring up [`Apache::Proxy`](http://search.cpan.org/dist/Apache-Proxy/) and Googling brings up [MyProxy](http://sourceforge.net/projects/myproxy/). However note, both of these are a bit old so YMMV but you may find them a useful leg up.
| 8,152
|
29,397,839
|
I am SSHed into a remote machine and I do not have rights to download python packages but I want to use 3rd party applications for my project. I found `cx_freeze` but I'm not sure if that is what I need.
What I want to achieve is to be able to run different parts of my project (there are `main` entry points everywhere) with command-line arguments on the remote machine. My project relies on a few 3rd-party Python packages. I'm not sure how to get around this, as I cannot `pip install` and am not a sudoer; I can SCP files to the remote machine, though.
|
2015/04/01
|
[
"https://Stackoverflow.com/questions/29397839",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1815710/"
] |
When you pass an primitive array such as `char[]` to `Arrays.asList`, that method can't return a `List<char>`, because primitive types aren't allowed as type arguments. But it can and does produce a `List<char[]>`. Your random `char` is never equal to the single `char[]` inside the `List`, so any duplicate `char` is allowed. If you use a `Character[]` instead of the `char[]` for `cipherAlpha`, and change the return type of the method to `Character[]`, then `Arrays.asList` will infer the type argument `Character` correctly, allowing for your duplicate check to work correctly.
Second, `nextInt(25)` will generate a random index between `0` and `24`, not `25`. You can use `ALPHABET.length`, which is 26 here. With the first change but without this change, you will only have 25 distinct characters, and you will never find a 26th distinct character, looping forever.
|
Add your alphabet to an `ArrayList` and remove the selected element on each iteration of your while loop. Then update your `rand.nextInt` like:
```
rand.nextInt(AlphabetList.size());
```
And your ALPHABET like:
```
List<Character> AlphabetList = Arrays.asList('a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j','k', 'l', 'm', 'n', 'o', 'p', 'r', 's', 't', 'u', 'v','w', 'x', 'y', 'z', ' ');
```
| 8,157
|
70,702,139
|
I am building a snap to test integration of a python script and a python SDK with snapcraft and there appears to be a conflict when two python 'parts' are built in the same snap.
What is the best way to build a snap with multiple python modules?
I have a simple script which imports the SDK and then prints some information. I also have the python SDK library (<https://help.iotconnect.io/documentation/sdk-reference/device-sdks-flavors/download-python-sdk/>) in a different folder.
I have defined the two parts, and each one can be built stand alone (snapcraft build PARTNAME), however it seems the python internals are conflicting at the next step of 'staging' them together.
tree output of structure
```
tree
.
├── basictest
│ ├── basictest.py
│ ├── __init__.py
│ └── setup.py
├── iotconnect-sdk-3.0.1
│ ├── iotconnect
│ │ ├── assets
│ │ │ ├── config.json
│ │ │ └── crt.txt
│ │ ├── client
│ │ │ ├── httpclient.py
│ │ │ ├── __init__.py
│ │ │ ├── mqttclient.py
│ │ │ └── offlineclient.py
│ │ ├── common
│ │ │ ├── data_evaluation.py
│ │ │ ├── infinite_timer.py
│ │ │ ├── __init__.py
│ │ │ └── rule_evaluation.py
│ │ ├── __init__.py
│ │ ├── IoTConnectSDKException.py
│ │ ├── IoTConnectSDK.py
│ │ └── __pycache__
│ │ ├── __init__.cpython-38.pyc
│ │ └── IoTConnectSDK.cpython-38.pyc
│ ├── iotconnect_sdk.egg-info
│ │ ├── dependency_links.txt
│ │ ├── not-zip-safe
│ │ ├── PKG-INFO
│ │ ├── requires.txt
│ │ ├── SOURCES.txt
│ │ └── top_level.txt
│ ├── PKG-INFO
│ ├── README.md
│ ├── setup.cfg
│ └── setup.py
└── snap
└── snapcraft.yaml
9 directories, 30 files
```
snapcraft.yaml
```
name: basictest
base: core20
version: '0.1'
summary: Test snap to verify integration with python SDK
description: |
  Test snap to verify integration with python SDK
grade: devel
confinement: devmode
apps:
basictest:
command: bin/basictest
parts:
lib-basictest:
plugin: python
source: ./basictest/
after: [lib-pythonsdk]
disable-parallel: true
lib-pythonsdk:
plugin: python
source: ./iotconnect-sdk-3.0.1/
```
Running 'snapcraft' shows tons of errors which look like conflicts between the two 'parts' related to the internals of python.
snapcraft output
```
snapcraft
Launching a VM.
Skipping pull lib-pythonsdk (already ran)
Skipping pull lib-basictest (already ran)
Skipping build lib-pythonsdk (already ran)
Skipping build lib-basictest (already ran)
Failed to stage: Parts 'lib-pythonsdk' and 'lib-basictest' have the following files, but with different contents:
bin/activate
bin/activate.csh
bin/activate.fish
lib/python3.8/site-packages/_distutils_hack/__pycache__/__init__.cpython-38.pyc
lib/python3.8/site-packages/_distutils_hack/__pycache__/override.cpython-38.pyc
lib/python3.8/site-packages/pip-21.3.1.dist-info/RECORD
lib/python3.8/site-packages/pip/__pycache__/__init__.cpython-38.pyc
lib/python3.8/site-packages/pip/__pycache__/__main__.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/__pycache__/__init__.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/__pycache__/build_env.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/__pycache__/cache.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/__pycache__/configuration.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/__pycache__/exceptions.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/__pycache__/main.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/__pycache__/pyproject.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/__pycache__/self_outdated_check.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/__pycache__/wheel_builder.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/cli/__pycache__/__init__.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/cli/__pycache__/autocompletion.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/cli/__pycache__/base_command.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/cli/__pycache__/cmdoptions.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/cli/__pycache__/command_context.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/cli/__pycache__/main_parser.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/cli/__pycache__/parser.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/cli/__pycache__/progress_bars.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/cli/__pycache__/req_command.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/cli/__pycache__/spinners.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/cli/__pycache__/status_codes.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/__init__.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/cache.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/check.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/completion.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/configuration.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/debug.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/download.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/freeze.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/hash.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/help.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/index.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/install.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/list.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/search.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/show.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/uninstall.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/commands/__pycache__/wheel.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/__init__.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/base.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/installed.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/sdist.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/distributions/__pycache__/wheel.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/index/__pycache__/__init__.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/index/__pycache__/collector.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/index/__pycache__/package_finder.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/index/__pycache__/sources.cpython-38.pyc
lib/python3.8/site-packages/pip/_internal/locations/__pycache__/__init__.cpython-38.pyc
... Tons more removed
Snapcraft offers some capabilities to solve this by use of the following keywords:
- `filesets`
- `stage`
- `snap`
- `organize`
To learn more about these part keywords, run `snapcraft help plugins`.
Run the same command again with --debug to shell into the environment if you wish to introspect this failure.
```
**Main question**
What is the best way to build a snap with multiple python modules?
|
2022/01/13
|
[
"https://Stackoverflow.com/questions/70702139",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/17927115/"
] |
Looks like your problem is that you are trying to run `python main.py` from within the Python interpreter, which is why you're seeing that traceback.
Make sure you're out of the interpreter:
```
exit()
```
Then run the **python main.py** command from bash or command prompt or whatever.
|
Invoke python scripts like this:
```
PS C:\Users\sween\Desktop> python ./a.py
```
Not like this:
```
PS C:\Users\sween\Desktop> python
Python 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> ./a.py
File "<stdin>", line 1
./a.py
^
SyntaxError: invalid syntax
```
The three arrows `>>>` indicate a place to write Python code, not filenames or paths.
| 8,159
|
28,780,489
|
When I try to run the cron job in Django using the command below:
```
python manage.py runcrons
```
it shows the error below:
```
$ python manage.py runcrons
No handlers could be found for logger "django_cron"
```
Does any one have any idea about this error? Any help is appreciated.
|
2015/02/28
|
[
"https://Stackoverflow.com/questions/28780489",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4582293/"
] |
It is kind of given in the error you get. You are missing a handler for the "django\_cron" logger. See for example <https://stackoverflow.com/a/7048543/1197616>. Also have a look at the docs for Django, <https://docs.djangoproject.com/en/dev/topics/logging/>.
|
Actually, the *django-cron* library does not require a 'django\_cron' logger. I resolved the same problem by running the django\_cron migrations:
```
python manage.py migrate #migrate database
```
| 8,168
|
62,978,500
|
I have made a Python program that uses Pygame. For some reason, I can't close the window by pressing the red cross. I tried using Command+Q but that doesn't work either. I have to quit IDLE (my Python interpreter) to close the window. Is there any other way to make the window close by pressing the red 'x' at the top right-hand corner?
My code:
```
import pygame
import sys
from pygame.locals import *
pygame.init()
screen = pygame.display.set_mode((800,800))
while 1:
pygame.display.update()
for event in pygame.event.get():
if event.type == QUIT:
pygame.quit()
sys.exit()
```
|
2020/07/19
|
[
"https://Stackoverflow.com/questions/62978500",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12987382/"
] |
A pygame window can be closed properly if you use a different Python interpreter; try PyCharm, for example, where pygame windows close as expected.
|
Try this:
```
import pygame, sys
from pygame.locals import *
pygame.init()
screen = pygame.display.set_mode((800,800))
while True:
pygame.display.update()
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
sys.exit()
```
| 8,169
|
20,905,702
|
I'm currently working with Freeswitch and its [event socket library](http://wiki.freeswitch.org/wiki/Event_Socket_Library) (through the [mod event socket](http://wiki.freeswitch.org/wiki/Mod_event_socket)). For instance:
```
from ESL import ESLconnection
cmd = 'uuid_kill %s' % active_call # active_call comes from a Django db and is unicode
con = ESLconnection(config.HOST, config.PORT, config.PWD)
if con.connected():
e = con.api(str(cmd))
else:
logging.error('Couldn\'t connect to Freeswitch Mod Event Socket')
```
As you can see, I had to explicitly cast `con.api()`'s argument with `str()`. Without that, the call ends up in the following stack trace:
```
Traceback (most recent call last):
[...]
e = con.api(cmd)
File "/usr/lib64/python2.7/site-packages/ESL.py", line 87, in api
def api(*args): return apply(_ESL.ESLconnection_api, args)
TypeError: in method 'ESLconnection_api', argument 2 of type 'char const *'
```
I don't understand this TypeError: what does it mean? `cmd` contains a string, so why does casting it with `str(cmd)` fix it?
Could it be related to Freeswitch's python API, generated through [SWIG](http://www.swig.org/) ?
|
2014/01/03
|
[
"https://Stackoverflow.com/questions/20905702",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1030960/"
] |
Short answer: `cmd` likely contains a Unicode string, which cannot be trivially converted to a `const char *`. The error message likely comes from a wrapper framework that automates writing Python bindings for C libraries, such as SWIG or ctypes. The framework knows what to do with a byte string, but punts on Unicode strings. Passing `str(cmd)` helps because it converts the Unicode string to a byte string, from which a `const char *` value expected by C code can be trivially extracted.
Long answer:
The C type `char const *`, more customarily spelled `const char *`, can be read as "read-only array of `char`", `char` being C's way to spell "byte". When a C function accepts a `const char *`, it expects a "C string", i.e. an array of `char` values terminated with a null character. Conveniently, Python strings are internally represented as C strings with some additional information such as type, reference count, and the length of the string (so the string length can be retrieved with O(1) complexity, and also so that the string may contain null characters themselves).
Unicode strings in Python 2 are represented as arrays of `Py_UNICODE`, which are either 16 or 32 bits wide, depending on the operating system and build-time flags. Such an array cannot be passed to code that expects an array of 8-bit chars — it needs to be *converted*, typically to a temporary buffer, and this buffer must be freed when no longer needed.
For example, a simple-minded (and quite unnecessary) wrapper for the C function `strlen` could look like this:
```
PyObject *strlen(PyObject *ignore, PyObject *obj)
{
const char *c_string;
size_t len;
if (!PyString_Check(obj)) {
PyErr_Format(PyExc_TypeError, "string expected, got %s", Py_TYPE(obj)->tp_name);
return NULL;
}
c_string = PyString_AsString(obj);
len = strlen(c_string);
return PyInt_FromLong((long) len);
}
```
The code simply calls `PyString_AsString` to retrieve the internal C string stored by every Python string and expected by `strlen`. For this code to also support Unicode objects (provided it even makes sense to call `strlen` on Unicode objects), it must handle them explicitly:
```
PyObject *strlen(PyObject *ignore, PyObject *obj)
{
const char *c_string;
size_t len;
PyObject *tmp = NULL;
if (PyString_Check(obj))
c_string = PyString_AsString(obj);
else if (PyUnicode_Check(obj)) {
if (!(tmp = PyUnicode_AsUTF8String(obj)))
return NULL;
c_string = PyString_AsString(tmp);
}
else {
PyErr_Format(PyExc_TypeError, "string or unicode expected, got %s",
Py_TYPE(obj)->tp_name);
return NULL;
}
len = strlen(c_string);
Py_XDECREF(tmp);
return PyInt_FromLong((long) len);
}
```
Note the additional complexity, not only in lines of boilerplate code, but in the different code paths that require different management of a temporary object that holds the byte representation of the Unicode string. Also note that the code needed to decide on an *encoding* when converting a Unicode string to a byte string. UTF-8 is guaranteed to be able to encode any Unicode string, but passing a UTF-8-encoded sequence to a function expecting a C string might not make sense for some uses. The `str` function uses the ASCII codec to encode the Unicode string, so if the Unicode string actually contained any non-ASCII characters, you would get an exception.
There have been [requests to include this functionality in SWIG](http://sourceforge.net/p/swig/feature-requests/75/), but it is unclear from the linked report if they made it in.
|
I had a similar problem, and I solved it by encoding the Unicode string before passing it on:
`cmd = ('uuid_kill %s' % active_call).encode('utf-8')`
| 8,172
|
28,431,765
|
So I am trying to open websites on new tabs inside my WebDriver. I want to do this, because opening a new WebDriver for each website takes about 3.5secs using PhantomJS, I want more speed...
I'm using a multiprocess python script, and I want to get some elements from each page, so the workflow is like this:
```
Open Browser
Loop throught my array
For element in array -> Open website in new tab -> do my business -> close it
```
But I can't find any way to achieve this.
Here's the code I'm using. It takes forever between websites; I need it to be fast... Other tools are allowed, but I don't know many tools for scraping website content that loads with JavaScript (divs created when some event is triggered on load, etc.). That's why I need Selenium... BeautifulSoup can't be used for some of my pages.
```
#!/usr/bin/env python
import multiprocessing, time, pika, json, traceback, logging, sys, os, itertools, urllib, urllib2, cStringIO, mysql.connector, shutil, hashlib, socket, urllib2, re
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from PIL import Image
from os import listdir
from os.path import isfile, join
from bs4 import BeautifulSoup
from pprint import pprint
def getPhantomData(parameters):
try:
# We create WebDriver
browser = webdriver.Firefox()
# Navigate to URL
browser.get(parameters['target_url'])
# Find all links by Selector
links = browser.find_elements_by_css_selector(parameters['selector'])
result = []
for link in links:
# Extract link attribute and append to our list
result.append(link.get_attribute(parameters['attribute']))
browser.close()
browser.quit()
return json.dumps({'data': result})
except Exception, err:
browser.close()
browser.quit()
print err
def callback(ch, method, properties, body):
parameters = json.loads(body)
message = getPhantomData(parameters)
if message['data']:
ch.basic_ack(delivery_tag=method.delivery_tag)
else:
ch.basic_reject(delivery_tag=method.delivery_tag, requeue=True)
def consume():
credentials = pika.PlainCredentials('invitado', 'invitado')
rabbit = pika.ConnectionParameters('localhost',5672,'/',credentials)
connection = pika.BlockingConnection(rabbit)
channel = connection.channel()
# Conectamos al canal
channel.queue_declare(queue='com.stuff.images', durable=True)
channel.basic_consume(callback,queue='com.stuff.images')
print ' [*] Waiting for messages. To exit press CTRL^C'
try:
channel.start_consuming()
except KeyboardInterrupt:
pass
workers = 5
pool = multiprocessing.Pool(processes=workers)
for i in xrange(0, workers):
pool.apply_async(consume)
try:
while True:
continue
except KeyboardInterrupt:
print ' [*] Exiting...'
pool.terminate()
pool.join()
```
|
2015/02/10
|
[
"https://Stackoverflow.com/questions/28431765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1381537/"
] |
* OS: Win 10,
* Python 3.8.1
+ selenium==3.141.0
```
from selenium import webdriver
import time
driver = webdriver.Firefox(executable_path=r'TO\Your\Path\geckodriver.exe')
driver.get('https://www.google.com/')
# Open a new window
driver.execute_script("window.open('');")
# Switch to the new window
driver.switch_to.window(driver.window_handles[1])
driver.get("http://stackoverflow.com")
time.sleep(3)
# Open a new window
driver.execute_script("window.open('');")
# Switch to the new window
driver.switch_to.window(driver.window_handles[2])
driver.get("https://www.reddit.com/")
time.sleep(3)
# close the active tab
driver.close()
time.sleep(3)
# Switch back to the first tab
driver.switch_to.window(driver.window_handles[0])
driver.get("https://bing.com")
time.sleep(3)
# Close the only tab, will also close the browser.
driver.close()
```
Reference: [Need Help Opening A New Tab in Selenium](https://python-forum.io/Thread-Need-Help-Opening-A-New-Tab-in-Selenium)
|
I tried for a very long time to duplicate tabs in Chrome using action\_keys and send\_keys on body. The only thing that worked for me was an answer [here](https://stackoverflow.com/a/41633373/10488716). This is what my duplicate tabs def ended up looking like; probably not the best, but it works fine for me.
```
def duplicate_tabs(number, chromewebdriver):
#Once on the page we want to open a bunch of tabs
url = chromewebdriver.current_url
for i in range(number):
print('opened tab: '+str(i))
chromewebdriver.execute_script("window.open('"+url+"', 'new_window"+str(i)+"')")
```
It basically runs some JavaScript from inside of Python, which is incredibly useful. Hope this helps somebody.
Note: I am using Ubuntu, it shouldn't make a difference but if it doesn't work for you this could be the reason.
| 8,173
|
27,767,937
|
I've been trying to figure this out all night with no luck. I'm assuming that this will be a simple question for a more experienced programmer.
I'm working on a canonical request that I can sign.
something like this:
```
canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + canonical_headers
```
However when I print(canonical\_request) I get:
```
method
canonical_uri
canonical_querystring
canonical_headers
```
But this is what im after:
```
method\ncanonical_uri\ncanonical_querystring\ncanonical_headers
```
By the way, I'm using Python 3.4. I would really appreciate the help.
|
2015/01/04
|
[
"https://Stackoverflow.com/questions/27767937",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4400330/"
] |
So you want to not have "actual" newlines, but the escape sequence for newlines in your string? Just add a second backslash to `'\n'` to escape it as well: `'\\n'`. Or prefix your strings with r to make them "raw"; in a raw string the backslash is interpreted literally: `r'\n'` (commonly used for regular expressions).
```
canonical_request = method + r'\n' + canonical_uri + r'\n' + canonical_querystring + r'\n' + canonical_headers
```
For information about string literals, see [String and byte literals](https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals) in the docs.
|
As an alternative and more elegant way, you can put your strings in a list and join them with an escaped `\n` (add a leading backslash, i.e. `\\n`):
```
>>> l=['method', 'canonical_uri', 'canonical_querystring', 'canonical_headers']
>>> print '\\n'.join(l)
method\ncanonical_uri\ncanonical_querystring\ncanonical_headers
```
>
> The backslash (\) character is used to escape characters that otherwise have a special meaning, such as newline, backslash itself, or the quote character.
>
>
>
| 8,183
|
14,206,760
|
I have been working on a Django application lately, trying to get it to work with Amazon Elastic Beanstalk.
In my `.ebextensions/python.config` file, I have set the following:
```
option_settings:
- namespace: aws:elasticbeanstalk:application:environment
option_name: ProductionBucket
value: s3-bucket-name
- namespace: aws:elasticbeanstalk:application:environment
option_name: ProductionCache
value: memcached-server.site.com:11211
```
However, whenever I look on the server, no such environment variables are set (and as such, aren't accessible when I try `os.getenv('ProductionBucket')`).
I came across [this page](https://gist.github.com/808968) which appears to attempt to document all the namespaces. I've also tried using `PARAM1` as the option name, but have had similar results.
How can I set environment variables in Amazon Elastic Beanstalk?
**EDIT**:
I have also tried adding a command prior to all other commands which would just export an environment variable:
```
commands:
01_env_vars:
command: "source scripts/env_vars"
```
... This was also unsuccessful
|
2013/01/08
|
[
"https://Stackoverflow.com/questions/14206760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/165988/"
] |
I was having the same problem.
Believe it or not, you have to commit the `.ebextensions` directory and all `*.config` files to version control before you deploy in order for them to show up as environment variables on the server.
In order to keep sensitive information out of version control, you can use a config file like this:
```
option_settings:
- option_name: API_LOGIN
value: placeholder
- option_name: TRANS_KEY
value: placeholder
- option_name: PROVIDER_ID
value: placeholder
```
Then edit the configuration in the AWS admin panel (Configuration > Software Configuration > Environment Properties) and update the values there.
You may also find [this answer](https://stackoverflow.com/a/14491294/274695) helpful.
|
I know this is an old question but for those who still have the same question like I did here is the solution from AWS documentation: <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-softwaresettings.html>
>
> To configure environment properties in the Elastic Beanstalk console
>
>
> 1. Open the [Elastic Beanstalk console](https://console.aws.amazon.com/elasticbeanstalk), and then, in the regions drop-down
> list, select your region.
> 2. In the navigation pane, choose **Environments**, and then choose your
> environment's name on the list.
> 3. In the navigation pane, choose **Configuration**.
> 4. In the **Software** configuration category, choose Edit.
> 5. Under **Environment properties**, enter key-value pairs.
> 6. Choose **Apply**.
>
>
>
| 8,184
|
48,937,024
|
I am going to write down this pseudocode in python:
```
if (i < .1):
doX()
elif (i < .3):
doY()
elif (i < .5):
doZ()
.
.
else:
doW()
```
There may be around 20 ranges, and each float number that defines the boundaries is read from a list. For the above example (shorter version), it is the list:
```
[0.1, 0.3, 0.5, 1]
```
Is there any pythonic way, or a function, which can call different functions for different associated ranges?
|
2018/02/22
|
[
"https://Stackoverflow.com/questions/48937024",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8899386/"
] |
```
from bisect import bisect_left

a = [0.1, 0.3, 0.5, 1]    # range boundaries
b = ["a", "b", "c", "d"]  # one value per range
# bisect_left returns the index of the first boundary >= 0.2, here 1 -> "b"
print b[bisect_left(a, 0.2)]
```
|
Here's an answer that you should not use:
```
doX = lambda x: x + 1
doY = lambda x: x + 10
doZ = lambda x: x + 100
ranges = [0.1, 0.3, 0.5, 1]
functions = [doX, doY, doZ]
answer = lambda x: [func(x) for (low, high), func in zip(zip(ranges[:-1], ranges[1:]), functions) if low <= x < high][0]
```
The point is that getting too fancy with it becomes unreadable. The if..elif..else is the *"one-- and preferably only one --obvious way to do it."*
| 8,194
|
63,815,087
|
I'm porting some Python 2 legacy code and I have this class:
```
class myfile(file):
"Wrapper for file object whose read member returns a string buffer"
def __init__ (self, *args):
return file.__init__ (self, *args)
def read(self, size=-1):
return create_string_buffer(file.read(self, size))
```
It's used like a File object:
```
self._file = myfile(name, mode, buffering)
self._file.seek(self.si*self.blocksize)
```
I'm trying to implement it in Python 3 like so:
```
class myfile(io.FileIO):
"Wrapper for file object whose read member returns a string buffer"
def __init__(self, name, mode, *args, **kwargs):
super(myfile, self).__init__(name, mode, closefd=True, *args, **kwargs)
def read(self, size=-1):
        return create_string_buffer(super().read(size))
```
The problem is that the constructor for FileIO doesn't take the `buffering` argument and Python throws a `TypeError: fileio() takes at most 3 arguments (4 given)` error.
The [Python 3 open function](https://docs.python.org/3/library/functions.html#open) is what I need. Can I inherit from that? I've looked at the [PyFile\_FromFd class](https://docs.python.org/3.8/c-api/file.html), but it needs an open file descriptor and I'm concerned that the behaviour is not going to be the same.
Thank you!!!
|
2020/09/09
|
[
"https://Stackoverflow.com/questions/63815087",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6271889/"
] |
If another way is fine, you can try the below; it is a little dirty though (you can try optimizing it):
```
cols = ['name','color','amount']
u = df[df.columns.difference(cols)].join(df[cols].agg(dict,1).rename('d'))
v = (u.groupby(['cat1','cat2','cat3'])['d'].agg(list).reset_index("cat3"))
v = v.groupby(v.index).apply(lambda x: dict(zip(x['cat3'],x['d'])))
v.index = pd.MultiIndex.from_tuples(v.index,names=['cat1','cat2'])
d = v.unstack(0).to_dict()
```
---
```
print(d)
{'A': {'BB': {'CC': [{'amount': 132, 'color': 'red', 'name': 'P1'},
{'amount': 51, 'color': 'blue', 'name': 'P2'}]},
'BC': {'CD': [{'amount': 12, 'color': 'green', 'name': 'P3'}]}},
'B': {'BB': {'CD': [{'amount': 421, 'color': 'green', 'name': 'P1'},
{'amount': 55, 'color': 'yellow', 'name': 'P4'}]},
'BC': nan},
'C': {'BB': {'CC': [{'amount': 11, 'color': 'red', 'name': 'P1'}]},
'BC': {'CD': [{'amount': 123, 'color': 'blue', 'name': 'P3'}],
'CE': [{'amount': 312, 'color': 'blue', 'name': 'P6'}]}}}
```
|
We can `groupby` on `cat1`, `cat2` and `cat3` and recursively build the dictionary based on the grouped categories:
```
def set_val(d, k, v):
if len(k) == 1:
d[k[0]] = v
else:
d[k[0]] = set_val(d.get(k[0], {}), k[1:], v)
return d
dct = {}
for k, g in df.groupby(['cat1', 'cat2', 'cat3']):
set_val(dct, k, {'products': g[['name', 'color', 'amount']].to_dict('r')})
```
---
```
print(dct)
{'A': {'BB': {'CC': {'products': [{'amount': 132, 'color': 'red', 'name': 'P1'},
{'amount': 51, 'color': 'blue', 'name': 'P2'}]}},
'BC': {'CD': {'products': [{'amount': 12, 'color': 'green', 'name': 'P3'}]}}},
'B': {'BB': {'CD': {'products': [{'amount': 421, 'color': 'green', 'name': 'P1'},
{'amount': 55, 'color': 'yellow', 'name': 'P4'}]}}},
'C': {'BB': {'CC': {'products': [{'amount': 11, 'color': 'red', 'name': 'P1'}]}},
'BC': {'CD': {'products': [{'amount': 123, 'color': 'blue', 'name': 'P3'}]},
'CE': {'products': [{'amount': 312, 'color': 'blue', 'name': 'P6'}]}}}}
```
| 8,197
|
21,962,250
|
I have a string that holds a binary number as a string:
```
string = '0b100111'
```
I want that value to be a numeric value, not a string type (pseudo-code):
```
bin(string) = 0b100111
```
Any pythoners know an easy way to do this?
It is all part of this code for a Codecademy exercise (after the answer was implemented):
```
def flip_bit(number,n):
if type(number)==type('s'):
number = int(number,2)
mask=(0b1<<n-1)
print bin(mask)
print mask
desired = bin(number^mask)
return desired
flip_bit('0b111', 2)
```
|
2014/02/22
|
[
"https://Stackoverflow.com/questions/21962250",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2918785/"
] |
What about calling the `int` function with base `2`?
```
>>> s = '0b100111'
>>> b = int(s, 2)
>>> print b
39
```
|
you can make it binary by putting a b before the quotes:
```
>>> s = b'hello'
>>> s.decode()
'hello'
```
| 8,200
|
9,787,741
|
What is the equivalent of the following in python?
```
for (i=0; i<n; i++)
for (j=i+1; j<n; j++)
//do stuff with A[i], A[j]
```
Or in some sense, the following. It should also remove the element from A at the completion of each round of the loop.
```
for a in A:
for a' in A/{a}: #i.e. rest of the elements of A
#do something with a,a'
#remove a from A
```
Is there a pythonic way of doing this without using enumerate()?
Edits:
Sorry for the bad description.
1. In the first example, I mean to use i & j only as indices. Their values do not matter. It's just a rough C++ equivalent of the latter.
2. The outer loop is executed n times. The inner loop is executed (n-1), (n-2)...0 times for each iteration of the outer loop.
Maybe this might help (pseudocode):
```
function next_iteration(list):
head = first element
tail = remaining elements #list
each element in tail interacts with head one by one
next_iteration(tail)
```
PS: All code samples above are pseudocodes. I'm trying to express something that is still a bit vague in my mind.
|
2012/03/20
|
[
"https://Stackoverflow.com/questions/9787741",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/408412/"
] |
Since your two questions are different, here is a solution for your second problem:
```
for i in xrange(len(A)):
for j in xrange(len(A)):
if i != j:
do_stuff(A[i], A[j])
```
or **using** `itertools` (I think using the included **batteries** is very pythonic!):
```
import itertools
for a, b in itertools.permutations(A, 2):
do_stuff(a, b)
```
This applies do\_stuff to all combinations of 2 different elements from A. If you want to store the results, just use:
```
[do_stuff(a, b) for a, b in itertools.permutations(A, 2)]
```
|
In the first for-loop, **enumerate()** walks through the array and makes the index and value of each element available to the second for-loop. In the second loop, **range()** makes j run from i+1 up to len(a) - 1. At this point you'd have exactly what you need, which is `i` & `j`, to do your operation.
```
>>> a = [1,2,3,4]
>>> array_len = len(a)
>>> for i,v in enumerate(a):
... for j in range(i+1, array_len):
... print a[i], a[j]
...
1 2
1 3
1 4
2 3
2 4
3 4
>>>
```
| 8,202
|
9,403,415
|
I'm using the great [quantities](http://pypi.python.org/pypi/quantities) package for Python. I would like to know how I can get at just the numerical value of the quantity, without the unit.
I.e., if I have
```
E = 5.3*quantities.joule
```
I would like to get at just the 5.3. I know I can simply divide by the "undesired" unit, but hoping there was a better way to do this.
|
2012/02/22
|
[
"https://Stackoverflow.com/questions/9403415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/633318/"
] |
`E.item()` seems to be what you want, if you want a Python float. `E.magnitude`, offered by tzaman, is a 0-dimensional NumPy array with the value, if you'd prefer that.
The documentation for `quantities` doesn't seem to have a very good API reference.
|
I believe `E.magnitude` gets you what you want.
| 8,212
|
7,758,913
|
How can I implement graph colouring in python using adjacency matrix? Is it possible? I implemented it using list. But it has some problems. I want to implement it using matrix. Can anybody give me the answer or suggestions to this?
|
2011/10/13
|
[
"https://Stackoverflow.com/questions/7758913",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/992874/"
] |
Is it possible? Yes, of course. But are your problems with making Graphs, or coding algorithms that deal with them?
Separating the algorithm from the data type might make it easier for you. Here are a couple suggestions:
* create (or use) an abstract data type Graph
* code the coloring algorithm against the Graph interface
* then, vary the Graph implementation between list and matrix forms (see the sketch below)
If you just want to use Graphs, and don't need to implement them yourself, a quick Google search turned up this [python graph](http://code.google.com/p/python-graph/) library.
|
Implementing with an adjacency matrix is somewhat easier than using lists, as lists take more time and space. igraph has a quick method `neighbors` which can be used. However, with the adjacency matrix alone, we can come up with our own graph coloring version which may not use the minimum chromatic number. A quick strategy may be as follows:
Initialize: put one distinct color on each node's row (where a 1 appears).
Start: With the highest degree node (HDN) row as a reference, compare each row (meaning each node) with the HDN and see if it is also its neighbor by detecting a 1. If yes, then change that node's color. Proceed like this to fine-tune. O(N^2) approach! Hope this helps.
| 8,214
|
34,567,484
|
I have a list that has several days in it. Each day has several timestamps. What I want to do is to make a new list that only takes the start time and the end time for each date.
I also want to delete the character between the date and the time on each one; the char is always the same type of letter.
The number of timestamps can vary for each date.
Since I'm new to Python, simple, easy-to-understand code would be preferred. I've been using a lot of regex, so please use that if there is a way with this one.
The list has been sorted with `list.sort()`, so it's in the correct order.
The code used to extract the information was the following:
```
import re

file1 = open("test.txt", "r")
list1 = []  # collect all timestamp strings
for f in file1:
    list1 += re.findall('20\d\d-\d\d-\d\dA\d\d\:\d\d', f)
listX = len(list1)
list2 = list1[0:listX - 2]
list2.sort()
```
here is a list of how it looks:
```
2015-12-28A09:30
2015-12-28A09:30
2015-12-28A09:35
2015-12-28A09:35
2015-12-28A12:00
2015-12-28A12:00
2015-12-28A12:15
2015-12-28A12:15
2015-12-28A14:30
2015-12-28A14:30
2015-12-28A15:15
2015-12-28A15:15
2015-12-28A16:45
2015-12-28A16:45
2015-12-28A17:00
2015-12-28A17:00
2015-12-28A18:15
2015-12-28A18:15
2015-12-29A08:30
2015-12-29A08:30
2015-12-29A08:35
2015-12-29A08:35
2015-12-29A10:45
2015-12-29A10:45
2015-12-29A11:00
2015-12-29A11:00
2015-12-29A13:15
2015-12-29A13:15
2015-12-29A14:00
2015-12-29A14:00
2015-12-29A15:30
2015-12-29A15:30
2015-12-29A15:45
2015-12-29A15:45
2015-12-29A17:15
2015-12-29A17:15
2015-12-30A08:30
2015-12-30A08:30
2015-12-30A08:35
2015-12-30A08:35
2015-12-30A10:45
2015-12-30A10:45
2015-12-30A11:00
2015-12-30A11:00
2015-12-30A13:00
2015-12-30A13:00
2015-12-30A13:45
2015-12-30A13:45
2015-12-30A15:15
2015-12-30A15:15
2015-12-30A15:30
2015-12-30A15:30
2015-12-30A17:15
2015-12-30A17:15
```
And this is how I want it to look like:
```
2015-12-28 09:30
2015-12-28 18:15
2015-12-29 08:30
2015-12-29 17:15
2015-12-30 08:30
2015-12-30 17:15
```
|
2016/01/02
|
[
"https://Stackoverflow.com/questions/34567484",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5738256/"
] |
First of all, you should convert all your strings into proper dates, Python can work with. That way, you have a lot more control on it, also to change the formatting later. So let’s parse your dates using [`datetime.strptime`](https://docs.python.org/3/library/datetime.html#datetime.datetime.strptime) in `list2`:
```
from datetime import datetime
dates = [datetime.strptime(item, '%Y-%m-%dA%H:%M') for item in list2]
```
This creates a new list `dates` that contains all your dates from `list2` but as parsed `datetime` object.
Now, since you want to get the first and the last date of each day, we somehow have to group your dates by the date component. There are various ways to do that. I’ll be using [`itertools.groupby`](https://docs.python.org/3/library/itertools.html#itertools.groupby) for it, with a key function that just looks at the date component of each entry:
```
from itertools import groupby
for day, times in groupby(dates, lambda x: x.date()):
first, *mid, last = times
print(first)
print(last)
```
If we run this, we already get your output (without date formatting):
```
2015-12-28 09:30:00
2015-12-28 18:15:00
2015-12-29 08:30:00
2015-12-29 17:15:00
2015-12-30 08:30:00
2015-12-30 17:15:00
```
Of course, you can also collect that first and last date in a list first to process the dates later:
```
filteredDates = []
for day, times in groupby(dates, lambda x: x.date()):
first, *mid, last = times
filteredDates.append(first)
filteredDates.append(last)
```
And you can also output your dates with a different format using [`datetime.strftime`](https://docs.python.org/3/library/datetime.html#datetime.datetime.strftime):
```
for date in filteredDates:
print(date.strftime('%Y-%m-%d %H:%M'))
```
That would give us the following output:
```
2015-12-28 09:30
2015-12-28 18:15
2015-12-29 08:30
2015-12-29 17:15
2015-12-30 08:30
2015-12-30 17:15
```
---
If you don’t want to go the route through parsing those dates, of course you could also do this simply by working on the strings. Since they are nicely formatted (i.e. they can be easily compared), you can do that as well. It would look like this then:
```
for day, times in groupby(list2, lambda x: x[:10]):
first, *mid, last = times
print(first)
print(last)
```
Producing the following output:
```
2015-12-28A09:30
2015-12-28A18:15
2015-12-29A08:30
2015-12-29A17:15
2015-12-30A08:30
2015-12-30A17:15
```
|
Because your data is ordered, you just need to pull the first and last value from each group. You can replace the single letter with a space (e.g. with `str.replace`), then split each date string, comparing just the dates:
```
def grp(l):
    it = iter(l)
    prev = start = next(it).replace("A", " ")
    for dte in it:
        dte = dte.replace("A", " ")
        # new date: yield the previous date's start and end times
        if dte.split(None, 1)[0] != prev.split(None, 1)[0]:
            yield start
            yield prev
            start = dte
        prev = dte
    yield start
    yield prev
l=["2015-12-28A09:30", "2015-12-28A09:30", .....................
l[:] = grp(l)
```
This could also certainly be done as you process the file, without sorting, by using a dict to group:
```
from re import findall
from collections import defaultdict

with open("dates.txt") as f:
    od = defaultdict(lambda: {"min": "null", "max": ""})
    for line in f:
        for dte in findall('20\d\d-\d\d-\d\dA\d\d\:\d\d', line):
            dte, tme = dte.split("A")
            _dte = "{} {}".format(dte, tme)
            if od[dte]["min"] > _dte:
                od[dte]["min"] = _dte
            if od[dte]["max"] < _dte:
                od[dte]["max"] = _dte

print(list(od.values()))
```
Which will give you the start and end time for each date.
```
[{'min': '2016-01-03 23:59', 'max': '2016-01-03 23:59'},
{'min': '2015-12-28 00:00', 'max': '2015-12-28 18:15'},
{'min': '2015-12-30 08:30', 'max': '2015-12-30 17:15'},
{'min': '2015-12-29 08:30', 'max': '2015-12-29 17:15'},
{'min': '2015-12-15 08:41', 'max': '2015-12-15 08:41'}]
```
The start for `2015-12-28` is also `00:00` not `9:30`.
If your dates are actually, as posted, one per line, you don't need a regex either:
```
from collections import defaultdict

with open("dates.txt") as f:
    od = defaultdict(lambda: {"min": "null", "max": ""})
    for line in f:
        dte, tme = line.rstrip().split("A")
        _dte = "{} {}".format(dte, tme)
        if od[dte]["min"] > _dte:
            od[dte]["min"] = _dte
        if od[dte]["max"] < _dte:
            od[dte]["max"] = _dte

print(list(od.values()))
```
Which would give you the same output.
| 8,215
|