| qid (int64, 46k–74.7M) | question (string, 54–37.8k chars) | date (string, 10 chars) | metadata (list, 3 items) | response_j (string, 17–26k chars) | response_k (string, 26–26k chars) |
|---|---|---|---|---|---|
56,145,426
|
I ran `pip3 install detect-secrets`, but running `detect-secrets` then gives "Command not found".
I also tried variations: the `--user` switch, `sudo`, even `pip` rather than `pip3`, and an underscore in the name.
I further added all directories shown by `python3.6 -m site` to my `PATH` (Ubuntu 18.04).
Retrying the installation command shows that the package was already successfully installed.
`find . -name detect-secrets` (also `detect_secrets`) finds these: `./.local/bin/detect-secrets` and `./home/user/.local/lib/python3.6/site-packages/detect_secrets`.
None of this gave me access to the executable. How do I get it?
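For what it's worth, the usual cause of this symptom (an assumption here, not confirmed in the question) is that `~/.local/bin` is not on `PATH`; `python3.6 -m site` lists library directories, not the scripts directory. A sketch:

```
# pip3 install --user places console scripts in ~/.local/bin,
# which is not on PATH by default on Ubuntu 18.04.
export PATH="$HOME/.local/bin:$PATH"
# afterwards, `detect-secrets` should resolve from any directory
```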
|
2019/05/15
|
[
"https://Stackoverflow.com/questions/56145426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39242/"
] |
I already suggested it in the comments: You can use a map to count all the values.
Here is an **intentionally verbose** example (to make clear what happens):
```
String[] names = {"a","b","a","a","c","b"};
Integer[] numbers = {5,2,3,1,2,1};
Map<String, Integer> totals = new HashMap<String, Integer>();
for (int i = 0; i < names.length; i++) {
if (totals.containsKey(names[i])) {
totals.put(names[i], totals.get(names[i]) + numbers[i]);
} else {
totals.put(names[i], numbers[i]);
}
}
System.out.println(totals);
```
So if the name is already in the map, just increase its total by the new number. If it's not, add a new map entry with the number.
Beware that for this to work, your two arrays must be of equal length!
This will print:
```
{a=9, b=3, c=2}
```
|
Using a Map will satisfy your requirement. It could be done as below (note that without the generic type `Map<String, Integer>`, the expression `expectedOut.get(names[i]) + numbers[i]` would not compile):
```
String[] names = {"a", "b", "a", "a", "c", "b"};
Integer[] numbers = {5, 2, 3, 1, 2, 1};
Map<String, Integer> expectedOut = new HashMap<>();
for (int i = 0; i < names.length; i++) {
    if (expectedOut.containsKey(names[i]))
        expectedOut.put(names[i], expectedOut.get(names[i]) + numbers[i]);
    else
        expectedOut.put(names[i], numbers[i]);
}
for (Map.Entry<String, Integer> out : expectedOut.entrySet())
    System.out.println(out.getKey() + " = " + out.getValue());
```
|
56,145,426
|
I ran `pip3 install detect-secrets`, but running `detect-secrets` then gives "Command not found".
I also tried variations: the `--user` switch, `sudo`, even `pip` rather than `pip3`, and an underscore in the name.
I further added all directories shown by `python3.6 -m site` to my `PATH` (Ubuntu 18.04).
Retrying the installation command shows that the package was already successfully installed.
`find . -name detect-secrets` (also `detect_secrets`) finds these: `./.local/bin/detect-secrets` and `./home/user/.local/lib/python3.6/site-packages/detect_secrets`.
None of this gave me access to the executable. How do I get it?
|
2019/05/15
|
[
"https://Stackoverflow.com/questions/56145426",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/39242/"
] |
Here is a working example:
```
import java.util.HashMap;
import java.util.Map;
public class Main {
public static void main(String[] args) {
String[] names = {"a","b","a","a","c","b"};
Integer[] numbers = {5,2,3,1,2,1};
Map<String, Integer> occurrences = new HashMap<>();
for(int i = 0; i < names.length; ++i) {
String key = names[i];
Integer previousNumber = occurrences.getOrDefault(key, 0); //Previously stored number
Integer correspondingNumber = numbers[i]; //Delta for specified name
occurrences.put(key, previousNumber + correspondingNumber); //Storing new value
}
//Print result
occurrences.forEach((key, value) -> System.out.println("Name: " + key + " Amount: " + value));
}
}
```
A Map allows you to assign a value to a unique key.
Generally, put/get operations on hash-based maps are said to be of O(1) time complexity, which makes them a perfect fit for your problem.
The most basic implementation is HashMap, but if you want to keep the insertion order of the names when iterating, just use LinkedHashMap.
On the other hand, if you want your data sorted in some manner, then you likely want TreeMap.
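A small sketch of that ordering difference, counting occurrences of some sample names (the data is made up):

```
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

String[] names = {"b", "a", "c", "a"};
Map<String, Integer> linked = new LinkedHashMap<>(); // keeps insertion order
Map<String, Integer> tree = new TreeMap<>();         // keeps keys sorted
for (String name : names) {
    linked.merge(name, 1, Integer::sum);
    tree.merge(name, 1, Integer::sum);
}
System.out.println(linked); // {b=1, a=2, c=1}
System.out.println(tree);   // {a=2, b=1, c=1}
```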
EDIT:
As Sharon Ben Asher mentioned in the comments, you can use the merge() method to shorten the code:
```
import java.util.HashMap;
import java.util.Map;
public class Main {
public static void main(String[] args) {
String[] names = {"a","b","a","a","c","b"};
Integer[] numbers = {5,2,3,1,2,1};
Map<String, Integer> occurrences = new HashMap<>();
for(int i = 0; i < names.length; ++i)
occurrences.merge(names[i], numbers[i], Integer::sum);
//Print result
occurrences.forEach((key, value) -> System.out.println("Name: " + key + " Amount: " + value));
}
}
```
But I broke it down a little in the first answer to give you a better explanation of how maps work. Basically, methods like get() (getOrDefault() is a variant of it that returns a default value when no mapping is found for the given key) and put() allow you to create a new mapping or overwrite an existing one for a given key.
|
Using a Map will satisfy your requirement. It could be done as below (note that without the generic type `Map<String, Integer>`, the expression `expectedOut.get(names[i]) + numbers[i]` would not compile):
```
String[] names = {"a", "b", "a", "a", "c", "b"};
Integer[] numbers = {5, 2, 3, 1, 2, 1};
Map<String, Integer> expectedOut = new HashMap<>();
for (int i = 0; i < names.length; i++) {
    if (expectedOut.containsKey(names[i]))
        expectedOut.put(names[i], expectedOut.get(names[i]) + numbers[i]);
    else
        expectedOut.put(names[i], numbers[i]);
}
for (Map.Entry<String, Integer> out : expectedOut.entrySet())
    System.out.println(out.getKey() + " = " + out.getValue());
```
|
32,342,761
|
In my code pasted below (which is Python 3 code) I expected the for loop to change the original objects (i.e. I expected NSTEPx to have been changed by the for loop). Since lists and arrays are mutable, I should have been able to edit the object by referring to it via the variable "data". However, after this code was run and I called NSTEPx, it was not changed. Can someone explain why this is? I come from a C++ background, and the idea of mutable and immutable objects is something I am only recently understanding the nuances of, or so I thought.
Here is the code:
```
NSTEPx = np.array(NSTEPx)
TIMEx = np.array(TIMEx)
TEMPx = np.array(TEMPx)
PRESSx = np.array(PRESSx)
Etotx = np.array(Etotx)
EKtotx = np.array(EKtotx)
EPtotx = np.array(EPtotx)
VOLUMEx = np.array(VOLUMEx)
alldata = [NSTEPx,TIMEx,TEMPx, PRESSx, Etotx, EKtotx, EPtotx]
for data in alldata:
temp = data[1001:-1]
data = np.insert(data,0,temp)
data = np.delete(data,np.s_[1001:-1])
```
|
2015/09/02
|
[
"https://Stackoverflow.com/questions/32342761",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/5286344/"
] |
In your loop, `data` refers to an array object. That object is mutable, but the variable `data` can also be rebound to refer to something else, and rebinding won't change what's in `alldata` (references to the objects) or the variables you used to construct `alldata`. Hence all your loop changes is a local variable, rebinding it to a newly created array; every other reference still points to the old array.
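A minimal sketch of the difference (toy data, not the question's arrays):

```python
import numpy as np

a = np.array([1, 2, 3])
alldata = [a]

for data in alldata:
    data = data * 10      # rebinds the local name 'data' only; 'a' is untouched
print(a.tolist())         # [1, 2, 3]

for data in alldata:
    data *= 10            # in-place operation: mutates the array 'a' refers to
print(a.tolist())         # [10, 20, 30]
```

In the question's loop, `np.insert` and `np.delete` likewise return new arrays, so assigning their result to `data` never touches `NSTEPx` and friends.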
|
Python has **no** assignment in the C++ sense! `data = value` is strictly a *binding* operation, not an assignment. This is really different from, e.g., C++.
A Python variable is like a label, or a yellow sticky note: you can put it on *something*, or move it to something else; it **never** changes the *thing* (object) it is on.
The `=` operator moves the label; it "binds" it. Although we usually say *assign*, it is really not the assignment of C (where it essentially writes to a memory address).
To change a value in Python, you need a method: `aLabel.do_update()` will (typically) change *self*, the object itself.
Note that `aList[...] = ...` is a method call!
So, to change your data: mutate it in place. Do not put another label on it, nor move the existing label onto other data!
Hope this answers your question.
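The sticky-note picture as a sketch:

```python
nums = [1, 2, 3]
label = nums          # a second sticky note on the same list object
label = [9, 9, 9]     # = moves the note to a new list; nums is unchanged
print(nums)           # [1, 2, 3]

nums[0] = 42          # aList[...] = ... calls a method (__setitem__): in-place change
print(nums)           # [42, 2, 3]
```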
|
1,381,739
|
First of all, thank you for taking the time to read this. I am new to developing applications for the Mac and I am having some problems. My application works fine, and that is not the focus of my question. Rather, I have a python program which essentially does this:
```
for i in values:
os.system(java program_and_options[i])
```
However, every time my program executes the java program, a java window is created in my dock (with an annoying animation) and most importantly steals the focus of my mouse and keyboard. Then it goes away a second later, to be replaced by another Java instance. This means that my batch program cannot be used while I am interacting with my Mac, because I get a hiccup every second or more often and cannot get anything done. My problem is that the act of displaying something in the dock takes my focus, and I would like it not to. Is there a setting on OS X to never display something in the dock (such as Java or python)?
Is there a Mac setting or term that I should use to properly describe this problem I am having? I completely lack the vocabulary to describe this problem and I hope I make sense. I appreciate any help.
I am running Mac OS X, Version 10.5.7 with a 1.66 GHz Intel Core Due, 2 GB memory, Macintosh HD. I am running Python 2.5.1, java version "1.5.0\_16" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0\_16-b06-284) Java HotSpot(TM) Client VM (build 1.5.0\_16-133, mixed mode, sharing).
Thanks again,
-Brian J. Stinar-
|
2009/09/04
|
[
"https://Stackoverflow.com/questions/1381739",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Does running Java with headless mode = true fix it?
<http://zzamboni.org/brt/2007/12/07/disable-dock-icon-for-java-programs-in-mac-osx-howto/>
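In the questioner's Python loop that could look like this (the jar name and option are placeholders, not from the question):

```python
import subprocess

# -Djava.awt.headless=true must come before the -jar/class argument
cmd = ["java", "-Djava.awt.headless=true", "-jar", "program.jar", "some-option"]
print(" ".join(cmd))
# subprocess.call(cmd)  # uncomment to actually launch the batch job
```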
|
As far as I am aware there is no way to disable the annoying double Java bounce without making your Java application a first class citizen on Mac OS X (much like NetBeans, or Eclipse). As for making certain programs not show in the dock, there are .plist modifications that can be made so that the program does not show up in the dock. See <http://www.macosxhints.com/article.php?story=20010701191518268>
|
1,381,739
|
First of all, thank you for taking the time to read this. I am new to developing applications for the Mac and I am having some problems. My application works fine, and that is not the focus of my question. Rather, I have a python program which essentially does this:
```
for i in values:
os.system(java program_and_options[i])
```
However, every time my program executes the java program, a java window is created in my dock (with an annoying animation) and most importantly steals the focus of my mouse and keyboard. Then it goes away a second later, to be replaced by another Java instance. This means that my batch program cannot be used while I am interacting with my Mac, because I get a hiccup every second or more often and cannot get anything done. My problem is that the act of displaying something in the dock takes my focus, and I would like it not to. Is there a setting on OS X to never display something in the dock (such as Java or python)?
Is there a Mac setting or term that I should use to properly describe this problem I am having? I completely lack the vocabulary to describe this problem and I hope I make sense. I appreciate any help.
I am running Mac OS X, Version 10.5.7 with a 1.66 GHz Intel Core Due, 2 GB memory, Macintosh HD. I am running Python 2.5.1, java version "1.5.0\_16" Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0\_16-b06-284) Java HotSpot(TM) Client VM (build 1.5.0\_16-133, mixed mode, sharing).
Thanks again,
-Brian J. Stinar-
|
2009/09/04
|
[
"https://Stackoverflow.com/questions/1381739",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |
Does running Java with headless mode = true fix it?
<http://zzamboni.org/brt/2007/12/07/disable-dock-icon-for-java-programs-in-mac-osx-howto/>
|
It's certainly possible to write a Java application which doesn't display in the Dock... in fact, it's the default. If your application *is* showing up, it must be doing something which triggers window server access -- your best bet is to try and figure out what that is.
|
47,314,905
|
I imported the class under an alias, and imported it again without one, and both seem to be working fine and give the same class type.
```
>>> from collections import Counter as c
>>> c
<class 'collections.Counter'>
>>> from collections import Counter
>>> Counter
<class 'collections.Counter'>
```
How does that work in Python? Do both names point to the same object?
Also, why didn't the earlier imported name get overwritten or removed?
*I'm not sure about the terminology either.*
|
2017/11/15
|
[
"https://Stackoverflow.com/questions/47314905",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3950422/"
] |
Using python 2.7.13:
```
>>> from collections import Counter as c
>>> c
<class 'collections.Counter'>
>>> from collections import Counter
>>> Counter
<class 'collections.Counter'>
>>> id(c), id(Counter)
(140244739511392, 140244739511392)
>>> id(c) == id(Counter)
True
```
Yes, `c` and `Counter` are the same. Two variables (names) that reference the same object.
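Equivalently, identity can be tested with `is`:

```python
from collections import Counter as c
from collections import Counter

# Both names were bound to the same class object by the two imports
print(c is Counter)  # True
```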
|
As I remember, everything you define in Python is an object belonging to a class. And yes, if one variable has been assigned a value and you create another variable with the same value, Python won't necessarily create a new object; in CPython, small integers (-5 through 256) are cached, so the second variable reuses the first value's object.
For example:
```
>>> a=10
>>> id(a)
2001255152
>>> b=20
>>> id(b)
2001255472
>>> c=10
>>> id(c)
2001255152
>>>
```
I may not have explained it very well, but I hope my example does.
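A caveat worth adding: this sharing is a CPython implementation detail and is only guaranteed for small integers (the cache covers -5 through 256); larger equal values generally get distinct objects:

```python
a = 10
c = 10
print(a is c)    # True: small ints are cached in CPython

x = int("1000")
y = int("1000")
print(x is y)    # False in CPython: 1000 is outside the small-int cache
```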
|
47,314,905
|
I imported the class under an alias, and imported it again without one, and both seem to be working fine and give the same class type.
```
>>> from collections import Counter as c
>>> c
<class 'collections.Counter'>
>>> from collections import Counter
>>> Counter
<class 'collections.Counter'>
```
How does that work in Python? Do both names point to the same object?
Also, why didn't the earlier imported name get overwritten or removed?
*I'm not sure about the terminology either.*
|
2017/11/15
|
[
"https://Stackoverflow.com/questions/47314905",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3950422/"
] |
Using python 2.7.13:
```
>>> from collections import Counter as c
>>> c
<class 'collections.Counter'>
>>> from collections import Counter
>>> Counter
<class 'collections.Counter'>
>>> id(c), id(Counter)
(140244739511392, 140244739511392)
>>> id(c) == id(Counter)
True
```
Yes, `c` and `Counter` are the same. Two variables (names) that reference the same object.
|
If you take a look at the disassembled code, you can see that it loads the same constant both times (bytecode offsets 2 and 14).
```
>>> import dis
>>> codeObj = compile("from collections import Counter as c; from collections import Counter", "foo", "exec")
>>> dis.dis(codeObj)
1 0 LOAD_CONST 0 (0)
2 LOAD_CONST 1 (('Counter',))
4 IMPORT_NAME 0 (collections)
6 IMPORT_FROM 1 (Counter)
8 STORE_NAME 2 (c)
10 POP_TOP
12 LOAD_CONST 0 (0)
14 LOAD_CONST 1 (('Counter',))
16 IMPORT_NAME 0 (collections)
18 IMPORT_FROM 1 (Counter)
20 STORE_NAME 1 (Counter)
22 POP_TOP
24 LOAD_CONST 2 (None)
26 RETURN_VALUE
```
And as others have mentioned, you can use `id(c) == id(Counter)` or `c is Counter` to test if they have the same reference.
|
24,584,441
|
I'm trying to execute an operation on each file found by `find` that has a specific file extension (wma). For example, in Python, I would simply write the following script:
```
for file in os.listdir('.'):
if file.endswith('wma'):
name = file[:-4]
command = "ffmpeg -i '{0}.wma' '{0}.mp3'".format(name)
os.system(command)
```
I know I need to execute something similar to
```
find -type f -name "*.wma" \
exec ffmpeg -i {}.wma {}.mp3;
```
But obviously this isn't working or else I wouldn't be asking this question =]
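For the record, one common pattern (a sketch) is to hand each match to a small shell so `${1%.wma}` can strip the extension, since `find` cannot rewrite `{}` itself:

```
# ${var%suffix} removes a trailing suffix; demonstrated on a sample path:
f="./music/song.wma"
echo "${f%.wma}.mp3"   # prints ./music/song.mp3

# The full command would be (ffmpeg invocation as in the question):
# find . -type f -name "*.wma" -exec sh -c 'ffmpeg -i "$1" "${1%.wma}.mp3"' _ {} \;
```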
|
2014/07/05
|
[
"https://Stackoverflow.com/questions/24584441",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2014591/"
] |
Try using the `setMainView` method:
```
class IndexController extends ControllerBase
{
public function onConstruct(){
}
public function indexAction()
{
return $this->view->setMainView("login/login");
}
}
```
The setMainView method is used to set the default view; just pass the view name as the parameter.
<http://docs.phalconphp.com/en/latest/api/Phalcon_Mvc_View.html>
|
Remove the `return` keyword. I believe it is fetching the view you want and then returning it into the base template.
|
24,584,441
|
I'm trying to execute an operation on each file found by `find` that has a specific file extension (wma). For example, in Python, I would simply write the following script:
```
for file in os.listdir('.'):
if file.endswith('wma'):
name = file[:-4]
command = "ffmpeg -i '{0}.wma' '{0}.mp3'".format(name)
os.system(command)
```
I know I need to execute something similar to
```
find -type f -name "*.wma" \
exec ffmpeg -i {}.wma {}.mp3;
```
But obviously this isn't working or else I wouldn't be asking this question =]
|
2014/07/05
|
[
"https://Stackoverflow.com/questions/24584441",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2014591/"
] |
Try using the `setMainView` method:
```
class IndexController extends ControllerBase
{
public function onConstruct(){
}
public function indexAction()
{
return $this->view->setMainView("login/login");
}
}
```
The setMainView method is used to set the default view; just pass the view name as the parameter.
<http://docs.phalconphp.com/en/latest/api/Phalcon_Mvc_View.html>
|
There are two ways to pick a view in Phalcon:
```
$this->view->pick(array('login/login')); // with layout
$this->view->pick('login/login'); // without layout
```
|
24,584,441
|
I'm trying to execute an operation on each file found by `find` that has a specific file extension (wma). For example, in Python, I would simply write the following script:
```
for file in os.listdir('.'):
if file.endswith('wma'):
name = file[:-4]
command = "ffmpeg -i '{0}.wma' '{0}.mp3'".format(name)
os.system(command)
```
I know I need to execute something similar to
```
find -type f -name "*.wma" \
exec ffmpeg -i {}.wma {}.mp3;
```
But obviously this isn't working or else I wouldn't be asking this question =]
|
2014/07/05
|
[
"https://Stackoverflow.com/questions/24584441",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2014591/"
] |
Try using the `setMainView` method:
```
class IndexController extends ControllerBase
{
public function onConstruct(){
}
public function indexAction()
{
return $this->view->setMainView("login/login");
}
}
```
The setMainView method is used to set the default view; just pass the view name as the parameter.
<http://docs.phalconphp.com/en/latest/api/Phalcon_Mvc_View.html>
|
You can render only the action view by using `setRenderLevel`:
```
public function indexAction()
{
$this->view->setRenderLevel(View::LEVEL_ACTION_VIEW);
return $this->view->pick(array("login/login"));
}
```
|
22,949,270
|
Is there a way to switch this check off (and back on later) at runtime?
The motivation is that I need to use third-party libraries which do not care about mixing tabs and spaces, so running my code with the [`-t` switch](https://docs.python.org/2/using/cmdline.html#cmdoption-t "switch") issues warnings.
(I hope an analogous method can be used for the `-b` switch.)
**edit:** I forgot to note that the library already mixes tabs and spaces in one file, which is why I see the warnings.
|
2014/04/08
|
[
"https://Stackoverflow.com/questions/22949270",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/542196/"
] |
Taking a straight percentage of views doesn't give an accurate representation of the item's popularity, either. Although 9 likes out of 18 is "stronger" than 9 likes out of 500, the fact that one video got 500 views and the other got only 18 is a much stronger indication of the video's popularity.
A video that gets a lot of views usually means that it's very popular across a wide range of viewers. That it only gets a small percentage of likes or dislikes is usually a secondary consideration. A video that gets a small number of views and a large number of likes is usually an indication of a video that's very narrowly targeted.
If you want to incorporate views in the equation, I would suggest multiplying the Bayesian average you get from the likes and dislikes by the logarithm of the number of views. That should sort things out pretty well.
Unless you want to go with multi-factor ranking, where likes, dislikes, and views are each counted separately and given individual weights. The math is more involved and it takes some tweaking, but it tends to give better results. Consider, for example, that people will often "like" a video that they find mildly amusing, but they'll only "dislike" if they find it objectionable. A dislike is a much stronger indication than a like.
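A sketch of the log-of-views suggestion (the prior mean and weight of the Bayesian average are my assumptions, not from the answer):

```python
import math

def score(likes, dislikes, views, prior_mean=0.5, prior_weight=10):
    """Bayesian average of the like ratio, damped toward prior_mean,
    then scaled by log(views) as suggested above."""
    bayes = (likes + prior_mean * prior_weight) / (likes + dislikes + prior_weight)
    return bayes * math.log(views + 1)

# 9 likes out of 18 views vs 9 likes out of 500 views:
print(score(9, 9, 18) > score(9, 0, 18))   # False: dislikes lower the score
print(score(9, 0, 500) > score(9, 0, 18))  # True: more views, same likes
```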
|
A simple approach would be to come up with a suitable scale factor for each average - and then sum the "weights". The difficult part would be tweaking the scale factors to produce the desired ordering.
From your example data, a starting point might be something like:
```
Weighted Rating = (AV * (1 / 50)) + (AL * 3) - (AD * 6)
```
Key & Explanation
-----------------
**AV** = Average views per day:
*5000 is high so divide by 50 to bring the weight down to 100 in this case.*
**AL** = Average likes per day:
*100 in 3 days = 33.33 is high so multiply by 3 to bring the weight up to 100 in this case.*
**AD** = Average dislikes per day:
*10,000 seems an extreme value here - would agree with Jim Mischel's point that dislikes may be more significant than likes so am initially going with a negative scale factor of twice the size of the "likes" scale factor.*
This gives the following results (see [SQL Fiddle Demo](http://sqlfiddle.com/#!2/94d9a/4)):
```
ID TITLE SCORE
-----------------------------
3 Epic Fail 60.8
2 Silly Dog 4.166866
1 Funny Cat 1.396528
5 Trololool -1.666766
4 Duck Song -14950
```
[Am deliberately keeping this simple to present the idea of a starting point - but with real data you might find linear scaling isn't sufficient - in which case you could consider bandings or logarithmic scaling.]
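The starting-point formula as code (AV * (1/50) is written as AV/50):

```python
def weighted_rating(av, al, ad):
    # Weighted Rating = (AV * (1 / 50)) + (AL * 3) - (AD * 6)
    return av / 50 + al * 3 - ad * 6

print(weighted_rating(100, 10, 5))  # 2.0 + 30 - 30 = 2.0
```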
|
22,949,270
|
Is there a way to switch this check off (and back on later) at runtime?
The motivation is that I need to use third-party libraries which do not care about mixing tabs and spaces, so running my code with the [`-t` switch](https://docs.python.org/2/using/cmdline.html#cmdoption-t "switch") issues warnings.
(I hope an analogous method can be used for the `-b` switch.)
**edit:** I forgot to note that the library already mixes tabs and spaces in one file, which is why I see the warnings.
|
2014/04/08
|
[
"https://Stackoverflow.com/questions/22949270",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/542196/"
] |
Taking a straight percentage of views doesn't give an accurate representation of the item's popularity, either. Although 9 likes out of 18 is "stronger" than 9 likes out of 500, the fact that one video got 500 views and the other got only 18 is a much stronger indication of the video's popularity.
A video that gets a lot of views usually means that it's very popular across a wide range of viewers. That it only gets a small percentage of likes or dislikes is usually a secondary consideration. A video that gets a small number of views and a large number of likes is usually an indication of a video that's very narrowly targeted.
If you want to incorporate views in the equation, I would suggest multiplying the Bayesian average you get from the likes and dislikes by the logarithm of the number of views. That should sort things out pretty well.
Unless you want to go with multi-factor ranking, where likes, dislikes, and views are each counted separately and given individual weights. The math is more involved and it takes some tweaking, but it tends to give better results. Consider, for example, that people will often "like" a video that they find mildly amusing, but they'll only "dislike" if they find it objectionable. A dislike is a much stronger indication than a like.
|
Every video have:
* likes
* dislikes
* views
* upload\_date
So we can deduct the following parameters from them:
* like\_rate = likes/views
* dislike\_rate = dislikes/views
* view\_rate = views/number\_of\_website\_users
* video\_age = count\_days(upload\_date, today)
* avg\_views = views/video\_age
* avg\_likes = likes/video\_age
* avg\_dislikes = dislikes/video\_age
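A sketch of these derived parameters (function and argument names are mine; `site_users` stands in for number\_of\_website\_users):

```python
from datetime import date

def derived_params(likes, dislikes, views, upload_date, site_users,
                   today=date(2014, 4, 8)):
    video_age = max((today - upload_date).days, 1)  # in days; avoid dividing by 0
    return {
        "like_rate": likes / views,
        "dislike_rate": dislikes / views,
        "view_rate": views / site_users,
        "video_age": video_age,
        "avg_views": views / video_age,
        "avg_likes": likes / video_age,
        "avg_dislikes": dislikes / video_age,
    }

p = derived_params(100, 10, 5000, date(2014, 4, 5), 100000)
print(p["video_age"])  # 3
```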
Before we can choose the formula to use, we need to specify how video popularity should behave. One way is to list the properties of a popular video:
1. A popular video is a recent one in most cases
2. The older a video gets, the higher avg\_views it requires to become popular
3. A video with a like\_rate over like\_rate\_threshold, or a dislike\_rate over dislike\_rate\_threshold, can compete by how far it exceeds its threshold against how old it gets
4. A high view\_rate for a video is a good indicator for suggesting that video to a user who has not watched it before
5. If avg\_likes or avg\_dislikes account for most of avg\_views, the video is considered currently active, and for active videos we don't really need to check how old they are
Conclusion: I don't have a formula, but one can be constructed by converting one unit into another's axis, for example discounting a video's age in days based on a calculation over avg\_likes, avg\_dislikes, and avg\_views
|
22,949,270
|
Is there a way to switch this check off (and back on later) at runtime?
The motivation is that I need to use third-party libraries which do not care about mixing tabs and spaces, so running my code with the [`-t` switch](https://docs.python.org/2/using/cmdline.html#cmdoption-t "switch") issues warnings.
(I hope an analogous method can be used for the `-b` switch.)
**edit:** I forgot to note that the library already mixes tabs and spaces in one file, which is why I see the warnings.
|
2014/04/08
|
[
"https://Stackoverflow.com/questions/22949270",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/542196/"
] |
Taking a straight percentage of views doesn't give an accurate representation of the item's popularity, either. Although 9 likes out of 18 is "stronger" than 9 likes out of 500, the fact that one video got 500 views and the other got only 18 is a much stronger indication of the video's popularity.
A video that gets a lot of views usually means that it's very popular across a wide range of viewers. That it only gets a small percentage of likes or dislikes is usually a secondary consideration. A video that gets a small number of views and a large number of likes is usually an indication of a video that's very narrowly targeted.
If you want to incorporate views in the equation, I would suggest multiplying the Bayesian average you get from the likes and dislikes by the logarithm of the number of views. That should sort things out pretty well.
Unless you want to go with multi-factor ranking, where likes, dislikes, and views are each counted separately and given individual weights. The math is more involved and it takes some tweaking, but it tends to give better results. Consider, for example, that people will often "like" a video that they find mildly amusing, but they'll only "dislike" if they find it objectionable. A dislike is a much stronger indication than a like.
|
Since no one has pointed it out yet (and I'm a bit surprised), I'll do it. The problem with any ranking algorithm ***we*** might come up with is that it's based on ***our*** point of view. What you're certainly looking for is an algorithm that accommodates the ***median user*** point of view.
This is no new idea. Netflix had it some time ago, only they personalized it, basing theirs on individual selections. We are looking - as I said - for the median user best ranking.
So how to achieve it? As others have suggested, you are looking for a function R(L,D,V,U) that returns a real number for the sort key. R() is likely to be quite non-linear.
This is a classical machine learning problem. The "training data" consists of user selections. When a user selects a movie, it's a statement about the goodness of the ranking: selecting a high-ranked one is a vote of confidence. A low-ranked selection is a rebuke. Function R() should revise itself accordingly. Initially, the current ranking system can be used to train the system to mirror its selections. From there it will adapt to user feedback.
There are several schemes and a huge research literature on machine learning for problems like this: regression modeling, neural networks, representation learning, etc. See for example [the Wikipedia page](http://en.wikipedia.org/wiki/Machine_learning) for some pointers.
I could suggest some schemes, but won't unless there is interest in this approach. Say "yes" in comments if this is true.
Implementation will be non-trivial - certainly more than just tweaking your `SELECT` statement. But on the plus side you'll be able to claim your customers are getting what they're asking for in very good conscience!
|
22,949,270
|
Is there a way to switch this check off (and back on later) at runtime?
The motivation is that I need to use third-party libraries which do not care about mixing tabs and spaces, so running my code with the [`-t` switch](https://docs.python.org/2/using/cmdline.html#cmdoption-t "switch") issues warnings.
(I hope an analogous method can be used for the `-b` switch.)
**edit:** I forgot to note that the library already mixes tabs and spaces in one file, which is why I see the warnings.
|
2014/04/08
|
[
"https://Stackoverflow.com/questions/22949270",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/542196/"
] |
I can point you to a non-parametric way to get the best ordering with respect to a weighted linear scoring system without knowing exactly what weights you want to use (just constraints on the weights). First though, note that average daily views might be misleading because movies are probably downloaded less in later years. So the first thing I would do is fit a polynomial model (degree 10 should be good enough) that predicts total number of views as a function of how many days the movie has been available. Then, once you have your fit, then for each date you get predicted total number of views, which is what you divide by to get "relative average number of views" which is a multiplier indicator which tells you how many times more likely (or less likely) the movie is to be watched compared to what you expect on average given the data. So 2 would mean the movie is watched twice as much, and 1/2 would mean the movie is watched half as much. If you want 2 and 1/2 to be "negatives" of each other which sort of makes sense from a scoring perspective, then take the log of the multiplier to get the score.
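A sketch of that first step (made-up numbers, and a much lower polynomial degree because the sample here is tiny; degree 10 assumes you have many movies):

```python
import numpy as np

# days each movie has been available vs. its total views (made-up data)
days = np.array([30.0, 90.0, 180.0, 365.0, 730.0])
views = np.array([1000.0, 4000.0, 9000.0, 20000.0, 45000.0])

coeffs = np.polyfit(days, views, deg=2)   # fit expected views as f(days)
expected = np.polyval(coeffs, days)
multiplier = views / expected             # 2 = watched twice as much as expected
score = np.log(multiplier)                # log makes 2 and 1/2 negatives of each other
```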
Now, there are several quantities you can compute to include in an overall score, like the (log) "relative average number of views" I mentioned above, and (likes/total views) and (dislikes / total views). US News and World Report ranks universities each year, and they just use a weighted sum of 7 different category scores to get an overall score for each university that they rank by. So using a weighted linear combination of category scores is definitely not a bad way to go. (Noting that you may want to do something like a log transform on some categories before taking the linear combination of scores). The problem is you might not know exactly what weights to use to give the "most desirable" ranking. The first thing to note is that if you want the weights on the same scale, then you should normalize each category score so that it has standard deviation equal to 1 across all movies. Then, e.g., if you use equal weights, then each category is truly weighted equally. So then the question is what kinds of weights you want to use. Clearly the weights for relative number of views and proportion of likes should be positive, and the weight for proportion of dislikes should be negative, so multiply the dislike score by -1 and then you can assume all weights are positive. If you believe each category should contribute at least 20%, then you get that each weight is at least 0.2 times the sum of weights. If you believe that dislikes are more important that likes, then you can say (dislike weight) >= c\*(like weight) for some c > 1, or (dislike\_weight) >= c\*(sum of weights) + (like weight) for some c > 0. Similarly you can define other linear constraints on the weights that reflect your beliefs about what the weights should be, without picking exact values for the weights.
Now here comes the fun part, which is the main thrust of my post. If you have linear inequality constraints on the weights, all of the form that a linear combination of the weights is greater than or equal to 0, but you don't know what weights to use, then you can simply compute all possible top-10 or top-20 rankings of movies that you can get for any choice of weights that satisfy your constraints, and then choose the top-k ordering which is supported by the largest VOLUME of weights, where the volume of weights is the solid angle of the polyhedral cone of weights which results in the particular top-k ordering. Then, once you've chosen the "most supported" top-k ranking, you can restrict the scoring parameters to be in the cone that gives you that ranking, and remove the top k movies, and compute all possibilities for the next top-10 or top-20 ranking of the remaining movies when the weights are restricted to respect the original top-k movies' ranking. Computing all obtainable top-k rankings of movies for restricted weights can be done much, much faster than enumerating all n(n-1)...(n-k+1) top-k possible rankings and trying them all out. If you have two or three categories then using polytope construction methods the obtainable top-k rankings can be computed in linear time in terms of the output size, i.e. the number of obtainable top-k rankings. The polyhedral computation approach also gives the inequalities that define the cone of scoring weights that give each top-k ranking, also in linear time if you have two or three categories. Then to get the volume of weights that give each ranking, you triangulate the cone and intersect with the unit sphere and compute the areas of the spherical triangles that you get. (Again linear complexity if the number of categories is 2 or 3).
Furthermore, if you scale your categories to be in a range like [0,50] and round to the nearest integer, then you can prove that the number of obtainable top-k rankings is actually quite small if the number of categories is like 5 or less. (Even if you have a lot of movies and k is high). And when you fix the ordering for the current top group of movies and restrict the parameters to be in the cone that yields the fixed top ordering, this will further restrict the output size for the obtainable next best top-k movies. The output size does depend (polynomially) on k which is why I recommended setting k=10 or 20 and computing top-k movies and choosing the best (largest volume) ordering and fixing it, and then computing the next best top-k movies that respect the ordering of the original top-k etc.
Anyway if this approach sounds appealing to you (iteratively finding successive choices of top-k rankings that are supported by the largest volume of weights that satisfy your weight constraints), let me know and I can produce and post a write-up on the polyhedral computations needed as well as a link to software that will allow you to do it with minimal extra coding on your part. In the meantime here is a paper <http://arxiv.org/abs/0805.1026> I wrote on a similar study of 7-category university ranking data where the weights were simply restricted to all be non-negative (generalizing to arbitrary linear constraints on weights is straightforward).
|
A simple approach would be to come up with a suitable scale factor for each average - and then sum the "weights". The difficult part would be tweaking the scale factors to produce the desired ordering.
From your example data, a starting point might be something like:
```
Weighted Rating = (AV * (1 / 50)) + (AL * 3) - (AD * 6)
```
Key & Explanation
-----------------
**AV** = Average views per day:
*5000 is high so divide by 50 to bring the weight down to 100 in this case.*
**AL** = Average likes per day:
*100 in 3 days = 33.33 is high so multiply by 3 to bring the weight up to 100 in this case.*
**AD** = Average dislikes per day:
*10,000 seems an extreme value here - would agree with Jim Mischel's point that dislikes may be more significant than likes so am initially going with a negative scale factor of twice the size of the "likes" scale factor.*
This gives the following results (see [SQL Fiddle Demo](http://sqlfiddle.com/#!2/94d9a/4)):
```
ID TITLE SCORE
-----------------------------
3 Epic Fail 60.8
2 Silly Dog 4.166866
1 Funny Cat 1.396528
5 Trololool -1.666766
4 Duck Song -14950
```
[Am deliberately keeping this simple to present the idea of a starting point - but with real data you might find linear scaling isn't sufficient - in which case you could consider bandings or logarithmic scaling.]
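For reference, the formula above is trivial to express in code (a sketch; the inputs are the per-day averages AV, AL, and AD from the key):

```python
def weighted_rating(avg_views, avg_likes, avg_dislikes):
    """Weighted Rating = (AV * (1 / 50)) + (AL * 3) - (AD * 6)."""
    return avg_views / 50 + 3 * avg_likes - 6 * avg_dislikes
```

With the scale factors chosen above, 5000 average views per day or 33.33 average likes per day each contribute roughly 100 points, while dislikes are penalized twice as heavily as likes are rewarded.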
|
22,949,270
|
Is there a way to switch off (and on later) this check at runtime?
The motivation is that I need to use third party libraries which do not care about tabs and spaces mixing and thus running my code with [`-t` switch](https://docs.python.org/2/using/cmdline.html#cmdoption-t "switch") issues warnings.
(I hope that analogous method can be used for the `-b` switch.)
**edit:** I forgot to note that the library already mixes tabs and spaces in one file and that's why I see the warnings.
|
2014/04/08
|
[
"https://Stackoverflow.com/questions/22949270",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/542196/"
] |
I can point you to a non-parametric way to get the best ordering with respect to a weighted linear scoring system without knowing exactly what weights you want to use (just constraints on the weights). First though, note that average daily views might be misleading because movies are probably downloaded less in later years. So the first thing I would do is fit a polynomial model (degree 10 should be good enough) that predicts total number of views as a function of how many days the movie has been available. Then, once you have your fit, then for each date you get predicted total number of views, which is what you divide by to get "relative average number of views" which is a multiplier indicator which tells you how many times more likely (or less likely) the movie is to be watched compared to what you expect on average given the data. So 2 would mean the movie is watched twice as much, and 1/2 would mean the movie is watched half as much. If you want 2 and 1/2 to be "negatives" of each other which sort of makes sense from a scoring perspective, then take the log of the multiplier to get the score.
Now, there are several quantities you can compute to include in an overall score, like the (log) "relative average number of views" I mentioned above, and (likes/total views) and (dislikes / total views). US News and World Report ranks universities each year, and they just use a weighted sum of 7 different category scores to get an overall score for each university that they rank by. So using a weighted linear combination of category scores is definitely not a bad way to go. (Noting that you may want to do something like a log transform on some categories before taking the linear combination of scores). The problem is you might not know exactly what weights to use to give the "most desirable" ranking. The first thing to note is that if you want the weights on the same scale, then you should normalize each category score so that it has standard deviation equal to 1 across all movies. Then, e.g., if you use equal weights, then each category is truly weighted equally. So then the question is what kinds of weights you want to use. Clearly the weights for relative number of views and proportion of likes should be positive, and the weight for proportion of dislikes should be negative, so multiply the dislike score by -1 and then you can assume all weights are positive. If you believe each category should contribute at least 20%, then you get that each weight is at least 0.2 times the sum of weights. If you believe that dislikes are more important than likes, then you can say (dislike weight) >= c\*(like weight) for some c > 1, or (dislike\_weight) >= c\*(sum of weights) + (like weight) for some c > 0. Similarly you can define other linear constraints on the weights that reflect your beliefs about what the weights should be, without picking exact values for the weights.
Now here comes the fun part, which is the main thrust of my post. If you have linear inequality constraints on the weights, all of the form that a linear combination of the weights is greater than or equal to 0, but you don't know what weights to use, then you can simply compute all possible top-10 or top-20 rankings of movies that you can get for any choice of weights that satisfy your constraints, and then choose the top-k ordering which is supported by the largest VOLUME of weights, where the volume of weights is the solid angle of the polyhedral cone of weights which results in the particular top-k ordering. Then, once you've chosen the "most supported" top-k ranking, you can restrict the scoring parameters to be in the cone that gives you that ranking, and remove the top k movies, and compute all possibilities for the next top-10 or top-20 ranking of the remaining movies when the weights are restricted to respect the original top-k movies' ranking. Computing all obtainable top-k rankings of movies for restricted weights can be done much, much faster than enumerating all n(n-1)...(n-k+1) top-k possible rankings and trying them all out. If you have two or three categories then using polytope construction methods the obtainable top-k rankings can be computed in linear time in terms of the output size, i.e. the number of obtainable top-k rankings. The polyhedral computation approach also gives the inequalities that define the cone of scoring weights that give each top-k ranking, also in linear time if you have two or three categories. Then to get the volume of weights that give each ranking, you triangulate the cone and intersect with the unit sphere and compute the areas of the spherical triangles that you get. (Again linear complexity if the number of categories is 2 or 3).
Furthermore, if you scale your categories to be in a range like [0,50] and round to the nearest integer, then you can prove that the number of obtainable top-k rankings is actually quite small if the number of categories is like 5 or less. (Even if you have a lot of movies and k is high). And when you fix the ordering for the current top group of movies and restrict the parameters to be in the cone that yields the fixed top ordering, this will further restrict the output size for the obtainable next best top-k movies. The output size does depend (polynomially) on k which is why I recommended setting k=10 or 20 and computing top-k movies and choosing the best (largest volume) ordering and fixing it, and then computing the next best top-k movies that respect the ordering of the original top-k etc.
Anyway if this approach sounds appealing to you (iteratively finding successive choices of top-k rankings that are supported by the largest volume of weights that satisfy your weight constraints), let me know and I can produce and post a write-up on the polyhedral computations needed as well as a link to software that will allow you to do it with minimal extra coding on your part. In the meantime here is a paper <http://arxiv.org/abs/0805.1026> I wrote on a similar study of 7-category university ranking data where the weights were simply restricted to all be non-negative (generalizing to arbitrary linear constraints on weights is straightforward).
|
Every video has:
* likes
* dislikes
* views
* upload\_date
So we can deduct the following parameters from them:
* like\_rate = likes/views
* dislike\_rate = dislikes/views
* view\_rate = views/number\_of\_website\_users
* video\_age = count\_days(upload\_date, today)
* avg\_views = views/video\_age
* avg\_likes = likes/video\_age
* avg\_dislikes = dislikes/video\_age
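The derived parameters above can be computed with a small helper (a sketch; the function name and the total-user-count argument are illustrative assumptions):

```python
from datetime import date

def derived_params(likes, dislikes, views, upload_date, n_users, today=None):
    """Compute the derived popularity parameters for one video."""
    today = today or date.today()
    video_age = (today - upload_date).days or 1  # avoid division by zero on day 0
    return {
        "like_rate": likes / views,
        "dislike_rate": dislikes / views,
        "view_rate": views / n_users,
        "video_age": video_age,
        "avg_views": views / video_age,
        "avg_likes": likes / video_age,
        "avg_dislikes": dislikes / video_age,
    }
```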
Before we can set the formula to be used, we need to specify how the popularity of different videos should be compared. One way is to list the properties of a popular video:
1. A popular video is a recent one in most cases
2. The older a video gets, the higher avg\_views it requires to become popular
3. A video with a like\_rate over like\_rate\_threshold or a dislike\_rate over dislike\_rate\_threshold, can compete by the difference from its threshold with how old it gets
4. A high view\_rate of a video is a good indicator to suggest that video to a user who have not watched it before
5. If avg\_likes or avg\_dislikes make up most of avg\_views, the video is considered active at the moment; for active videos we don't really need to check how old they are
Conclusion: I don't have a formula, but one can be constructed by converting one unit into another's axis, like cutting a video age by days based on a calculation made using avg\_likes, avg\_dislikes, and avg\_views
|
22,949,270
|
Is there a way to switch off (and on later) this check at runtime?
The motivation is that I need to use third party libraries which do not care about tabs and spaces mixing and thus running my code with [`-t` switch](https://docs.python.org/2/using/cmdline.html#cmdoption-t "switch") issues warnings.
(I hope that analogous method can be used for the `-b` switch.)
**edit:** I forgot to note that the library already mixes tabs and spaces in one file and that's why I see the warnings.
|
2014/04/08
|
[
"https://Stackoverflow.com/questions/22949270",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/542196/"
] |
I can point you to a non-parametric way to get the best ordering with respect to a weighted linear scoring system without knowing exactly what weights you want to use (just constraints on the weights). First though, note that average daily views might be misleading because movies are probably downloaded less in later years. So the first thing I would do is fit a polynomial model (degree 10 should be good enough) that predicts total number of views as a function of how many days the movie has been available. Then, once you have your fit, then for each date you get predicted total number of views, which is what you divide by to get "relative average number of views" which is a multiplier indicator which tells you how many times more likely (or less likely) the movie is to be watched compared to what you expect on average given the data. So 2 would mean the movie is watched twice as much, and 1/2 would mean the movie is watched half as much. If you want 2 and 1/2 to be "negatives" of each other which sort of makes sense from a scoring perspective, then take the log of the multiplier to get the score.
Now, there are several quantities you can compute to include in an overall score, like the (log) "relative average number of views" I mentioned above, and (likes/total views) and (dislikes / total views). US News and World Report ranks universities each year, and they just use a weighted sum of 7 different category scores to get an overall score for each university that they rank by. So using a weighted linear combination of category scores is definitely not a bad way to go. (Noting that you may want to do something like a log transform on some categories before taking the linear combination of scores). The problem is you might not know exactly what weights to use to give the "most desirable" ranking. The first thing to note is that if you want the weights on the same scale, then you should normalize each category score so that it has standard deviation equal to 1 across all movies. Then, e.g., if you use equal weights, then each category is truly weighted equally. So then the question is what kinds of weights you want to use. Clearly the weights for relative number of views and proportion of likes should be positive, and the weight for proportion of dislikes should be negative, so multiply the dislike score by -1 and then you can assume all weights are positive. If you believe each category should contribute at least 20%, then you get that each weight is at least 0.2 times the sum of weights. If you believe that dislikes are more important than likes, then you can say (dislike weight) >= c\*(like weight) for some c > 1, or (dislike\_weight) >= c\*(sum of weights) + (like weight) for some c > 0. Similarly you can define other linear constraints on the weights that reflect your beliefs about what the weights should be, without picking exact values for the weights.
Now here comes the fun part, which is the main thrust of my post. If you have linear inequality constraints on the weights, all of the form that a linear combination of the weights is greater than or equal to 0, but you don't know what weights to use, then you can simply compute all possible top-10 or top-20 rankings of movies that you can get for any choice of weights that satisfy your constraints, and then choose the top-k ordering which is supported by the largest VOLUME of weights, where the volume of weights is the solid angle of the polyhedral cone of weights which results in the particular top-k ordering. Then, once you've chosen the "most supported" top-k ranking, you can restrict the scoring parameters to be in the cone that gives you that ranking, and remove the top k movies, and compute all possibilities for the next top-10 or top-20 ranking of the remaining movies when the weights are restricted to respect the original top-k movies' ranking. Computing all obtainable top-k rankings of movies for restricted weights can be done much, much faster than enumerating all n(n-1)...(n-k+1) top-k possible rankings and trying them all out. If you have two or three categories then using polytope construction methods the obtainable top-k rankings can be computed in linear time in terms of the output size, i.e. the number of obtainable top-k rankings. The polyhedral computation approach also gives the inequalities that define the cone of scoring weights that give each top-k ranking, also in linear time if you have two or three categories. Then to get the volume of weights that give each ranking, you triangulate the cone and intersect with the unit sphere and compute the areas of the spherical triangles that you get. (Again linear complexity if the number of categories is 2 or 3).
Furthermore, if you scale your categories to be in a range like [0,50] and round to the nearest integer, then you can prove that the number of obtainable top-k rankings is actually quite small if the number of categories is like 5 or less. (Even if you have a lot of movies and k is high). And when you fix the ordering for the current top group of movies and restrict the parameters to be in the cone that yields the fixed top ordering, this will further restrict the output size for the obtainable next best top-k movies. The output size does depend (polynomially) on k which is why I recommended setting k=10 or 20 and computing top-k movies and choosing the best (largest volume) ordering and fixing it, and then computing the next best top-k movies that respect the ordering of the original top-k etc.
Anyway if this approach sounds appealing to you (iteratively finding successive choices of top-k rankings that are supported by the largest volume of weights that satisfy your weight constraints), let me know and I can produce and post a write-up on the polyhedral computations needed as well as a link to software that will allow you to do it with minimal extra coding on your part. In the meantime here is a paper <http://arxiv.org/abs/0805.1026> I wrote on a similar study of 7-category university ranking data where the weights were simply restricted to all be non-negative (generalizing to arbitrary linear constraints on weights is straightforward).
|
Since no one has pointed it out yet (and I'm a bit surprised), I'll do it. The problem with any ranking algorithm ***we*** might come up with is that it's based on ***our*** point of view. What you're certainly looking for is an algorithm that accommodates the ***median user*** point of view.
This is no new idea. Netflix had it some time ago, only they personalized it, basing theirs on individual selections. We are looking - as I said - for the median user best ranking.
So how to achieve it? As others have suggested, you are looking for a function R(L,D,V,U) that returns a real number for the sort key. R() is likely to be quite non-linear.
This is a classical machine learning problem. The "training data" consists of user selections. When a user selects a movie, it's a statement about the goodness of the ranking: selecting a high-ranked one is a vote of confidence. A low-ranked selection is a rebuke. Function R() should revise itself accordingly. Initially, the current ranking system can be used to train the system to mirror its selections. From there it will adapt to user feedback.
There are several schemes and a huge research literature on machine learning for problems like this: regression modeling, neural networks, representation learning, etc. See for example [the Wikipedia page](http://en.wikipedia.org/wiki/Machine_learning) for some pointers.
I could suggest some schemes, but won't unless there is interest in this approach. Say "yes" in comments if this is true.
Implementation will be non-trivial - certainly more than just tweaking your `SELECT` statement. But on the plus side you'll be able to claim your customers are getting what they're asking for in very good conscience!
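As a rough illustration (not a production learning-to-rank system), the feedback loop described above could look like a simple online logistic update on a linear scoring function R. The features and learning rate below are assumptions made up for the sketch:

```python
import math

def score(w, x):
    # Linear scoring function R(L, D, V, U) as a weighted sum of features.
    return sum(wi * xi for wi, xi in zip(w, x))

def update(w, x, clicked, lr=0.1):
    """One online logistic-regression step: nudge the weights so that
    clicked items score higher and skipped items score lower."""
    p = 1.0 / (1.0 + math.exp(-score(w, x)))
    g = (1.0 if clicked else 0.0) - p
    return [wi + lr * g * xi for wi, xi in zip(w, x)]

# Hypothetical features per video: (like_rate, dislike_rate, log_views).
w = [0.0, 0.0, 0.0]
for _ in range(200):
    w = update(w, [0.9, 0.05, 1.2], clicked=True)   # well-liked video gets clicked
    w = update(w, [0.1, 0.40, 0.8], clicked=False)  # disliked video gets skipped
```

After training, the well-liked video outscores the disliked one, which is exactly the "vote of confidence / rebuke" behaviour described above.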
|
73,668,351
|
I have connected an Arduino to a raspberry pi so that a specific event is triggered when I send a signal(in this case a number). When I send a number with the script and tell it just to print in serial monitor it works, when I try and just have it run the motors on start it works fine, however when combining the two: having it run a specific command if a particular number is received nothing happens. If anyone could point to the flaw here, I would be very grateful.
Python Code:
```py
import serial, time
arduino = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)
cmd = ''
while cmd != '0':
cmd = input('Enter a cmd ')
arduino.write(cmd.encode('ascii'))
```
Arduino Code:
```cpp
#include <Arduino.h>
const byte MOTOR_A = 3; // Motor 2 Interrupt Pin - INT 1 - Right Motor
const byte MOTOR_B = 2; // Motor 1 Interrupt Pin - INT 0 - Left Motor
// Constant for steps in disk
const float stepcount = 20.00; // 20 Slots in disk, change if different
// Constant for wheel diameter
const float wheeldiameter = 66.10; // Wheel diameter in millimeters, change if different
const float gear_ratio = 34;
const float PPR = 12;
// Integers for pulse counters
volatile int counter_A = 0;
volatile int counter_B = 0;
// Motor A
int enA = 10;
int in1 = 9;
int in2 = 8;
// Motor B
int enB = 5;
int in3 = 7;
int in4 = 6;
// Interrupt Service Routines
// Motor A pulse count ISR
void ISR_countA()
{
counter_A++; // increment Motor A counter value
}
// Motor B pulse count ISR
void ISR_countB()
{
counter_B++; // increment Motor B counter value
}
// Function to convert from centimeters to steps
int CMtoSteps(float cm)
{
float circumference = (wheeldiameter * 3.14) / 10; // Calculate wheel circumference in cm
return int(cm * gear_ratio * PPR / circumference);
}
// Function to Move Forward
void MoveForward(int steps, int mspeed)
{
counter_A = 0; // reset counter A to zero
counter_B = 0; // reset counter B to zero
// Set Motor A forward
digitalWrite(in1, HIGH);
digitalWrite(in2, LOW);
// Set Motor B forward
digitalWrite(in3, HIGH);
digitalWrite(in4, LOW);
// Go forward until step value is reached
while (steps > counter_A or steps > counter_B) {
if (steps > counter_A) {
analogWrite(enA, mspeed);
} else {
analogWrite(enA, 0);
}
if (steps > counter_B) {
analogWrite(enB, mspeed);
} else {
analogWrite(enB, 0);
}
}
// Stop when done
analogWrite(enA, 0);
analogWrite(enB, 0);
counter_A = 0; // reset counter A to zero
counter_B = 0; // reset counter B to zero
}
// Function to Move in Reverse
void MoveReverse(int steps, int mspeed)
{
counter_A = 0; // reset counter A to zero
counter_B = 0; // reset counter B to zero
// Set Motor A reverse
digitalWrite(in1, LOW);
digitalWrite(in2, HIGH);
// Set Motor B reverse
digitalWrite(in3, LOW);
digitalWrite(in4, HIGH);
// Go in reverse until step value is reached
while (steps > counter_A && steps > counter_B) {
if (steps > counter_A) {
analogWrite(enA, mspeed);
} else {
analogWrite(enA, 0);
}
if (steps > counter_B) {
analogWrite(enB, mspeed);
} else {
analogWrite(enB, 0);
}
}
// Stop when done
analogWrite(enA, 0);
analogWrite(enB, 0);
counter_A = 0; // reset counter A to zero
counter_B = 0; // reset counter B to zero
}
// Function to Spin Right
void SpinRight(int steps, int mspeed)
{
counter_A = 0; // reset counter A to zero
counter_B = 0; // reset counter B to zero
// Set Motor A reverse
digitalWrite(in1, LOW);
digitalWrite(in2, HIGH);
// Set Motor B forward
digitalWrite(in3, HIGH);
digitalWrite(in4, LOW);
// Go until step value is reached
while (steps > counter_A && steps > counter_B) {
if (steps > counter_A) {
analogWrite(enA, mspeed);
} else {
analogWrite(enA, 0);
}
if (steps > counter_B) {
analogWrite(enB, mspeed);
} else {
analogWrite(enB, 0);
}
}
// Stop when done
analogWrite(enA, 0);
analogWrite(enB, 0);
counter_A = 0; // reset counter A to zero
counter_B = 0; // reset counter B to zero
}
// Function to Spin Left
void SpinLeft(int steps, int mspeed)
{
counter_A = 0; // reset counter A to zero
counter_B = 0; // reset counter B to zero
// Set Motor A forward
digitalWrite(in1, HIGH);
digitalWrite(in2, LOW);
// Set Motor B reverse
digitalWrite(in3, LOW);
digitalWrite(in4, HIGH);
// Go until step value is reached
while (steps > counter_A && steps > counter_B) {
if (steps > counter_A) {
analogWrite(enA, mspeed);
} else {
analogWrite(enA, 0);
}
if (steps > counter_B) {
analogWrite(enB, mspeed);
} else {
analogWrite(enB, 0);
}
}
// Stop when done
analogWrite(enA, 0);
analogWrite(enB, 0);
counter_A = 0; // reset counter A to zero
counter_B = 0; // reset counter B to zero
}
void setup()
{
Serial.begin(9600);
// Attach the Interrupts to their ISR's
pinMode(MOTOR_A,INPUT);
pinMode(MOTOR_B,INPUT);
pinMode(in1,OUTPUT);
pinMode(in2,OUTPUT);
pinMode(in3,OUTPUT);
pinMode(in4,OUTPUT);
pinMode(enA,OUTPUT);
pinMode(enB,OUTPUT);
attachInterrupt(digitalPinToInterrupt (MOTOR_A), ISR_countA, RISING); // Increase counter A when speed sensor pin goes High
attachInterrupt(digitalPinToInterrupt (MOTOR_B), ISR_countB, RISING); // Increase counter B when speed sensor pin goes High
}
void loop()
{
delay(100);
int compareOne = 1;
int compareTwo = 2;
int compareThree = 3;
if (Serial.available() > 0){
String stringFromSerial = Serial.readString();
if (stringFromSerial.toInt() == compareOne){
Serial.println("Forward");
MoveForward(CMtoSteps(50), 255); // Forward half a metre at 255 speed
}
if (stringFromSerial.toInt() == compareTwo){
Serial.println("Spin Right");
SpinRight(CMtoSteps(10), 255); // Right half a metre at 255 speed
}
if (stringFromSerial.toInt() == compareThree){
Serial.println("Spin Left");
SpinLeft(CMtoSteps(10), 255); // Right half a metre at 255 speed
}
else {
Serial.println("Not equal");
}
}
  // Put whatever you want here!
MoveReverse(CMtoSteps(25.4),255); // Reverse 25.4 cm at 255 speed
}
```
UPDATE: I have changed the `loop` so that it compares ints instead of strings as per @GrooverFromHolland suggestion. Still, nothing happens when I input from python but it is printed in the serial monitor. Why the motors spin when I just trigger it in the loop directly for testing, but not when commanded to via serial monitor is my issue. As well as this, I have discovered that the interrupts are not working for some reason. Any help appreciated.
|
2022/09/09
|
[
"https://Stackoverflow.com/questions/73668351",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12527861/"
] |
Since you're using the `firstIndex` function of an array in `func indexOfItem(_ item: Item) -> Int?`, the `Item` has to be a concrete `Equatable` type (behind the scenes, `firstIndex` compares each element of the array and returns the index of the matching element).
There are 2 ways to do this
* First is using associatedtype to keep your protocol generic
```
protocol Item: Equatable {
var name: String { get }
}
protocol Container {
associatedtype Item
var items: [Item] { get }
}
struct MyItem: Item {
var name: String
}
extension Container where Item == MyItem {
func indexOfItem(_ item: Item) -> Int? {
return items.firstIndex(of: item)
}
}
```
* Second is using an equatable object `MyItem` instead of a protocol `Item` inside the `Container` protocol
```
protocol Item {
var name: String { get }
}
protocol Container {
var items: [MyItem] { get }
}
struct MyItem: Item, Equatable {
var name: String
}
extension Container {
func findIndex(of item: MyItem) -> Int? {
return items.firstIndex(of: item)
}
}
```
|
Finally found a simple enough solution:
To make the protocol generic with an associated type and constrain
this type to Equatable.
```
public protocol Container {
associatedtype EquatableItem: Item, Equatable
var items: [EquatableItem] {get}
}
public protocol Item {
var name: String {get}
}
public extension Container {
func indexOfItem(_ item: EquatableItem) -> Int? {
items.firstIndex(of: item)
}
}
```
This compiles and now if I have some types
```
struct SomeContainer {
var items: [SomeItem]
}
struct SomeItem: Item, Equatable {
var name: String
}
```
I only need to resolve associatedtype to provide protocol conformance for SomeContainer type:
```
extension SomeContainer: Container {
typealias EquatableItem = SomeItem
}
```
|
59,796,680
|
I recently moved to a place with terrible internet connection. Ever since then I have been having huge issues getting my programming environments set up with all the tools I need - you don't realize how many things you need to download until each one of those things takes over a day.
For this post I would like to try to figure out how to deal with this in pip.
**The Problem**
Almost every time I `pip install` something it ends out timing out somewhere in the middle. It takes many tries until I get lucky enough to have it complete without a time out. This happens with many different things I have tried, big or small. Every time an install fails the next time starts all over again from 0%, no matter how far I got before.
I get something along the lines of
```
pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
```
**What I want to happen**
Ideally I would like to either extend the definition of time pip uses before it declares a timeout or be able to disable the option of a timeout altogether.
I am not sure either of these are possible, so if anyone has any other solution for me that would be greatly appreciated as well.
**Other Information**
Not sure this helps any but what I found is that the only reliable way for me to download anything here is using torrents, as they do not restart a download once they lose connection, rather they always continue where they left off. If there is a way to use this fact in any way that would be nice too.
|
2020/01/18
|
[
"https://Stackoverflow.com/questions/59796680",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6036156/"
] |
Use option `--timeout <sec>` to set socket time out.
Also, as @Iain Shelvington mentioned, `timeout = <sec>` in [pip configuration](https://pip.pypa.io/en/stable/user_guide/#configuration) will also work.
*TIP: Every time you want to know something (maybe an option) about a command (tool), before googling, check its manual page with `man <command>`, use `<command> --help`, or check that command's docs online (which may be better than Google).*
|
To set the `timeout` to 30 seconds, for example, the easiest way is executing `pip config set global.timeout 30`, or going to the pip configuration file ***pip.ini*** located in the directory ***~\AppData\Roaming\pip*** in the case of the Windows operating system. If the file does not exist there, create it and write:
```
[global]
timeout = 30
```
|
45,934,259
|
I am working on a simple project in PhpStorm and installed the GAE plugin and SDK. Running a server and showing the project works, but when I try to deploy my application I get this kind of error (this is a PHP project):
```
C:\Python27\python.exe "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py" update .
10:08 AM Application: gtmdocx; version: None
10:08 AM Host: appengine.google.com
Traceback (most recent call last):
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 133, in <module>
run_file(__file__, globals())
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 129, in run_file
execfile(_PATHS.script_file(script_name), globals_)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5518, in <module>
main(sys.argv)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5509, in main
result = AppCfgApp(argv).Run()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 2969, in Run
self.action(self)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5165, in __call__
return method()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3897, in Update
self._UpdateWithParsedAppYaml(appyaml, self.basepath)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3918, in _UpdateWithParsedAppYaml
updatecheck.CheckForUpdates()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\sdk_update_checker.py", line 245, in CheckForUpdates
runtime=runtime))
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appengine_rpc_httplib2.py", line 246, in Send
url, method=method, body=payload, headers=headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1626, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1368, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1288, in _conn_request
conn.connect()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1082, in connect
raise SSLHandshakeError(e)
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
Process finished with exit code 1
```
I've tried uninstalling and upgrading Python (I'm now on 2.7.9), but the error persists. I also tried removing `cacerts.txt`, with no luck; I still get this problem:
```
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
```
I hope someone has encountered this problem before and can help me with this.
Here is my App.yaml file:
```
runtime: php55
api_version: 1
threadsafe: true
service: default
application: gtmdocx
handlers:
- url: .*
script: main.php
login: admin
```
|
2017/08/29
|
[
"https://Stackoverflow.com/questions/45934259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6428568/"
] |
The traceback indicates the failure happens when trying to check for SDK updates, so you *should* be able to work around it by using `appcfg.py`'s `--skip_sdk_update_check` option.
I'm not using the PHP SDK, but I found a similar failure in the SDK upgrade check for the python development server, my solution for that could be applicable in your case as well. See [Google App Engine SSL Certificate Error](https://stackoverflow.com/questions/43221963/google-app-engine-ssl-certificate-error/43233424#43233424).
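A sketch of what that invocation might look like (the `APPCFG` path is a placeholder; substitute the `appcfg.py` path from your own SDK install, as shown at the top of the traceback):

```shell
# Placeholder path; replace with the appcfg.py location from your SDK install
APPCFG="appcfg.py"

# Same update command the IDE runs, plus the flag that skips the
# failing SDK self-update check (shown via echo, since running it
# requires the App Engine SDK to be present):
echo python "$APPCFG" --skip_sdk_update_check update .
```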
|
If it really is an SSL handshake error, check whether the machine you are using is behind a firewall. If it is, you may need to ask your network administrators to open up the network; alternatively, try a network that is not behind a firewall. I might be wrong, but I have been in this situation before.
|
45,934,259
|
I am working on a simple project in PhpStorm and have installed the GAE plugin and SDK. Running the server and viewing the project works, but when I try to deploy my application I get the following error (this is a PHP project):
```
C:\Python27\python.exe "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py" update .
10:08 AM Application: gtmdocx; version: None
10:08 AM Host: appengine.google.com
Traceback (most recent call last):
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 133, in <module>
run_file(__file__, globals())
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 129, in run_file
execfile(_PATHS.script_file(script_name), globals_)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5518, in <module>
main(sys.argv)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5509, in main
result = AppCfgApp(argv).Run()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 2969, in Run
self.action(self)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5165, in __call__
return method()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3897, in Update
self._UpdateWithParsedAppYaml(appyaml, self.basepath)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3918, in _UpdateWithParsedAppYaml
updatecheck.CheckForUpdates()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\sdk_update_checker.py", line 245, in CheckForUpdates
runtime=runtime))
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appengine_rpc_httplib2.py", line 246, in Send
url, method=method, body=payload, headers=headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1626, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1368, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1288, in _conn_request
conn.connect()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1082, in connect
raise SSLHandshakeError(e)
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
Process finished with exit code 1
```
I've tried uninstalling and upgrading Python (I'm now on 2.7.9), but the error persists. I also tried removing `cacerts.txt`, with no luck; I still get this problem:
```
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
```
I hope someone has encountered this problem before and can help me with this.
Here is my App.yaml file:
```
runtime: php55
api_version: 1
threadsafe: true
service: default
application: gtmdocx
handlers:
- url: .*
script: main.php
login: admin
```
|
2017/08/29
|
[
"https://Stackoverflow.com/questions/45934259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6428568/"
] |
Finally got it working. Deploying from the PhpStorm IDE doesn't work, but deploying with gcloud on the command line works perfectly. Maybe PhpStorm adds some configuration or parameters when deploying; I used the command line and it worked like a charm:
```
gcloud app deploy app.yaml --project <project name> --promote --quiet
```
Hope this helps someone.
|
The traceback indicates the failure happens when trying to check for SDK updates, so you *should* be able to work around it by using `appcfg.py`'s `--skip_sdk_update_check` option.
I'm not using the PHP SDK, but I found a similar failure in the SDK upgrade check for the python development server, my solution for that could be applicable in your case as well. See [Google App Engine SSL Certificate Error](https://stackoverflow.com/questions/43221963/google-app-engine-ssl-certificate-error/43233424#43233424).
|
45,934,259
|
I am working on a simple project in PhpStorm and have installed the GAE plugin and SDK. Running the server and viewing the project works, but when I try to deploy my application I get the following error (this is a PHP project):
```
C:\Python27\python.exe "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py" update .
10:08 AM Application: gtmdocx; version: None
10:08 AM Host: appengine.google.com
Traceback (most recent call last):
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 133, in <module>
run_file(__file__, globals())
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 129, in run_file
execfile(_PATHS.script_file(script_name), globals_)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5518, in <module>
main(sys.argv)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5509, in main
result = AppCfgApp(argv).Run()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 2969, in Run
self.action(self)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5165, in __call__
return method()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3897, in Update
self._UpdateWithParsedAppYaml(appyaml, self.basepath)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3918, in _UpdateWithParsedAppYaml
updatecheck.CheckForUpdates()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\sdk_update_checker.py", line 245, in CheckForUpdates
runtime=runtime))
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appengine_rpc_httplib2.py", line 246, in Send
url, method=method, body=payload, headers=headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1626, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1368, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1288, in _conn_request
conn.connect()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1082, in connect
raise SSLHandshakeError(e)
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
Process finished with exit code 1
```
I've tried uninstalling and upgrading Python (I'm now on 2.7.9), but the error persists. I also tried removing `cacerts.txt`, with no luck; I still get this problem:
```
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
```
I hope someone has encountered this problem before and can help me with this.
Here is my App.yaml file:
```
runtime: php55
api_version: 1
threadsafe: true
service: default
application: gtmdocx
handlers:
- url: .*
script: main.php
login: admin
```
|
2017/08/29
|
[
"https://Stackoverflow.com/questions/45934259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6428568/"
] |
The traceback indicates the failure happens when trying to check for SDK updates, so you *should* be able to work around it by using `appcfg.py`'s `--skip_sdk_update_check` option.
I'm not using the PHP SDK, but I found a similar failure in the SDK upgrade check for the python development server, my solution for that could be applicable in your case as well. See [Google App Engine SSL Certificate Error](https://stackoverflow.com/questions/43221963/google-app-engine-ssl-certificate-error/43233424#43233424).
|
The situation seems clear: Google wants to move you to a premature version of the Google Cloud SDK CLI tool, whose documentation is still only halfway done and which is still missing basic features. It tests your patience.
What you can do for now is remove the current GAE SDK (version 57/58) and install an older version of the GAE launcher. I am using version 49. [Download Link](https://storage.googleapis.com/appengine-sdks/featured/GoogleAppEngine-1.9.49.msi)
|
45,934,259
|
I am working on a simple project in PhpStorm and have installed the GAE plugin and SDK. Running the server and viewing the project works, but when I try to deploy my application I get the following error (this is a PHP project):
```
C:\Python27\python.exe "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py" update .
10:08 AM Application: gtmdocx; version: None
10:08 AM Host: appengine.google.com
Traceback (most recent call last):
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 133, in <module>
run_file(__file__, globals())
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 129, in run_file
execfile(_PATHS.script_file(script_name), globals_)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5518, in <module>
main(sys.argv)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5509, in main
result = AppCfgApp(argv).Run()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 2969, in Run
self.action(self)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5165, in __call__
return method()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3897, in Update
self._UpdateWithParsedAppYaml(appyaml, self.basepath)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3918, in _UpdateWithParsedAppYaml
updatecheck.CheckForUpdates()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\sdk_update_checker.py", line 245, in CheckForUpdates
runtime=runtime))
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appengine_rpc_httplib2.py", line 246, in Send
url, method=method, body=payload, headers=headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1626, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1368, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1288, in _conn_request
conn.connect()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1082, in connect
raise SSLHandshakeError(e)
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
Process finished with exit code 1
```
I've tried uninstalling and upgrading Python (I'm now on 2.7.9), but the error persists. I also tried removing `cacerts.txt`, with no luck; I still get this problem:
```
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
```
I hope someone has encountered this problem before and can help me with this.
Here is my App.yaml file:
```
runtime: php55
api_version: 1
threadsafe: true
service: default
application: gtmdocx
handlers:
- url: .*
script: main.php
login: admin
```
|
2017/08/29
|
[
"https://Stackoverflow.com/questions/45934259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6428568/"
] |
Finally got it working. Deploying from the PhpStorm IDE doesn't work, but deploying with gcloud on the command line works perfectly. Maybe PhpStorm adds some configuration or parameters when deploying; I used the command line and it worked like a charm:
```
gcloud app deploy app.yaml --project <project name> --promote --quiet
```
Hope this helps someone.
|
If it really is an SSL handshake error, check whether the machine you are using is behind a firewall. If it is, you may need to ask your network administrators to open up the network; alternatively, try a network that is not behind a firewall. I might be wrong, but I have been in this situation before.
|
45,934,259
|
I am working on a simple project in PhpStorm and have installed the GAE plugin and SDK. Running the server and viewing the project works, but when I try to deploy my application I get the following error (this is a PHP project):
```
C:\Python27\python.exe "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py" update .
10:08 AM Application: gtmdocx; version: None
10:08 AM Host: appengine.google.com
Traceback (most recent call last):
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 133, in <module>
run_file(__file__, globals())
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 129, in run_file
execfile(_PATHS.script_file(script_name), globals_)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5518, in <module>
main(sys.argv)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5509, in main
result = AppCfgApp(argv).Run()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 2969, in Run
self.action(self)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5165, in __call__
return method()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3897, in Update
self._UpdateWithParsedAppYaml(appyaml, self.basepath)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3918, in _UpdateWithParsedAppYaml
updatecheck.CheckForUpdates()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\sdk_update_checker.py", line 245, in CheckForUpdates
runtime=runtime))
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appengine_rpc_httplib2.py", line 246, in Send
url, method=method, body=payload, headers=headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1626, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1368, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1288, in _conn_request
conn.connect()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1082, in connect
raise SSLHandshakeError(e)
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
Process finished with exit code 1
```
I've tried uninstalling and upgrading Python (I'm now on 2.7.9), but the error persists. I also tried removing `cacerts.txt`, with no luck; I still get this problem:
```
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
```
I hope someone has encountered this problem before and can help me with this.
Here is my App.yaml file:
```
runtime: php55
api_version: 1
threadsafe: true
service: default
application: gtmdocx
handlers:
- url: .*
script: main.php
login: admin
```
|
2017/08/29
|
[
"https://Stackoverflow.com/questions/45934259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6428568/"
] |
Upgrading httplib2 fixed it for me:
```
sudo pip2 install --upgrade httplib2 -t /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2/
```
|
If it really is an SSL handshake error, check whether the machine you are using is behind a firewall. If it is, you may need to ask your network administrators to open up the network; alternatively, try a network that is not behind a firewall. I might be wrong, but I have been in this situation before.
|
45,934,259
|
I am working on a simple project in PhpStorm and have installed the GAE plugin and SDK. Running the server and viewing the project works, but when I try to deploy my application I get the following error (this is a PHP project):
```
C:\Python27\python.exe "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py" update .
10:08 AM Application: gtmdocx; version: None
10:08 AM Host: appengine.google.com
Traceback (most recent call last):
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 133, in <module>
run_file(__file__, globals())
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 129, in run_file
execfile(_PATHS.script_file(script_name), globals_)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5518, in <module>
main(sys.argv)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5509, in main
result = AppCfgApp(argv).Run()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 2969, in Run
self.action(self)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5165, in __call__
return method()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3897, in Update
self._UpdateWithParsedAppYaml(appyaml, self.basepath)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3918, in _UpdateWithParsedAppYaml
updatecheck.CheckForUpdates()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\sdk_update_checker.py", line 245, in CheckForUpdates
runtime=runtime))
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appengine_rpc_httplib2.py", line 246, in Send
url, method=method, body=payload, headers=headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1626, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1368, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1288, in _conn_request
conn.connect()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1082, in connect
raise SSLHandshakeError(e)
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
Process finished with exit code 1
```
I've tried uninstalling and upgrading Python (I'm now on 2.7.9), but the error persists. I also tried removing `cacerts.txt`, with no luck; I still get this problem:
```
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
```
I hope someone has encountered this problem before and can help me with this.
Here is my App.yaml file:
```
runtime: php55
api_version: 1
threadsafe: true
service: default
application: gtmdocx
handlers:
- url: .*
script: main.php
login: admin
```
|
2017/08/29
|
[
"https://Stackoverflow.com/questions/45934259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6428568/"
] |
Finally got it working. Deploying from the PhpStorm IDE doesn't work, but deploying with gcloud on the command line works perfectly. Maybe PhpStorm adds some configuration or parameters when deploying; I used the command line and it worked like a charm:
```
gcloud app deploy app.yaml --project <project name> --promote --quiet
```
Hope this helps someone.
|
The situation seems clear: Google wants to move you to a premature version of the Google Cloud SDK CLI tool, whose documentation is still only halfway done and which is still missing basic features. It tests your patience.
What you can do for now is remove the current GAE SDK (version 57/58) and install an older version of the GAE launcher. I am using version 49. [Download Link](https://storage.googleapis.com/appengine-sdks/featured/GoogleAppEngine-1.9.49.msi)
|
45,934,259
|
I am working on a simple project in PhpStorm and have installed the GAE plugin and SDK. Running the server and viewing the project works, but when I try to deploy my application I get the following error (this is a PHP project):
```
C:\Python27\python.exe "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py" update .
10:08 AM Application: gtmdocx; version: None
10:08 AM Host: appengine.google.com
Traceback (most recent call last):
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 133, in <module>
run_file(__file__, globals())
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 129, in run_file
execfile(_PATHS.script_file(script_name), globals_)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5518, in <module>
main(sys.argv)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5509, in main
result = AppCfgApp(argv).Run()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 2969, in Run
self.action(self)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5165, in __call__
return method()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3897, in Update
self._UpdateWithParsedAppYaml(appyaml, self.basepath)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3918, in _UpdateWithParsedAppYaml
updatecheck.CheckForUpdates()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\sdk_update_checker.py", line 245, in CheckForUpdates
runtime=runtime))
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appengine_rpc_httplib2.py", line 246, in Send
url, method=method, body=payload, headers=headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1626, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1368, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1288, in _conn_request
conn.connect()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1082, in connect
raise SSLHandshakeError(e)
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
Process finished with exit code 1
```
I've tried uninstalling and upgrading Python (I'm now on 2.7.9), but the error persists. I also tried removing `cacerts.txt`, with no luck; I still get this problem:
```
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
```
I hope someone has encountered this problem before and can help me with this.
Here is my App.yaml file:
```
runtime: php55
api_version: 1
threadsafe: true
service: default
application: gtmdocx
handlers:
- url: .*
script: main.php
login: admin
```
|
2017/08/29
|
[
"https://Stackoverflow.com/questions/45934259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6428568/"
] |
Finally got it working. Deploying from the PhpStorm IDE doesn't work, but deploying with gcloud on the command line works perfectly. Maybe PhpStorm adds some config or parameters when deploying, but I used the command line and it worked like a charm:
```
gcloud app deploy app.yaml --project <project name> --promote --quiet
```
Hope this helps someone.
|
Upgrading httplib2 fixed it for me!
```
sudo pip2 install --upgrade httplib2 -t /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2/
```
|
45,934,259
|
I am working on a simple project on PhpStorm and installed GAE plugin and SDK. Running a server and show the project works, but when I try to deploy my application I get this kind of error: (This is a PHP project)
```
C:\Python27\python.exe "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py" update .
10:08 AM Application: gtmdocx; version: None
10:08 AM Host: appengine.google.com
Traceback (most recent call last):
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 133, in <module>
run_file(__file__, globals())
File "C:/Users/asim/AppData/Local/Google/Cloud SDK/google-cloud-sdk/platform/google_appengine/appcfg.py", line 129, in run_file
execfile(_PATHS.script_file(script_name), globals_)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5518, in <module>
main(sys.argv)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5509, in main
result = AppCfgApp(argv).Run()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 2969, in Run
self.action(self)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 5165, in __call__
return method()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3897, in Update
self._UpdateWithParsedAppYaml(appyaml, self.basepath)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appcfg.py", line 3918, in _UpdateWithParsedAppYaml
updatecheck.CheckForUpdates()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\sdk_update_checker.py", line 245, in CheckForUpdates
runtime=runtime))
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\tools\appengine_rpc_httplib2.py", line 246, in Send
url, method=method, body=payload, headers=headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1626, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1368, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1288, in _conn_request
conn.connect()
File "C:\Users\asim\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\lib\httplib2\httplib2\__init__.py", line 1082, in connect
raise SSLHandshakeError(e)
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
Process finished with exit code 1
```
I've tried uninstalling and upgrading Python (I'm now using 2.7.9), but the error won't go away. I also tried removing `cacerts.txt`, but still no luck with this problem:
```
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
```
I hope anyone has encountered this problem before and can help me with this.
Here is my App.yaml file:
```
runtime: php55
api_version: 1
threadsafe: true
service: default
application: gtmdocx
handlers:
- url: .*
script: main.php
login: admin
```
|
2017/08/29
|
[
"https://Stackoverflow.com/questions/45934259",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6428568/"
] |
Upgrading httplib2 fixed it for me!
```
sudo pip2 install --upgrade httplib2 -t /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/httplib2/
```
|
The situation is clear: Google wants you to move to a premature version of the Google Cloud SDK CLI tool, for which even the documentation is only half finished. Basic features are still pending in the Google Cloud SDK CLI tool. It tests your patience.
What you can do for now is remove the current GAE version 57/58 and install an older version of the GAE Launcher. I am using version 49. [Download Link](https://storage.googleapis.com/appengine-sdks/featured/GoogleAppEngine-1.9.49.msi)
|
42,081,376
|
I have exactly the opposite issue to the one described [here](https://stackoverflow.com/q/11489330/2215679).
In my case I have:
logging.py
```
import logging
log = logging.getLogger(..)
```
I got this error:
```
AttributeError: 'module' object has no attribute 'getLogger'
```
This happens only in a Python 2.7 project run under the Pyramid framework.
When I run it in another project (Python 3.6, no framework), it works perfectly.
PS: there is a [similar issue](https://stackoverflow.com/questions/5299199/python-importing-a-global-site-packages-module-rather-than-the-file-of-the-sam), but it is a different case; in my case it is a global package that is not present in any `sys.path` folder, so none of the solutions from that question worked for me.
Please don't mark this question as a duplicate.
|
2017/02/07
|
[
"https://Stackoverflow.com/questions/42081376",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2215679/"
] |
I found a solution. Just putting:
```
from __future__ import absolute_import
```
at the top of the file resolves the issue.
Source: <https://docs.python.org/2/library/__future__.html>
As you can see there, in Python 3 absolute imports are the default.
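The underlying problem is that a local file named `logging.py` shadows the stdlib module. Here is a self-contained sketch that reproduces the question's error; note that in Python 3 the shadowing comes from the script's directory being first on `sys.path` (not from implicit relative imports), so this is an illustration of the shadowing itself, not of the `absolute_import` fix:

```python
import os
import subprocess
import sys
import tempfile

# A script named logging.py shadows the stdlib logging module:
# inside it, "import logging" finds the script itself, so
# logging.getLogger is missing, as in the question.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "logging.py")
    with open(path, "w") as f:
        f.write("import logging\n"
                "log = logging.getLogger(__name__)\n")
    proc = subprocess.run(
        [sys.executable, path],
        capture_output=True,
        text=True,
    )
    print("AttributeError" in proc.stderr)  # True: stdlib was shadowed
```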
|
>
> It is better to rename your local file so that it doesn't clash with the builtin module name.
>
44,395,941
|
OK, so I am currently messing around coding hangman in Python, and I was wondering if I can clear what it says in the Python shell, as I don't want the person to just read the word.
```
import time
keyword = input(" Please enter the word you want the person to guess")
lives = int(input("How many lives would you like to have?"))
print ("There are ", len(keyword), "letters in the word")
time.sleep(2)
guess = input("please enter your guess")
```
I would like to remove all the text in the shell.
|
2017/06/06
|
[
"https://Stackoverflow.com/questions/44395941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8020756/"
] |
If you are a Windows user, use this:
```
import os
os.system("cls")
```
On Mac/Linux, use:
```
import os
os.system("clear")
```
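The two branches can be combined into one portable helper (a small sketch; the function name is just illustrative):

```python
import os

def clear_screen():
    # os.name is "nt" on Windows; "posix" on Mac/Linux,
    # so this picks the right shell command either way.
    os.system("cls" if os.name == "nt" else "clear")

clear_screen()
```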
|
Try this:
```
import subprocess
import time
tmp = subprocess.call('clear', shell=True)  # use 'cls' on Windows
keyword = input(" Please enter the word you want the person to guess")
lives = int(input("How many lives would you like to have?"))
print ("There are ", len(keyword), "letters in the word")
time.sleep(2)
```
Save the code in a Python file, then execute it from the shell.
|
44,395,941
|
OK, so I am currently messing around coding hangman in Python, and I was wondering if I can clear what it says in the Python shell, as I don't want the person to just read the word.
```
import time
keyword = input(" Please enter the word you want the person to guess")
lives = int(input("How many lives would you like to have?"))
print ("There are ", len(keyword), "letters in the word")
time.sleep(2)
guess = input("please enter your guess")
```
I would like to remove all the text in the shell.
|
2017/06/06
|
[
"https://Stackoverflow.com/questions/44395941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8020756/"
] |
If you are a Windows user, use this:
```
import os
os.system("cls")
```
On Mac/Linux, use:
```
import os
os.system("clear")
```
|
```
print("\n" * 100)
```
There is no other way to do it than to just spam the console.
|
44,395,941
|
OK, so I am currently messing around coding hangman in Python, and I was wondering if I can clear what it says in the Python shell, as I don't want the person to just read the word.
```
import time
keyword = input(" Please enter the word you want the person to guess")
lives = int(input("How many lives would you like to have?"))
print ("There are ", len(keyword), "letters in the word")
time.sleep(2)
guess = input("please enter your guess")
```
I would like to remove all the text in the shell.
|
2017/06/06
|
[
"https://Stackoverflow.com/questions/44395941",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8020756/"
] |
If you are a Windows user, use this:
```
import os
os.system("cls")
```
On Mac/Linux, use:
```
import os
os.system("clear")
```
|
`os.system("cls")` for windows or `os.system("clear")` for mac/linux.
Then put that line of code where you wish for the program to delete all the text in the shell.
|
67,611,765
|
```
i = SomeIndex()
while mylist[i] is not None:
if mylist[i] == name:
return
foo()
i+=1
```
I want foo() to always run on the 1st iteration of the loop (if mylist[i] isn't `name`), but never on any iteration but the first. I know I could do the following, but I don't know if it's the most efficient and prettiest Python code:
```
i = SomeIndex()
FirstIter = True
while mylist[i] is not None:
if mylist[i] == name:
return
if FirstIter:
foo()
FirstIter = False
i+=1
```
|
2021/05/19
|
[
"https://Stackoverflow.com/questions/67611765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14725111/"
] |
Let's "pythonize" your example, step by step.
**1. Remove the `first_index` flag:**
```
start_idx = SomeIndex()
i = start_idx
while mylist[i] is not None:
if mylist[i] == name:
return
if i == start_idx:
foo()
i += 1
```
**2. Convert to `while True`:**
```
start_idx = SomeIndex()
i = start_idx
while True:
if mylist[i] is None:
break
if mylist[i] == name:
return
if i == start_idx:
foo()
i += 1
```
**3. Convert to `for i in range` loop:**
```
start_idx = SomeIndex()
for i in range(start_idx, len(mylist)):
if mylist[i] is None:
break
if mylist[i] == name:
return
if i == start_idx:
foo()
```
Note that this also fixes a bug by bounding `i < len(mylist)`.
---
Variations
----------
These are not equivalent to your code sample, but they might be relevant patterns for you to use.
**Variation 1:**
Find the index where `mylist[i] == name`:
```
index = next(
(
i
for i in range(start_idx, len(mylist))
if mylist[i] == name
),
None, # If no index is found, set to None.
)
```
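For example, with some sample data (the list contents here are chosen purely for illustration):

```python
mylist = ["alice", "bob", "carol", None]
name = "carol"
start_idx = 0

# First index in [start_idx, len(mylist)) whose element equals name,
# or None if no such index exists.
index = next(
    (i for i in range(start_idx, len(mylist)) if mylist[i] == name),
    None,
)
print(index)  # 2
```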
|
You are trying to emulate a do-while loop; take a look at [this question](https://stackoverflow.com/questions/743164/how-to-emulate-a-do-while-loop) if you want.
Since there is no do-while construct in Python, the simple idea is to move the first call to foo() out of the loop:
```
i = SomeIndex()
if mylist[i] is not None and mylist[i] != name:
    foo()
while mylist[i] is not None:
    if mylist[i] == name:
        return
    i += 1
```
Also, the way you iterate over the list is unnecessarily complex; you can simplify it like:
```
i = SomeIndex()
if mylist[i] != name:
    foo()
for element in mylist[i + 1:]:
    if element == name:
        return
```
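For reference, the general do-while shape in Python runs the body once before any test, with the condition checked at the bottom (a minimal sketch with a toy counter):

```python
# Emulate do-while: the body always executes at least once,
# and the loop condition is evaluated at the bottom of the loop.
count = 0
while True:
    count += 1       # loop body
    if count >= 3:   # "while count < 3" moved to the bottom
        break
print(count)  # 3
```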
|
67,611,765
|
```
i = SomeIndex()
while mylist[i] is not None:
if mylist[i] == name:
return
foo()
i+=1
```
I want foo() to always run on the 1st iteration of the loop (if mylist[i] isn't `name`), but never on any iteration but the first. I know I could do the following, but I don't know if it's the most efficient and prettiest Python code:
```
i = SomeIndex()
FirstIter = True
while mylist[i] is not None:
if mylist[i] == name:
return
if FirstIter:
foo()
FirstIter = False
i+=1
```
|
2021/05/19
|
[
"https://Stackoverflow.com/questions/67611765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14725111/"
] |
Personally, I like `mark_ends` from the third-party library `more-itertools`:
```
from more_itertools import mark_ends
i = SomeIndex()
for first, last, elem in mark_ends(mylist[i:]):
if elem == name:
return
if first:
foo()
```
`mark_ends` gives you a 3-tuple for every element in your iterable, in this case the sliced `mylist`. The tuples are `(True, False, elem_0), (False, False, elem_1), ..., (False, False, elem_n-2), (False, True, elem_n-1)`. In your use case you never use the middle element of the tuples.
If for some reason you can't or don't want to use the external library you can swipe the code from <https://more-itertools.readthedocs.io/en/stable/_modules/more_itertools/more.html#mark_ends>
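If you just need the documented behaviour rather than the library's exact code, a minimal reimplementation is short (a sketch based on the 3-tuple semantics described above, not the library's actual source):

```python
def mark_ends(iterable):
    """Yield (is_first, is_last, item) for each item, like more_itertools.mark_ends."""
    it = iter(iterable)
    try:
        prev = next(it)
    except StopIteration:
        return  # empty iterable: yield nothing
    first = True
    for item in it:
        yield first, False, prev
        first = False
        prev = item
    yield first, True, prev  # last item (also first, if length is 1)

print(list(mark_ends("abc")))
# [(True, False, 'a'), (False, False, 'b'), (False, True, 'c')]
```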
**Addendum:**
In light of the OP's requirement to let `foo` change the list, here's a quick modification:
```
from more_itertools import mark_ends
i = SomeIndex()
for first, last, (j, elem) in mark_ends(enumerate(mylist[i:], start=i)):
if elem == name:
return
if first:
foo(mylist, j)
```
`j` now gives you the index that you need to tell `foo` what to change.
|
You are trying to emulate a do-while loop; take a look at [this question](https://stackoverflow.com/questions/743164/how-to-emulate-a-do-while-loop) if you want.
Since there is no do-while construct in Python, the simple idea is to move the first call to foo() out of the loop:
```
i = SomeIndex()
if mylist[i] is not None and mylist[i] != name:
    foo()
while mylist[i] is not None:
    if mylist[i] == name:
        return
    i += 1
```
Also, the way you iterate over the list is unnecessarily complex; you can simplify it like:
```
i = SomeIndex()
if mylist[i] != name:
    foo()
for element in mylist[i + 1:]:
    if element == name:
        return
```
|
67,611,765
|
```
i = SomeIndex()
while mylist[i] is not None:
if mylist[i] == name:
return
foo()
i+=1
```
I want foo() to always run on the 1st iteration of the loop (if mylist[i] isn't `name`), but never on any iteration but the first. I know I could do the following, but I don't know if it's the most efficient and prettiest Python code:
```
i = SomeIndex()
FirstIter = True
while mylist[i] is not None:
if mylist[i] == name:
return
if FirstIter:
foo()
FirstIter = False
i+=1
```
|
2021/05/19
|
[
"https://Stackoverflow.com/questions/67611765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14725111/"
] |
Let's "pythonize" your example, step by step.
**1. Remove the `first_index` flag:**
```
start_idx = SomeIndex()
i = start_idx
while mylist[i] is not None:
if mylist[i] == name:
return
if i == start_idx:
foo()
i += 1
```
**2. Convert to `while True`:**
```
start_idx = SomeIndex()
i = start_idx
while True:
if mylist[i] is None:
break
if mylist[i] == name:
return
if i == start_idx:
foo()
i += 1
```
**3. Convert to `for i in range` loop:**
```
start_idx = SomeIndex()
for i in range(start_idx, len(mylist)):
if mylist[i] is None:
break
if mylist[i] == name:
return
if i == start_idx:
foo()
```
Note that this also fixes a bug by bounding `i < len(mylist)`.
---
Variations
----------
These are not equivalent to your code sample, but they might be relevant patterns for you to use.
**Variation 1:**
Find the index where `mylist[i] == name`:
```
index = next(
(
i
for i in range(start_idx, len(mylist))
if mylist[i] == name
),
None, # If no index is found, set to None.
)
```
|
Simplify your logic into independent steps. Yes, you will make one Boolean check a second time, but that costs far less time than you have already spent on this design problem.
```
start = SomeIndex()
if mylist[start] != name:
foo()
for idx in range(start, len(mylist)):
if mylist[idx] == name:
return
# remainder of loop
```
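Wrapped into a function with a trivial `foo`, the pattern behaves as intended (the function name and sample data are illustrative only):

```python
def search(mylist, start, name, foo):
    # foo runs at most once, up front; the loop only searches.
    if mylist[start] != name:
        foo()
    for idx in range(start, len(mylist)):
        if mylist[idx] == name:
            return idx
    return None

calls = []
print(search(["a", "b", "name"], 0, "name", lambda: calls.append(1)))  # 2
print(len(calls))  # foo ran exactly once: 1
```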
|
67,611,765
|
```
i = SomeIndex()
while mylist[i] is not None:
if mylist[i] == name:
return
foo()
i+=1
```
I want foo() to always run on the 1st iteration of the loop (if mylist[i] isn't `name`), but never on any iteration but the first. I know I could do the following, but I don't know if it's the most efficient and prettiest Python code:
```
i = SomeIndex()
FirstIter = True
while mylist[i] is not None:
if mylist[i] == name:
return
if FirstIter:
foo()
FirstIter = False
i+=1
```
|
2021/05/19
|
[
"https://Stackoverflow.com/questions/67611765",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/14725111/"
] |
Personally, I like `mark_ends` from the third-party library `more-itertools`:
```
from more_itertools import mark_ends
i = SomeIndex()
for first, last, elem in mark_ends(mylist[i:]):
if elem == name:
return
if first:
foo()
```
`mark_ends` gives you a 3-tuple for every element in your iterable, in this case the sliced `mylist`. The tuples are `(True, False, elem_0), (False, False, elem_1), ..., (False, False, elem_n-2), (False, True, elem_n-1)`. In your use case you never use the middle element of the tuples.
If for some reason you can't or don't want to use the external library you can swipe the code from <https://more-itertools.readthedocs.io/en/stable/_modules/more_itertools/more.html#mark_ends>
**Addendum:**
In light of the OP's requirement to let `foo` change the list, here's a quick modification:
```
from more_itertools import mark_ends
i = SomeIndex()
for first, last, (j, elem) in mark_ends(enumerate(mylist[i:], start=i)):
if elem == name:
return
if first:
foo(mylist, j)
```
`j` now gives you the index that you need to tell `foo` what to change.
|
Simplify your logic into independent steps. Yes, you will make one Boolean check a second time, but that costs far less time than you have already spent on this design problem.
```
start = SomeIndex()
if mylist[start] != name:
foo()
for idx in range(start, len(mylist)):
if mylist[idx] == name:
return
# remainder of loop
```
|
827,557
|
I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.
I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL.
I'm trying to use the re python module to validate off of the RFC 3986 Reg-ex (<http://www.ietf.org/rfc/rfc3986.txt>)
Below is a snippet which *should* work:
```
p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
self.url = url
return url
```
|
2009/05/06
|
[
"https://Stackoverflow.com/questions/827557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/91163/"
] |
`urlparse` quite happily accepts invalid URLs; it is more of a string-splitting library than any kind of validator. For example:
```
from urlparse import urlparse
urlparse('http://----')
# returns: ParseResult(scheme='http', netloc='----', path='', params='', query='', fragment='')
```
Depending on the situation, this might be fine.
If you mostly trust the data and just want to verify that the protocol is HTTP, then `urlparse` is perfect.
If you want to make the URL is actually a legal URL, use [the ridiculous regex](https://stackoverflow.com/questions/827557/how-do-you-validate-a-url-with-a-regular-expression-in-python/835527#835527)
If you want to make sure it's a real web address,
```
import urllib
try:
urllib.urlopen(url)
except IOError:
print "Not a real URL"
```
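In Python 3 the module moved to `urllib.parse`; the same split-then-check idea looks like this (a sketch, with a hypothetical helper name):

```python
from urllib.parse import urlparse  # Python 3 home of urlparse

def looks_like_http_url(url):
    # urlparse never raises on junk input, so inspect the parts yourself.
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

print(looks_like_http_url("http://example.com/path"))  # True
print(looks_like_http_url("http://----"))              # True (splits fine)
print(looks_like_http_url("not a url"))                # False
```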
|
The regex provided should match any URL of the form <http://www.ietf.org/rfc/rfc3986.txt>, and it does when tested in the Python interpreter.
What format did the URLs you've been having trouble parsing have?
|
827,557
|
I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.
I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL.
I'm trying to use the re python module to validate off of the RFC 3986 Reg-ex (<http://www.ietf.org/rfc/rfc3986.txt>)
Below is a snippet which *should* work:
```
p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
self.url = url
return url
```
|
2009/05/06
|
[
"https://Stackoverflow.com/questions/827557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/91163/"
] |
Nowadays, in 90% of cases, if you are working with URLs in Python you are probably already using python-requests. Hence the question: why not reuse the URL validation from requests?
```
from requests.models import PreparedRequest
import requests.exceptions
def check_url(url):
prepared_request = PreparedRequest()
try:
prepared_request.prepare_url(url, None)
return prepared_request.url
except requests.exceptions.MissingSchema as e:
raise SomeException
```
Features:
* Don't reinvent the wheel
* DRY
* Works offline
* Minimal resources
|
The regex provided should match any URL of the form <http://www.ietf.org/rfc/rfc3986.txt>, and it does when tested in the Python interpreter.
What format did the URLs you've been having trouble parsing have?
|
827,557
|
I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.
I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL.
I'm trying to use the re python module to validate off of the RFC 3986 Reg-ex (<http://www.ietf.org/rfc/rfc3986.txt>)
Below is a snippet which *should* work:
```
p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
self.url = url
return url
```
|
2009/05/06
|
[
"https://Stackoverflow.com/questions/827557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/91163/"
] |
Here's the complete regexp to parse a URL.
```none
(?:https?://(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)
\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d
+)){3}))(?::(?:\d+))?)(?:/(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA
-F\d]{2}))|[;:@&=])*)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d
]{2}))|[;:@&=])*))*)(?:\?(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d
]{2}))|[;:@&=])*))?)?)|(?:s?ftp://(?:(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),
]|(?:%[a-fA-F\d]{2}))|[;?&=])*)(?::(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:
%[a-fA-F\d]{2}))|[;?&=])*))?@)?(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\
d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(
?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d+))?))(?:/(?:(?:(?:(?:[a-zA-Z\d$\-
_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&=])*)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!
*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&=])*))*)(?:;type=[AIDaid])?)?)|(?:news
:(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[;/?:&=])+@(?:
(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:
(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3})))|(?:[a-zA
-Z](?:[a-zA-Z\d]|[_.+-])*)|\*))|(?:nntp://(?:(?:(?:(?:(?:[a-zA-Z\d](?:
(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA
-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d+))?)/(?:[a-zA-Z](?:[a-
zA-Z\d]|[_.+-])*)(?:/(?:\d+))?)|(?:telnet://(?:(?:(?:(?:(?:[a-zA-Z\d$\
-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[;?&=])*)(?::(?:(?:(?:[a-zA-Z\d$\-_.+!
*'(),]|(?:%[a-fA-F\d]{2}))|[;?&=])*))?@)?(?:(?:(?:(?:(?:[a-zA-Z\d](?:(
?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-
Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d+))?))/?)|(?:gopher://(?
:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z
](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?
:\d+))?)(?:/(?:[a-zA-Z\d$\-_.+!*'(),;/?:@&=]|(?:%[a-fA-F\d]{2}))(?:(?:
(?:[a-zA-Z\d$\-_.+!*'(),;/?:@&=]|(?:%[a-fA-F\d]{2}))*)(?:%09(?:(?:(?:[
a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[;:@&=])*)(?:%09(?:(?:[a-zA-
Z\d$\-_.+!*'(),;/?:@&=]|(?:%[a-fA-F\d]{2}))*))?)?)?)?)|(?:wais://(?:(?
:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?
:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d
+))?)/(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))*)(?:(?:/(?:(?:[
a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))*)/(?:(?:[a-zA-Z\d$\-_.+!*'()
,]|(?:%[a-fA-F\d]{2}))*))|\?(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-
F\d]{2}))|[;:@&=])*))?)|(?:mailto:(?:(?:[a-zA-Z\d$\-_.+!*'(),;/?:@&=]|
(?:%[a-fA-F\d]{2}))+))|(?:file://(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-
Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))
|(?:(?:\d+)(?:\.(?:\d+)){3}))|localhost)?/(?:(?:(?:(?:[a-zA-Z\d$\-_.+!
*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&=])*)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'()
,]|(?:%[a-fA-F\d]{2}))|[?:@&=])*))*))|(?:prospero://(?:(?:(?:(?:(?:[a-
zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d
]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d+))?)/(?:(?:(
?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&=])*)(?:/(?:(?:(?
:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&=])*))*)(?:(?:;(?:(?:
(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&])*)=(?:(?:(?:[a-zA
-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&])*)))*)|(?:ldap://(?:(?:(?
:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?
:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d
+))?))?/(?:(?:(?:(?:(?:(?:(?:[a-zA-Z\d]|%(?:3\d|[46][a-fA-F\d]|[57][Aa
\d]))|(?:%20))+|(?:OID|oid)\.(?:(?:\d+)(?:\.(?:\d+))*))(?:(?:%0[Aa])?(
?:%20)*)=(?:(?:%0[Aa])?(?:%20)*))?(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-
fA-F\d]{2}))*))(?:(?:(?:%0[Aa])?(?:%20)*)\+(?:(?:%0[Aa])?(?:%20)*)(?:(
?:(?:(?:(?:[a-zA-Z\d]|%(?:3\d|[46][a-fA-F\d]|[57][Aa\d]))|(?:%20))+|(?
:OID|oid)\.(?:(?:\d+)(?:\.(?:\d+))*))(?:(?:%0[Aa])?(?:%20)*)=(?:(?:%0[
Aa])?(?:%20)*))?(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))*)))*)
(?:(?:(?:(?:%0[Aa])?(?:%20)*)(?:[;,])(?:(?:%0[Aa])?(?:%20)*))(?:(?:(?:
(?:(?:(?:[a-zA-Z\d]|%(?:3\d|[46][a-fA-F\d]|[57][Aa\d]))|(?:%20))+|(?:O
ID|oid)\.(?:(?:\d+)(?:\.(?:\d+))*))(?:(?:%0[Aa])?(?:%20)*)=(?:(?:%0[Aa
])?(?:%20)*))?(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))*))(?:(?
:(?:%0[Aa])?(?:%20)*)\+(?:(?:%0[Aa])?(?:%20)*)(?:(?:(?:(?:(?:[a-zA-Z\d
]|%(?:3\d|[46][a-fA-F\d]|[57][Aa\d]))|(?:%20))+|(?:OID|oid)\.(?:(?:\d+
)(?:\.(?:\d+))*))(?:(?:%0[Aa])?(?:%20)*)=(?:(?:%0[Aa])?(?:%20)*))?(?:(
?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))*)))*))*(?:(?:(?:%0[Aa])?(
?:%20)*)(?:[;,])(?:(?:%0[Aa])?(?:%20)*))?)(?:\?(?:(?:(?:(?:[a-zA-Z\d$\
-_.+!*'(),]|(?:%[a-fA-F\d]{2}))+)(?:,(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%
[a-fA-F\d]{2}))+))*)?)(?:\?(?:base|one|sub)(?:\?(?:((?:[a-zA-Z\d$\-_.+
!*'(),;/?:@&=]|(?:%[a-fA-F\d]{2}))+)))?)?)?)|(?:(?:z39\.50[rs])://(?:(
?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](
?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\
d+))?)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))+)(?:\+(?
:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))+))*(?:\?(?:(?:[a-zA-Z\d
$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))+))?)?(?:;esn=(?:(?:[a-zA-Z\d$\-_.+!*
'(),]|(?:%[a-fA-F\d]{2}))+))?(?:;rs=(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[
a-fA-F\d]{2}))+)(?:\+(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))+
))*)?))|(?:cid:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[;?
:@&=])*))|(?:mid:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[
;?:@&=])*)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[;?:
@&=])*))?)|(?:vemmi://(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-
zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)
(?:\.(?:\d+)){3}))(?::(?:\d+))?)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?
:%[a-fA-F\d]{2}))|[/?:@&=])*)(?:(?:;(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?
:%[a-fA-F\d]{2}))|[/?:@&])*)=(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA
-F\d]{2}))|[/?:@&])*))*))?)|(?:imap://(?:(?:(?:(?:(?:(?:(?:[a-zA-Z\d$\
-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[&=~])+)(?:(?:;[Aa][Uu][Tt][Hh]=(?:\*|
(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[&=~])+))))?)|(?:(
?:;[Aa][Uu][Tt][Hh]=(?:\*|(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\
d]{2}))|[&=~])+)))(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}
))|[&=~])+))?))@)?(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z
\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\
.(?:\d+)){3}))(?::(?:\d+))?))/(?:(?:(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]
|(?:%[a-fA-F\d]{2}))|[&=~:@/])+)?;[Tt][Yy][Pp][Ee]=(?:[Ll](?:[Ii][Ss][
Tt]|[Ss][Uu][Bb])))|(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{
2}))|[&=~:@/])+)(?:\?(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}
))|[&=~:@/])+))?(?:(?:;[Uu][Ii][Dd][Vv][Aa][Ll][Ii][Dd][Ii][Tt][Yy]=(?
:[1-9]\d*)))?)|(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|
[&=~:@/])+)(?:(?:;[Uu][Ii][Dd][Vv][Aa][Ll][Ii][Dd][Ii][Tt][Yy]=(?:[1-9
]\d*)))?(?:/;[Uu][Ii][Dd]=(?:[1-9]\d*))(?:(?:/;[Ss][Ee][Cc][Tt][Ii][Oo
][Nn]=(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[&=~:@/])+))
)?)))?)|(?:nfs:(?:(?://(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a
-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+
)(?:\.(?:\d+)){3}))(?::(?:\d+))?)(?:(?:/(?:(?:(?:(?:(?:[a-zA-Z\d\$\-_.
!~*'(),])|(?:%[a-fA-F\d]{2})|[:@&=+])*)(?:/(?:(?:(?:[a-zA-Z\d\$\-_.!~*
'(),])|(?:%[a-fA-F\d]{2})|[:@&=+])*))*)?)))?)|(?:/(?:(?:(?:(?:(?:[a-zA
-Z\d\$\-_.!~*'(),])|(?:%[a-fA-F\d]{2})|[:@&=+])*)(?:/(?:(?:(?:[a-zA-Z\
d\$\-_.!~*'(),])|(?:%[a-fA-F\d]{2})|[:@&=+])*))*)?))|(?:(?:(?:(?:(?:[a
-zA-Z\d\$\-_.!~*'(),])|(?:%[a-fA-F\d]{2})|[:@&=+])*)(?:/(?:(?:(?:[a-zA
-Z\d\$\-_.!~*'(),])|(?:%[a-fA-F\d]{2})|[:@&=+])*))*)?)))
```
Given its complexity, I think you should go the urlparse way.
For completeness, here's the pseudo-BNF of the above regex (as documentation):
```
; The generic form of a URL is:
genericurl = scheme ":" schemepart
; Specific predefined schemes are defined here; new schemes
; may be registered with IANA
url = httpurl | ftpurl | newsurl |
nntpurl | telneturl | gopherurl |
waisurl | mailtourl | fileurl |
prosperourl | otherurl
; new schemes follow the general syntax
otherurl = genericurl
; the scheme is in lower case; interpreters should use case-ignore
scheme = 1*[ lowalpha | digit | "+" | "-" | "." ]
schemepart = *xchar | ip-schemepart
; URL schemeparts for ip based protocols:
ip-schemepart = "//" login [ "/" urlpath ]
login = [ user [ ":" password ] "@" ] hostport
hostport = host [ ":" port ]
host = hostname | hostnumber
hostname = *[ domainlabel "." ] toplabel
domainlabel = alphadigit | alphadigit *[ alphadigit | "-" ] alphadigit
toplabel = alpha | alpha *[ alphadigit | "-" ] alphadigit
alphadigit = alpha | digit
hostnumber = digits "." digits "." digits "." digits
port = digits
user = *[ uchar | ";" | "?" | "&" | "=" ]
password = *[ uchar | ";" | "?" | "&" | "=" ]
urlpath = *xchar ; depends on protocol see section 3.1
; The predefined schemes:
; FTP (see also RFC959)
ftpurl = "ftp://" login [ "/" fpath [ ";type=" ftptype ]]
fpath = fsegment *[ "/" fsegment ]
fsegment = *[ uchar | "?" | ":" | "@" | "&" | "=" ]
ftptype = "A" | "I" | "D" | "a" | "i" | "d"
; FILE
fileurl = "file://" [ host | "localhost" ] "/" fpath
; HTTP
httpurl = "http://" hostport [ "/" hpath [ "?" search ]]
hpath = hsegment *[ "/" hsegment ]
hsegment = *[ uchar | ";" | ":" | "@" | "&" | "=" ]
search = *[ uchar | ";" | ":" | "@" | "&" | "=" ]
; GOPHER (see also RFC1436)
gopherurl = "gopher://" hostport [ / [ gtype [ selector
[ "%09" search [ "%09" gopher+_string ] ] ] ] ]
gtype = xchar
selector = *xchar
gopher+_string = *xchar
; MAILTO (see also RFC822)
mailtourl = "mailto:" encoded822addr
encoded822addr = 1*xchar ; further defined in RFC822
; NEWS (see also RFC1036)
newsurl = "news:" grouppart
grouppart = "*" | group | article
group = alpha *[ alpha | digit | "-" | "." | "+" | "_" ]
article = 1*[ uchar | ";" | "/" | "?" | ":" | "&" | "=" ] "@" host
; NNTP (see also RFC977)
nntpurl = "nntp://" hostport "/" group [ "/" digits ]
; TELNET
telneturl = "telnet://" login [ "/" ]
; WAIS (see also RFC1625)
waisurl = waisdatabase | waisindex | waisdoc
waisdatabase = "wais://" hostport "/" database
waisindex = "wais://" hostport "/" database "?" search
waisdoc = "wais://" hostport "/" database "/" wtype "/" wpath
database = *uchar
wtype = *uchar
wpath = *uchar
; PROSPERO
prosperourl = "prospero://" hostport "/" ppath *[ fieldspec ]
ppath = psegment *[ "/" psegment ]
psegment = *[ uchar | "?" | ":" | "@" | "&" | "=" ]
fieldspec = ";" fieldname "=" fieldvalue
fieldname = *[ uchar | "?" | ":" | "@" | "&" ]
fieldvalue = *[ uchar | "?" | ":" | "@" | "&" ]
; Miscellaneous definitions
lowalpha = "a" | "b" | "c" | "d" | "e" | "f" | "g" | "h" |
"i" | "j" | "k" | "l" | "m" | "n" | "o" | "p" |
"q" | "r" | "s" | "t" | "u" | "v" | "w" | "x" |
"y" | "z"
hialpha = "A" | "B" | "C" | "D" | "E" | "F" | "G" | "H" | "I" |
"J" | "K" | "L" | "M" | "N" | "O" | "P" | "Q" | "R" |
"S" | "T" | "U" | "V" | "W" | "X" | "Y" | "Z"
alpha = lowalpha | hialpha
digit = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" |
"8" | "9"
safe = "$" | "-" | "_" | "." | "+"
extra = "!" | "*" | "'" | "(" | ")" | ","
national = "{" | "}" | "|" | "\" | "^" | "~" | "[" | "]" | "`"
punctuation = "<" | ">" | "#" | "%" | <">
reserved = ";" | "/" | "?" | ":" | "@" | "&" | "="
hex = digit | "A" | "B" | "C" | "D" | "E" | "F" |
"a" | "b" | "c" | "d" | "e" | "f"
escape = "%" hex hex
unreserved = alpha | digit | safe | extra
uchar = unreserved | escape
xchar = unreserved | reserved | escape
digits = 1*digit
```
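As a small illustration, the character-class rules above translate directly into regular expressions; for example the `escape` production ("%" followed by two `hex` digits) becomes the following (an illustrative sketch, not part of the RFC):

```python
import re

# escape = "%" hex hex, where hex = digit | A-F | a-f (per the BNF above)
escape_re = re.compile(r'%[0-9A-Fa-f]{2}')

print(bool(escape_re.fullmatch('%2F')))  # a valid escape -> True
print(bool(escape_re.fullmatch('%G1')))  # 'G' is not a hex digit -> False
```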
|
modified django url validation regex:
=====================================
```
import re
ul = '\u00a1-\uffff' # unicode letters range (must not be a raw string)
# IP patterns
ipv4_re = r'(?:25[0-5]|2[0-4]\d|[0-1]?\d?\d)(?:\.(?:25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}'
ipv6_re = r'\[[0-9a-f:\.]+\]'
# Host patterns
hostname_re = r'[a-z' + ul + r'0-9](?:[a-z' + ul + r'0-9-]{0,61}[a-z' + ul + r'0-9])?'
domain_re = r'(?:\.(?!-)[a-z' + ul + r'0-9-]{1,63}(?<!-))*'  # domain labels have a max length of 63 characters
tld_re = (
r'\.' # dot
r'(?!-)' # can't start with a dash
r'(?:[a-z' + ul + '-]{2,63}' # domain label
r'|xn--[a-z0-9]{1,59})' # or punycode label
r'(?<!-)' # can't end with a dash
r'\.?' # may have a trailing dot
)
host_re = '(' + hostname_re + domain_re + tld_re + '|localhost)'
regex = re.compile(
r'^(?:http|ftp)s?://' # http(s):// or ftp(s)://
r'(?:\S+(?::\S*)?@)?' # user:pass authentication
r'(?:' + ipv4_re + '|' + ipv6_re + '|' + host_re + ')' # localhost or ip
r'(?::\d{2,5})?' # optional port
r'(?:[/?#][^\s]*)?' # resource path
r'\Z', re.IGNORECASE)
```
source: <https://github.com/django/django/blob/master/django/core/validators.py#L74>
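A quick sanity check of the pattern can be done with a condensed rebuild of the host part (this is an illustrative simplification of the snippet above: the IPv4/IPv6 alternatives are omitted for brevity):

```python
import re

ul = '\u00a1-\uffff'  # unicode letters range
hostname_re = r'[a-z' + ul + r'0-9](?:[a-z' + ul + r'0-9-]{0,61}[a-z' + ul + r'0-9])?'
domain_re = r'(?:\.(?!-)[a-z' + ul + r'0-9-]{1,63}(?<!-))*'
tld_re = r'\.(?!-)(?:[a-z' + ul + r'-]{2,63}|xn--[a-z0-9]{1,59})(?<!-)\.?'
host_re = '(' + hostname_re + domain_re + tld_re + '|localhost)'

regex = re.compile(
    r'^(?:http|ftp)s?://'     # scheme
    r'(?:\S+(?::\S*)?@)?'     # optional user:pass authentication
    + host_re +
    r'(?::\d{2,5})?'          # optional port
    r'(?:[/?#][^\s]*)?'       # optional resource path
    r'\Z', re.IGNORECASE)

for url in ('http://example.com', 'https://sub.example.co.uk/path?q=1', 'not a url'):
    print(url, '->', bool(regex.match(url)))
```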
|
827,557
|
I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.
I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL.
I'm trying to use the re Python module to validate against the RFC 3986 regex (<http://www.ietf.org/rfc/rfc3986.txt>)
Below is a snippet which *should* work?
```
p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
self.url = url
return url
```
|
2009/05/06
|
[
"https://Stackoverflow.com/questions/827557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/91163/"
] |
<http://pypi.python.org/pypi/rfc3987> provides regular expressions consistent with the rules in RFC 3986 and RFC 3987 (that is, not with scheme-specific rules).
A regexp for IRI\_reference is:
```
(?P<scheme>[a-zA-Z][a-zA-Z0-9+.-]*):(?://(?P<iauthority>(?:(?P<iuserinfo>(?:(?:[
a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U0002
0000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U
00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009ff
fd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U00
0dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:)*)@)?(?P<ihost>\
\[(?:(?:[0-9A-F]{1,4}:){6}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4]
[0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|::(?:[0
-9A-F]{1,4}:){5}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]
?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|[0-9A-F]{1,4}?::(
?:[0-9A-F]{1,4}:){4}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|
[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F
]{1,4}:)?[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){3}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?
:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[
0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,2}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){2}(?:
[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3
}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,3}[0-9A-F]{1,
4})?::(?:[0-9A-F]{1,4}:)(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0
-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-
9A-F]{1,4}:){,4}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]
|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|
(?:(?:[0-9A-F]{1,4}:){,5}[0-9A-F]{1,4})?::[0-9A-F]{1,4}|(?:(?:[0-9A-F]{1,4}:){,6
}[0-9A-F]{1,4})?::|v[0-9A-F]+\\.(?:[a-zA-Z0-9_.~-]|[!$&'()*+,;=]|:)+)\\]|(?:(?:(
?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][
0-9]?))|(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\
U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U000500
00-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00
090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd
\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=])*)(
?::(?P<port>[0-9]*))?)(?P<ipath>(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\uf
dcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\
U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007f
ffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U0
00bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-
F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)|(?P<ipath>/(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7
ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000
-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U0007
0000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U
000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000eff
fd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff
\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\
U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U000700
00-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U00
0b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd
])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)?)|(?P<ipath>(?:(?:[a-zA-Z0-9._~-]|[\
xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U
00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006ff
fd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U00
0afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-
\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa
0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00
030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd
\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000a
fffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U
000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)|(?P<ipath>))(?:\\?(?P<iquery
>(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U000
1fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\
U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U000900
00-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U00
0d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)|[\
ue000-\uf8ff\U000f0000-\U000ffffd\U00100000-\U0010fffd]|/|\\?)*))?(?:\\#(?P<ifra
gment>(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-
\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050
000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U0
0090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfff
d\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|
@)|/|\\?)*))?|(?:(?://(?P<iauthority>(?:(?P<iuserinfo>(?:(?:[a-zA-Z0-9._~-]|[\xa
0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00
030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd
\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000a
fffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U
000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:)*)@)?(?P<ihost>\\[(?:(?:[0-9A-F]{1,
4}:){6}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-
9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|::(?:[0-9A-F]{1,4}:){5}(?:
[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3
}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|[0-9A-F]{1,4}?::(?:[0-9A-F]{1,4}:){4
}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\
.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:)?[0-9A-F]{1
,4})?::(?:[0-9A-F]{1,4}:){3}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-
4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?
:[0-9A-F]{1,4}:){,2}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){2}(?:[0-9A-F]{1,4}:[0-9A
-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][
0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,3}[0-9A-F]{1,4})?::(?:[0-9A-F]{1
,4}:)(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]
?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,4}[0-
9A-F]{1,4})?::(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[
0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}
:){,5}[0-9A-F]{1,4})?::[0-9A-F]{1,4}|(?:(?:[0-9A-F]{1,4}:){,6}[0-9A-F]{1,4})?::|
v[0-9A-F]+\\.(?:[a-zA-Z0-9_.~-]|[!$&'()*+,;=]|:)+)\\]|(?:(?:(?:25[0-5]|2[0-4][0-
9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))|(?:(?:[a-zA
-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000
-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U0006
0000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U
000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dff
fd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=])*)(?::(?P<port>[0-9]*)
)?)(?P<ipath>(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U0
0010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fff
d\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U000
8fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\
U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*
+,;=]|:|@)*)*)|(?P<ipath>/(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufd
f0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U000400
00-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00
080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd
\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A
-F]|[!$&'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0
-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000
-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U0008
0000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U
000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F
]|[!$&'()*+,;=]|:|@)*)*)?)|(?P<ipath>(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\u
fdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd
\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007
fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U
000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A
-F][0-9A-F]|[!$&'()*+,;=]|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf
\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00
040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd
\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000b
fffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][
0-9A-F]|[!$&'()*+,;=]|:|@)*)*)|(?P<ipath>))(?:\\?(?P<iquery>(?:(?:(?:[a-zA-Z0-9.
_~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U000
2fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\
U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a00
00-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U00
0e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)|[\ue000-\uf8ff\U000f000
0-\U000ffffd\U00100000-\U0010fffd]|/|\\?)*))?(?:\\#(?P<ifragment>(?:(?:(?:[a-zA-
Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-
\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060
000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U0
00a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfff
d\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)|/|\\?)*))?)
```
In one line:
```
(?P<scheme>[a-zA-Z][a-zA-Z0-9+.-]*):(?://(?P<iauthority>(?:(?P<iuserinfo>(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:)*)@)?(?P<ihost>\\[(?:(?:[0-9A-F]{1,4}:){6}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|::(?:[0-9A-F]{1,4}:){5}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|[0-9A-F]{1,4}?::(?:[0-9A-F]{1,4}:){4}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:)?[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){3}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,2}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){2}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,3}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:)(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,4}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,5}[0-9A-F]{1,4})?::[0-9A-F]{1,4}|(?:(?:[0-9A-F]{1,4}:){,6}[0-9A-F]{1,4})?::|v[0-9A-F]+\\.(?:[a-zA-Z0-9_.~-]|[!$&'()*+,;=]|:)+)\\]|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))|(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\
U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=])*)(?::(?P<port>[0-9]*))?)(?P<ipath>(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)|(?P<ipath>/(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)?)|(?P<ipath>(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa
0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)|(?P<ipath>))(?:\\?(?P<iquery>(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)|[\ue000-\uf8ff\U000f0000-\U000ffffd\U00100000-\U0010fffd]|/|\\?)*))?(?:\\#(?P<ifragment>(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)|/|\\?)*))?|(?:(?://(?P<iauthority>(?:(?P<iuserinfo>(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:)*)@)?(?P<ihost>\\[(?:(?:[0-9A-F]{1,4}:){6}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|::(?:[0-9A-F]{1,4}:){5}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3
}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|[0-9A-F]{1,4}?::(?:[0-9A-F]{1,4}:){4}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:)?[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){3}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,2}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){2}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,3}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:)(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,4}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,5}[0-9A-F]{1,4})?::[0-9A-F]{1,4}|(?:(?:[0-9A-F]{1,4}:){,6}[0-9A-F]{1,4})?::|v[0-9A-F]+\\.(?:[a-zA-Z0-9_.~-]|[!$&'()*+,;=]|:)+)\\]|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))|(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=])*)(?::(?P<port>[0-9]*))?)(?P<ipath>(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*
+,;=]|:|@)*)*)|(?P<ipath>/(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)?)|(?P<ipath>(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)|(?P<ipath>))(?:\\?(?P<iquery>(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U00
0e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)|[\ue000-\uf8ff\U000f0000-\U000ffffd\U00100000-\U0010fffd]|/|\\?)*))?(?:\\#(?P<ifragment>(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)|/|\\?)*))?)
```
|
The regex provided should match any URL of the form <http://www.ietf.org/rfc/rfc3986.txt>, and it does when tested in the Python interpreter.
What format do the URLs you've been having trouble parsing have?
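For what it's worth, the appendix-B pattern from the question is a component splitter rather than a validator; it matches almost any string, which is likely why the check always succeeds. A quick illustration:

```python
import re

# The RFC 3986 appendix-B pattern splits a URI into components;
# it will "match" nearly any input, so a bare match() is not a validity test.
p = re.compile(r'^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')

m = p.match('http://www.ietf.org/rfc/rfc3986.txt')
print(m.group(2), m.group(4), m.group(5))  # scheme, authority, path

# Even a non-URL still matches (everything falls into the path group):
print(bool(p.match('definitely not a url')))
```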
|
827,557
|
I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.
I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL.
I'm trying to use the re Python module to validate against the RFC 3986 regex (<http://www.ietf.org/rfc/rfc3986.txt>)
Below is a snippet which *should* work?
```
p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
self.url = url
return url
```
|
2009/05/06
|
[
"https://Stackoverflow.com/questions/827557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/91163/"
] |
I admit, I find your regular expression totally incomprehensible. I wonder if you could use urlparse instead? Something like:
```
import string
import urlparse  # Python 2; in Python 3 use urllib.parse

pieces = urlparse.urlparse(url)
assert all([pieces.scheme, pieces.netloc])
assert set(pieces.netloc) <= set(string.letters + string.digits + '-.') # and others?
assert pieces.scheme in ['http', 'https', 'ftp'] # etc.
```
It might be slower, and maybe you'll miss conditions, but it seems (to me) a lot easier to read and debug than [a regular expression for URLs](https://blog.codinghorror.com/the-problem-with-urls/).
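Under Python 3 the same idea could be sketched as follows (the helper name is hypothetical; `urllib.parse` replaces the old `urlparse` module and `string.ascii_letters` replaces `string.letters`):

```python
import string
from urllib.parse import urlparse

def is_probable_url(url):
    # Parse and require both a scheme and a network location.
    pieces = urlparse(url)
    if not (pieces.scheme and pieces.netloc):
        return False
    # Restrict the host to a conservative character set (':' allows ports).
    allowed = set(string.ascii_letters + string.digits + '-.:')
    if not set(pieces.netloc) <= allowed:
        return False
    return pieces.scheme in ('http', 'https', 'ftp')

print(is_probable_url('http://example.com'))  # True
print(is_probable_url('not a url'))           # False
```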
|
The regex provided should match any URL of the form <http://www.ietf.org/rfc/rfc3986.txt>, and it does when tested in the Python interpreter.
What format do the URLs you've been having trouble parsing have?
|
827,557
|
I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.
I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL.
I'm trying to use the re Python module to validate against the RFC 3986 regex (<http://www.ietf.org/rfc/rfc3986.txt>)
Below is a snippet which *should* work?
```
p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
self.url = url
return url
```
|
2009/05/06
|
[
"https://Stackoverflow.com/questions/827557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/91163/"
] |
<http://pypi.python.org/pypi/rfc3987> provides regular expressions consistent with the rules in RFC 3986 and RFC 3987 (that is, not with scheme-specific rules).
A regexp for IRI\_reference is:
```
(?P<scheme>[a-zA-Z][a-zA-Z0-9+.-]*):(?://(?P<iauthority>(?:(?P<iuserinfo>(?:(?:[
a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U0002
0000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U
00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009ff
fd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U00
0dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:)*)@)?(?P<ihost>\
\[(?:(?:[0-9A-F]{1,4}:){6}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4]
[0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|::(?:[0
-9A-F]{1,4}:){5}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]
?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|[0-9A-F]{1,4}?::(
?:[0-9A-F]{1,4}:){4}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|
[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F
]{1,4}:)?[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){3}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?
:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[
0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,2}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){2}(?:
[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3
}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,3}[0-9A-F]{1,
4})?::(?:[0-9A-F]{1,4}:)(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0
-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-
9A-F]{1,4}:){,4}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]
|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|
(?:(?:[0-9A-F]{1,4}:){,5}[0-9A-F]{1,4})?::[0-9A-F]{1,4}|(?:(?:[0-9A-F]{1,4}:){,6
}[0-9A-F]{1,4})?::|v[0-9A-F]+\\.(?:[a-zA-Z0-9_.~-]|[!$&'()*+,;=]|:)+)\\]|(?:(?:(
?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][
0-9]?))|(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\
U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U000500
00-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00
090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd
\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=])*)(
?::(?P<port>[0-9]*))?)(?P<ipath>(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\uf
dcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\
U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007f
ffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U0
00bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-
F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)|(?P<ipath>/(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7
ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000
-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U0007
0000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U
000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000eff
fd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff
\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\
U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U000700
00-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U00
0b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd
])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)?)|(?P<ipath>(?:(?:[a-zA-Z0-9._~-]|[\
xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U
00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006ff
fd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U00
0afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-
\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa
0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00
030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd
\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000a
fffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U
000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)|(?P<ipath>))(?:\\?(?P<iquery
>(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U000
1fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\
U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U000900
00-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U00
0d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)|[\
ue000-\uf8ff\U000f0000-\U000ffffd\U00100000-\U0010fffd]|/|\\?)*))?(?:\\#(?P<ifra
gment>(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-
\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050
000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U0
0090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfff
d\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|
@)|/|\\?)*))?|(?:(?://(?P<iauthority>(?:(?P<iuserinfo>(?:(?:[a-zA-Z0-9._~-]|[\xa
0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00
030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd
\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000a
fffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U
000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:)*)@)?(?P<ihost>\\[(?:(?:[0-9A-F]{1,
4}:){6}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-
9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|::(?:[0-9A-F]{1,4}:){5}(?:
[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3
}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|[0-9A-F]{1,4}?::(?:[0-9A-F]{1,4}:){4
}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\
.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:)?[0-9A-F]{1
,4})?::(?:[0-9A-F]{1,4}:){3}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-
4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?
:[0-9A-F]{1,4}:){,2}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){2}(?:[0-9A-F]{1,4}:[0-9A
-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][
0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,3}[0-9A-F]{1,4})?::(?:[0-9A-F]{1
,4}:)(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]
?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,4}[0-
9A-F]{1,4})?::(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[
0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}
:){,5}[0-9A-F]{1,4})?::[0-9A-F]{1,4}|(?:(?:[0-9A-F]{1,4}:){,6}[0-9A-F]{1,4})?::|
v[0-9A-F]+\\.(?:[a-zA-Z0-9_.~-]|[!$&'()*+,;=]|:)+)\\]|(?:(?:(?:25[0-5]|2[0-4][0-
9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))|(?:(?:[a-zA
-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000
-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U0006
0000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U
000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dff
fd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=])*)(?::(?P<port>[0-9]*)
)?)(?P<ipath>(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U0
0010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fff
d\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U000
8fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\
U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*
+,;=]|:|@)*)*)|(?P<ipath>/(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufd
f0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U000400
00-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00
080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd
\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A
-F]|[!$&'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0
-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000
-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U0008
0000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U
000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F
]|[!$&'()*+,;=]|:|@)*)*)?)|(?P<ipath>(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\u
fdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd
\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007
fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U
000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A
-F][0-9A-F]|[!$&'()*+,;=]|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf
\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00
040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd
\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000b
fffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][
0-9A-F]|[!$&'()*+,;=]|:|@)*)*)|(?P<ipath>))(?:\\?(?P<iquery>(?:(?:(?:[a-zA-Z0-9.
_~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U000
2fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\
U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a00
00-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U00
0e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)|[\ue000-\uf8ff\U000f000
0-\U000ffffd\U00100000-\U0010fffd]|/|\\?)*))?(?:\\#(?P<ifragment>(?:(?:(?:[a-zA-
Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-
\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060
000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U0
00a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfff
d\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)|/|\\?)*))?)
```
In one line:
```
(?P<scheme>[a-zA-Z][a-zA-Z0-9+.-]*):(?://(?P<iauthority>(?:(?P<iuserinfo>(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:)*)@)?(?P<ihost>\\[(?:(?:[0-9A-F]{1,4}:){6}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|::(?:[0-9A-F]{1,4}:){5}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|[0-9A-F]{1,4}?::(?:[0-9A-F]{1,4}:){4}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:)?[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){3}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,2}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){2}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,3}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:)(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,4}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,5}[0-9A-F]{1,4})?::[0-9A-F]{1,4}|(?:(?:[0-9A-F]{1,4}:){,6}[0-9A-F]{1,4})?::|v[0-9A-F]+\\.(?:[a-zA-Z0-9_.~-]|[!$&'()*+,;=]|:)+)\\]|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))|(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\
U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=])*)(?::(?P<port>[0-9]*))?)(?P<ipath>(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)|(?P<ipath>/(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)?)|(?P<ipath>(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa
0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)|(?P<ipath>))(?:\\?(?P<iquery>(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)|[\ue000-\uf8ff\U000f0000-\U000ffffd\U00100000-\U0010fffd]|/|\\?)*))?(?:\\#(?P<ifragment>(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)|/|\\?)*))?|(?:(?://(?P<iauthority>(?:(?P<iuserinfo>(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:)*)@)?(?P<ihost>\\[(?:(?:[0-9A-F]{1,4}:){6}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|::(?:[0-9A-F]{1,4}:){5}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3
}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|[0-9A-F]{1,4}?::(?:[0-9A-F]{1,4}:){4}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:)?[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){3}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,2}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:){2}(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,3}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:)(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,4}[0-9A-F]{1,4})?::(?:[0-9A-F]{1,4}:[0-9A-F]{1,4}|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))|(?:(?:[0-9A-F]{1,4}:){,5}[0-9A-F]{1,4})?::[0-9A-F]{1,4}|(?:(?:[0-9A-F]{1,4}:){,6}[0-9A-F]{1,4})?::|v[0-9A-F]+\\.(?:[a-zA-Z0-9_.~-]|[!$&'()*+,;=]|:)+)\\]|(?:(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))|(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=])*)(?::(?P<port>[0-9]*))?)(?P<ipath>(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*
+,;=]|:|@)*)*)|(?P<ipath>/(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)?)|(?P<ipath>(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|@)+(?:/(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)*)*)|(?P<ipath>))(?:\\?(?P<iquery>(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U00
0e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)|[\ue000-\uf8ff\U000f0000-\U000ffffd\U00100000-\U0010fffd]|/|\\?)*))?(?:\\#(?P<ifragment>(?:(?:(?:[a-zA-Z0-9._~-]|[\xa0-\ud7ff\uf900-\ufdcf\ufdf0-\uffef\U00010000-\U0001fffd\U00020000-\U0002fffd\U00030000-\U0003fffd\U00040000-\U0004fffd\U00050000-\U0005fffd\U00060000-\U0006fffd\U00070000-\U0007fffd\U00080000-\U0008fffd\U00090000-\U0009fffd\U000a0000-\U000afffd\U000b0000-\U000bfffd\U000c0000-\U000cfffd\U000d0000-\U000dfffd\U000e1000-\U000efffd])|%[0-9A-F][0-9A-F]|[!$&'()*+,;=]|:|@)|/|\\?)*))?)
```
|
I've needed to do this many times over the years and always end up copying someone else's regular expression who has thought about it way more than I *want* to think about it.
Having said that, there is a regex in the Django forms code which should do the trick:
<http://code.djangoproject.com/browser/django/trunk/django/forms/fields.py#L534>
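As a hedged sketch of that approach — this is a simplified stand-in *in the spirit of* Django's URLField regex, not the actual, much stricter pattern that lives in `django/forms/fields.py`:

```python
import re

# Simplified stand-in for a Django-style URL regex (assumption: the real
# pattern in django/forms/fields.py is stricter; this only shows the idea).
url_re = re.compile(
    r'^https?://'                                   # scheme
    r'(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+'  # dotted domain labels
    r'[A-Z]{2,6}'                                   # top-level domain
    r'(?::\d+)?'                                    # optional port
    r'(?:/\S*)?$',                                  # optional path
    re.IGNORECASE)

print(bool(url_re.match('http://www.example.com/feed.rss')))  # True
print(bool(url_re.match('not-a-url')))                        # False
```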
|
827,557
|
I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.
I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL.
I'm trying to use the re python module to validate off of the RFC 3986 Reg-ex (<http://www.ietf.org/rfc/rfc3986.txt>)
Below is a snippet which *should* work?
```
p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
self.url = url
return url
```
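The pattern above comes from RFC 3986 Appendix B, which *splits* a URI reference rather than validating one — every component is optional, so `p.match` succeeds for any input, which is why the `if m:` test is always true:

```python
import re

# RFC 3986, Appendix B: a URI *splitter*, not a validator.
p = re.compile(r'^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')

print(bool(p.match('http://example.com/feed')))  # True
print(bool(p.match('not a url at all')))         # True
print(bool(p.match('')))                         # True -- even the empty string
```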
|
2009/05/06
|
[
"https://Stackoverflow.com/questions/827557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/91163/"
] |
Here's the complete regexp to parse a URL.
```none
(?:https?://(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)
\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d
+)){3}))(?::(?:\d+))?)(?:/(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA
-F\d]{2}))|[;:@&=])*)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d
]{2}))|[;:@&=])*))*)(?:\?(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d
]{2}))|[;:@&=])*))?)?)|(?:s?ftp://(?:(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),
]|(?:%[a-fA-F\d]{2}))|[;?&=])*)(?::(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:
%[a-fA-F\d]{2}))|[;?&=])*))?@)?(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\
d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(
?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d+))?))(?:/(?:(?:(?:(?:[a-zA-Z\d$\-
_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&=])*)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!
*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&=])*))*)(?:;type=[AIDaid])?)?)|(?:news
:(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[;/?:&=])+@(?:
(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:
(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3})))|(?:[a-zA
-Z](?:[a-zA-Z\d]|[_.+-])*)|\*))|(?:nntp://(?:(?:(?:(?:(?:[a-zA-Z\d](?:
(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA
-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d+))?)/(?:[a-zA-Z](?:[a-
zA-Z\d]|[_.+-])*)(?:/(?:\d+))?)|(?:telnet://(?:(?:(?:(?:(?:[a-zA-Z\d$\
-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[;?&=])*)(?::(?:(?:(?:[a-zA-Z\d$\-_.+!
*'(),]|(?:%[a-fA-F\d]{2}))|[;?&=])*))?@)?(?:(?:(?:(?:(?:[a-zA-Z\d](?:(
?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-
Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d+))?))/?)|(?:gopher://(?
:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z
](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?
:\d+))?)(?:/(?:[a-zA-Z\d$\-_.+!*'(),;/?:@&=]|(?:%[a-fA-F\d]{2}))(?:(?:
(?:[a-zA-Z\d$\-_.+!*'(),;/?:@&=]|(?:%[a-fA-F\d]{2}))*)(?:%09(?:(?:(?:[
a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[;:@&=])*)(?:%09(?:(?:[a-zA-
Z\d$\-_.+!*'(),;/?:@&=]|(?:%[a-fA-F\d]{2}))*))?)?)?)?)|(?:wais://(?:(?
:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?
:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d
+))?)/(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))*)(?:(?:/(?:(?:[
a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))*)/(?:(?:[a-zA-Z\d$\-_.+!*'()
,]|(?:%[a-fA-F\d]{2}))*))|\?(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-
F\d]{2}))|[;:@&=])*))?)|(?:mailto:(?:(?:[a-zA-Z\d$\-_.+!*'(),;/?:@&=]|
(?:%[a-fA-F\d]{2}))+))|(?:file://(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-
Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))
|(?:(?:\d+)(?:\.(?:\d+)){3}))|localhost)?/(?:(?:(?:(?:[a-zA-Z\d$\-_.+!
*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&=])*)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'()
,]|(?:%[a-fA-F\d]{2}))|[?:@&=])*))*))|(?:prospero://(?:(?:(?:(?:(?:[a-
zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d
]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d+))?)/(?:(?:(
?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&=])*)(?:/(?:(?:(?
:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&=])*))*)(?:(?:;(?:(?:
(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&])*)=(?:(?:(?:[a-zA
-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[?:@&])*)))*)|(?:ldap://(?:(?:(?
:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](?
:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\d
+))?))?/(?:(?:(?:(?:(?:(?:(?:[a-zA-Z\d]|%(?:3\d|[46][a-fA-F\d]|[57][Aa
\d]))|(?:%20))+|(?:OID|oid)\.(?:(?:\d+)(?:\.(?:\d+))*))(?:(?:%0[Aa])?(
?:%20)*)=(?:(?:%0[Aa])?(?:%20)*))?(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-
fA-F\d]{2}))*))(?:(?:(?:%0[Aa])?(?:%20)*)\+(?:(?:%0[Aa])?(?:%20)*)(?:(
?:(?:(?:(?:[a-zA-Z\d]|%(?:3\d|[46][a-fA-F\d]|[57][Aa\d]))|(?:%20))+|(?
:OID|oid)\.(?:(?:\d+)(?:\.(?:\d+))*))(?:(?:%0[Aa])?(?:%20)*)=(?:(?:%0[
Aa])?(?:%20)*))?(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))*)))*)
(?:(?:(?:(?:%0[Aa])?(?:%20)*)(?:[;,])(?:(?:%0[Aa])?(?:%20)*))(?:(?:(?:
(?:(?:(?:[a-zA-Z\d]|%(?:3\d|[46][a-fA-F\d]|[57][Aa\d]))|(?:%20))+|(?:O
ID|oid)\.(?:(?:\d+)(?:\.(?:\d+))*))(?:(?:%0[Aa])?(?:%20)*)=(?:(?:%0[Aa
])?(?:%20)*))?(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))*))(?:(?
:(?:%0[Aa])?(?:%20)*)\+(?:(?:%0[Aa])?(?:%20)*)(?:(?:(?:(?:(?:[a-zA-Z\d
]|%(?:3\d|[46][a-fA-F\d]|[57][Aa\d]))|(?:%20))+|(?:OID|oid)\.(?:(?:\d+
)(?:\.(?:\d+))*))(?:(?:%0[Aa])?(?:%20)*)=(?:(?:%0[Aa])?(?:%20)*))?(?:(
?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))*)))*))*(?:(?:(?:%0[Aa])?(
?:%20)*)(?:[;,])(?:(?:%0[Aa])?(?:%20)*))?)(?:\?(?:(?:(?:(?:[a-zA-Z\d$\
-_.+!*'(),]|(?:%[a-fA-F\d]{2}))+)(?:,(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%
[a-fA-F\d]{2}))+))*)?)(?:\?(?:base|one|sub)(?:\?(?:((?:[a-zA-Z\d$\-_.+
!*'(),;/?:@&=]|(?:%[a-fA-F\d]{2}))+)))?)?)?)|(?:(?:z39\.50[rs])://(?:(
?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?)\.)*(?:[a-zA-Z](
?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\.(?:\d+)){3}))(?::(?:\
d+))?)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))+)(?:\+(?
:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))+))*(?:\?(?:(?:[a-zA-Z\d
$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))+))?)?(?:;esn=(?:(?:[a-zA-Z\d$\-_.+!*
'(),]|(?:%[a-fA-F\d]{2}))+))?(?:;rs=(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[
a-fA-F\d]{2}))+)(?:\+(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))+
))*)?))|(?:cid:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[;?
:@&=])*))|(?:mid:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[
;?:@&=])*)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[;?:
@&=])*))?)|(?:vemmi://(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-
zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)
(?:\.(?:\d+)){3}))(?::(?:\d+))?)(?:/(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?
:%[a-fA-F\d]{2}))|[/?:@&=])*)(?:(?:;(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?
:%[a-fA-F\d]{2}))|[/?:@&])*)=(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA
-F\d]{2}))|[/?:@&])*))*))?)|(?:imap://(?:(?:(?:(?:(?:(?:(?:[a-zA-Z\d$\
-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[&=~])+)(?:(?:;[Aa][Uu][Tt][Hh]=(?:\*|
(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[&=~])+))))?)|(?:(
?:;[Aa][Uu][Tt][Hh]=(?:\*|(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\
d]{2}))|[&=~])+)))(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}
))|[&=~])+))?))@)?(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a-zA-Z
\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+)(?:\
.(?:\d+)){3}))(?::(?:\d+))?))/(?:(?:(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]
|(?:%[a-fA-F\d]{2}))|[&=~:@/])+)?;[Tt][Yy][Pp][Ee]=(?:[Ll](?:[Ii][Ss][
Tt]|[Ss][Uu][Bb])))|(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{
2}))|[&=~:@/])+)(?:\?(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}
))|[&=~:@/])+))?(?:(?:;[Uu][Ii][Dd][Vv][Aa][Ll][Ii][Dd][Ii][Tt][Yy]=(?
:[1-9]\d*)))?)|(?:(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|
[&=~:@/])+)(?:(?:;[Uu][Ii][Dd][Vv][Aa][Ll][Ii][Dd][Ii][Tt][Yy]=(?:[1-9
]\d*)))?(?:/;[Uu][Ii][Dd]=(?:[1-9]\d*))(?:(?:/;[Ss][Ee][Cc][Tt][Ii][Oo
][Nn]=(?:(?:(?:[a-zA-Z\d$\-_.+!*'(),]|(?:%[a-fA-F\d]{2}))|[&=~:@/])+))
)?)))?)|(?:nfs:(?:(?://(?:(?:(?:(?:(?:[a-zA-Z\d](?:(?:[a-zA-Z\d]|-)*[a
-zA-Z\d])?)\.)*(?:[a-zA-Z](?:(?:[a-zA-Z\d]|-)*[a-zA-Z\d])?))|(?:(?:\d+
)(?:\.(?:\d+)){3}))(?::(?:\d+))?)(?:(?:/(?:(?:(?:(?:(?:[a-zA-Z\d\$\-_.
!~*'(),])|(?:%[a-fA-F\d]{2})|[:@&=+])*)(?:/(?:(?:(?:[a-zA-Z\d\$\-_.!~*
'(),])|(?:%[a-fA-F\d]{2})|[:@&=+])*))*)?)))?)|(?:/(?:(?:(?:(?:(?:[a-zA
-Z\d\$\-_.!~*'(),])|(?:%[a-fA-F\d]{2})|[:@&=+])*)(?:/(?:(?:(?:[a-zA-Z\
d\$\-_.!~*'(),])|(?:%[a-fA-F\d]{2})|[:@&=+])*))*)?))|(?:(?:(?:(?:(?:[a
-zA-Z\d\$\-_.!~*'(),])|(?:%[a-fA-F\d]{2})|[:@&=+])*)(?:/(?:(?:(?:[a-zA
-Z\d\$\-_.!~*'(),])|(?:%[a-fA-F\d]{2})|[:@&=+])*))*)?)))
```
Given its complexity, I think you should go the urlparse way.
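A minimal sketch of that suggestion, letting the standard library do the splitting that the regex above encodes by hand (Python 3 module path shown; in Python 2 the module is `urlparse`):

```python
from urllib.parse import urlparse  # Python 2: from urlparse import urlparse

# urlparse yields the same components the pseudo-BNF below names by hand.
parts = urlparse('http://user@example.com:8080/path/page?q=1#top')
print(parts.scheme)    # 'http'
print(parts.netloc)    # 'user@example.com:8080'  (login + hostport)
print(parts.path)      # '/path/page'
print(parts.query)     # 'q=1'
print(parts.fragment)  # 'top'
```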
For completeness, here's the pseudo-BNF of the above regex (as a documentation):
```
; The generic form of a URL is:
genericurl = scheme ":" schemepart
; Specific predefined schemes are defined here; new schemes
; may be registered with IANA
url = httpurl | ftpurl | newsurl |
nntpurl | telneturl | gopherurl |
waisurl | mailtourl | fileurl |
prosperourl | otherurl
; new schemes follow the general syntax
otherurl = genericurl
; the scheme is in lower case; interpreters should use case-ignore
scheme = 1*[ lowalpha | digit | "+" | "-" | "." ]
schemepart = *xchar | ip-schemepart
; URL schemeparts for ip based protocols:
ip-schemepart = "//" login [ "/" urlpath ]
login = [ user [ ":" password ] "@" ] hostport
hostport = host [ ":" port ]
host = hostname | hostnumber
hostname = *[ domainlabel "." ] toplabel
domainlabel = alphadigit | alphadigit *[ alphadigit | "-" ] alphadigit
toplabel = alpha | alpha *[ alphadigit | "-" ] alphadigit
alphadigit = alpha | digit
hostnumber = digits "." digits "." digits "." digits
port = digits
user = *[ uchar | ";" | "?" | "&" | "=" ]
password = *[ uchar | ";" | "?" | "&" | "=" ]
urlpath = *xchar ; depends on protocol see section 3.1
; The predefined schemes:
; FTP (see also RFC959)
ftpurl = "ftp://" login [ "/" fpath [ ";type=" ftptype ]]
fpath = fsegment *[ "/" fsegment ]
fsegment = *[ uchar | "?" | ":" | "@" | "&" | "=" ]
ftptype = "A" | "I" | "D" | "a" | "i" | "d"
; FILE
fileurl = "file://" [ host | "localhost" ] "/" fpath
; HTTP
httpurl = "http://" hostport [ "/" hpath [ "?" search ]]
hpath = hsegment *[ "/" hsegment ]
hsegment = *[ uchar | ";" | ":" | "@" | "&" | "=" ]
search = *[ uchar | ";" | ":" | "@" | "&" | "=" ]
; GOPHER (see also RFC1436)
gopherurl = "gopher://" hostport [ / [ gtype [ selector
[ "%09" search [ "%09" gopher+_string ] ] ] ] ]
gtype = xchar
selector = *xchar
gopher+_string = *xchar
; MAILTO (see also RFC822)
mailtourl = "mailto:" encoded822addr
encoded822addr = 1*xchar ; further defined in RFC822
; NEWS (see also RFC1036)
newsurl = "news:" grouppart
grouppart = "*" | group | article
group = alpha *[ alpha | digit | "-" | "." | "+" | "_" ]
article = 1*[ uchar | ";" | "/" | "?" | ":" | "&" | "=" ] "@" host
; NNTP (see also RFC977)
nntpurl = "nntp://" hostport "/" group [ "/" digits ]
; TELNET
telneturl = "telnet://" login [ "/" ]
; WAIS (see also RFC1625)
waisurl = waisdatabase | waisindex | waisdoc
waisdatabase = "wais://" hostport "/" database
waisindex = "wais://" hostport "/" database "?" search
waisdoc = "wais://" hostport "/" database "/" wtype "/" wpath
database = *uchar
wtype = *uchar
wpath = *uchar
; PROSPERO
prosperourl = "prospero://" hostport "/" ppath *[ fieldspec ]
ppath = psegment *[ "/" psegment ]
psegment = *[ uchar | "?" | ":" | "@" | "&" | "=" ]
fieldspec = ";" fieldname "=" fieldvalue
fieldname = *[ uchar | "?" | ":" | "@" | "&" ]
fieldvalue = *[ uchar | "?" | ":" | "@" | "&" ]
; Miscellaneous definitions
lowalpha = "a" | "b" | "c" | "d" | "e" | "f" | "g" | "h" |
"i" | "j" | "k" | "l" | "m" | "n" | "o" | "p" |
"q" | "r" | "s" | "t" | "u" | "v" | "w" | "x" |
"y" | "z"
hialpha = "A" | "B" | "C" | "D" | "E" | "F" | "G" | "H" | "I" |
"J" | "K" | "L" | "M" | "N" | "O" | "P" | "Q" | "R" |
"S" | "T" | "U" | "V" | "W" | "X" | "Y" | "Z"
alpha = lowalpha | hialpha
digit = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" |
"8" | "9"
safe = "$" | "-" | "_" | "." | "+"
extra = "!" | "*" | "'" | "(" | ")" | ","
national = "{" | "}" | "|" | "\" | "^" | "~" | "[" | "]" | "`"
punctuation = "<" | ">" | "#" | "%" | <">
reserved = ";" | "/" | "?" | ":" | "@" | "&" | "="
hex = digit | "A" | "B" | "C" | "D" | "E" | "F" |
"a" | "b" | "c" | "d" | "e" | "f"
escape = "%" hex hex
unreserved = alpha | digit | safe | extra
uchar = unreserved | escape
xchar = unreserved | reserved | escape
digits = 1*digit
```
|
I've needed to do this many times over the years and always end up copying someone else's regular expression who has thought about it way more than I *want* to think about it.
Having said that, there is a regex in the Django forms code which should do the trick:
<http://code.djangoproject.com/browser/django/trunk/django/forms/fields.py#L534>
|
827,557
|
I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.
I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL.
I'm trying to use the re python module to validate off of the RFC 3986 Reg-ex (<http://www.ietf.org/rfc/rfc3986.txt>)
Below is a snippet which *should* work?
```
p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
self.url = url
return url
```
|
2009/05/06
|
[
"https://Stackoverflow.com/questions/827557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/91163/"
] |
An easy way to parse (and validate) URL's is the `urlparse` ([py2](https://docs.python.org/2/library/urlparse.html), [py3](https://docs.python.org/3.0/library/urllib.parse.html)) module.
A regex is too much work.
---
There's no "validate" method because almost anything is a valid URL. There are some punctuation rules for splitting it up. Absent any punctuation, you still have a valid URL.
Check the RFC carefully and see if you can construct an "invalid" URL. The rules are very flexible.
For example `:::::` is a valid URL. The path is `":::::"`. A pretty stupid filename, but a valid filename.
Also, `/////` is a valid URL. The netloc ("hostname") is `""`. The path is `"///"`. Again, stupid. Also valid. This URL normalizes to `"///"` which is the equivalent.
Something like `"bad://///worse/////"` is perfectly valid. Dumb but valid.
**Bottom Line**. Parse it, and look at the pieces to see if they're displeasing in some way.
Do you want the scheme to always be "http"? Do you want the netloc to always be "www.somename.somedomain"? Do you want the path to look unix-like? Or windows-like? Do you want to remove the query string? Or preserve it?
These are not RFC-specified validations. These are validations unique to your application.
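A minimal sketch of that advice — parse first, then apply the application's own policy to the pieces (the allowed-scheme set here is an illustrative assumption, not an RFC rule):

```python
from urllib.parse import urlparse  # Python 2: from urlparse import urlparse

ALLOWED_SCHEMES = {'http', 'https'}  # application policy, not an RFC rule

def is_acceptable(url):
    """Parse first, then reject the pieces *this* application dislikes."""
    parts = urlparse(url)
    return parts.scheme in ALLOWED_SCHEMES and bool(parts.netloc)

# Almost anything parses without complaint...
print(urlparse(':::::').path)    # ':::::'
print(urlparse('/////').netloc)  # ''   (the path is '///')

# ...so acceptance is the application's call:
print(is_acceptable('http://example.com/feed.rss'))  # True
print(is_acceptable(':::::'))                        # False
```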
|
```
urlfinders = [
re.compile("([0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}|(((news|telnet|nttp|file|http|ftp|https)://)|(www|ftp)[-A-Za-z0-9]*\\.)[-A-Za-z0-9\\.]+)(:[0-9]*)?/[-A-Za-z0-9_\\$\\.\\+\\!\\*\\(\\),;:@&=\\?/~\\#\\%]*[^]'\\.}>\\),\\\"]"),
re.compile("([0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}|(((news|telnet|nttp|file|http|ftp|https)://)|(www|ftp)[-A-Za-z0-9]*\\.)[-A-Za-z0-9\\.]+)(:[0-9]*)?"),
re.compile("(~/|/|\\./)([-A-Za-z0-9_\\$\\.\\+\\!\\*\\(\\),;:@&=\\?/~\\#\\%]|\\\\)+"),
re.compile("'\\<((mailto:)|)[-A-Za-z0-9\\.]+@[-A-Za-z0-9\\.]+"),
]
```
NOTE: As ugly as it looks in your browser, just copy-paste it and the formatting should be fine.
Found at the python mailing lists and used for the gnome-terminal
source: <http://mail.python.org/pipermail/python-list/2007-January/595436.html>
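A hedged usage sketch: the pattern below is a simplified stand-in for the `urlfinders` list above, just to show how such finders are applied to free text:

```python
import re

# Simplified stand-in for the first urlfinders pattern above:
# a scheme followed by any run of non-whitespace, non-quote characters.
urlfinder = re.compile(r'(?:https?|ftp|news|telnet|file)://[^\s<>"\']+')

text = 'see http://example.com/page and ftp://host/file for details'
matches = urlfinder.findall(text)
print(matches)  # ['http://example.com/page', 'ftp://host/file']
```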
|
827,557
|
I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.
I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL.
I'm trying to use the re python module to validate off of the RFC 3986 Reg-ex (<http://www.ietf.org/rfc/rfc3986.txt>)
Below is a snippet which *should* work:
```
p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
self.url = url
return url
```
|
2009/05/06
|
[
"https://Stackoverflow.com/questions/827557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/91163/"
] |
An easy way to parse (and validate) URLs is the `urlparse` ([py2](https://docs.python.org/2/library/urlparse.html), [py3](https://docs.python.org/3.0/library/urllib.parse.html)) module.
A regex is too much work.
---
There's no "validate" method because almost anything is a valid URL. There are some punctuation rules for splitting it up. Absent any punctuation, you still have a valid URL.
Check the RFC carefully and see if you can construct an "invalid" URL. The rules are very flexible.
For example `:::::` is a valid URL. The path is `":::::"`. A pretty stupid filename, but a valid filename.
Also, `/////` is a valid URL. The netloc ("hostname") is `""`. The path is `"///"`. Again, stupid. Also valid. This URL normalizes to `"///"` which is the equivalent.
Something like `"bad://///worse/////"` is perfectly valid. Dumb but valid.
**Bottom Line**. Parse it, and look at the pieces to see if they're displeasing in some way.
Do you want the scheme to always be "http"? Do you want the netloc to always be "www.somename.somedomain"? Do you want the path to look unix-like? Or windows-like? Do you want to remove the query string? Or preserve it?
These are not RFC-specified validations. These are validations unique to your application.
|
modified django url validation regex:
=====================================
```
import re
ul = '\u00a1-\uffff' # unicode letters range (must not be a raw string)
# IP patterns
ipv4_re = r'(?:25[0-5]|2[0-4]\d|[0-1]?\d?\d)(?:\.(?:25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}'
ipv6_re = r'\[[0-9a-f:\.]+\]'
# Host patterns
hostname_re = r'[a-z' + ul + r'0-9](?:[a-z' + ul + r'0-9-]{0,61}[a-z' + ul + r'0-9])?'
domain_re = r'(?:\.(?!-)[a-z' + ul + r'0-9-]{1,63}(?<!-))*' # domain names have max length of 63 characters
tld_re = (
r'\.' # dot
r'(?!-)' # can't start with a dash
r'(?:[a-z' + ul + '-]{2,63}' # domain label
r'|xn--[a-z0-9]{1,59})' # or punycode label
r'(?<!-)' # can't end with a dash
r'\.?' # may have a trailing dot
)
host_re = '(' + hostname_re + domain_re + tld_re + '|localhost)'
regex = re.compile(
r'^(?:http|ftp)s?://' # http(s):// or ftp(s)://
r'(?:\S+(?::\S*)?@)?' # user:pass authentication
r'(?:' + ipv4_re + '|' + ipv6_re + '|' + host_re + ')' # localhost or ip
r'(?::\d{2,5})?' # optional port
r'(?:[/?#][^\s]*)?' # resource path
r'\Z', re.IGNORECASE)
```
source: <https://github.com/django/django/blob/master/django/core/validators.py#L74>
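For a quick sanity check, here is a much-reduced pattern in the same spirit. This is a sketch, not the full Django validator above; it skips the hostname-label and punycode rules:

```python
import re

# Reduced sketch of the Django-style validator: scheme, host,
# optional port, optional path -- far less thorough than the full regex above.
simple_url_re = re.compile(
    r'^(?:http|ftp)s?://'   # scheme
    r'[^\s/:?#]+'           # host (no label/TLD validation here)
    r'(?::\d{2,5})?'        # optional port
    r'(?:[/?#]\S*)?$',      # optional resource path
    re.IGNORECASE)

print(bool(simple_url_re.match('https://example.com:8080/a?q=1')))  # True
print(bool(simple_url_re.match('not a url')))                       # False
```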
|
827,557
|
I'm building an app on Google App Engine. I'm incredibly new to Python and have been beating my head against the following problem for the past 3 days.
I have a class to represent an RSS Feed and in this class I have a method called setUrl. Input to this method is a URL.
I'm trying to use the re python module to validate off of the RFC 3986 Reg-ex (<http://www.ietf.org/rfc/rfc3986.txt>)
Below is a snippet which *should* work:
```
p = re.compile('^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')
m = p.match(url)
if m:
self.url = url
return url
```
|
2009/05/06
|
[
"https://Stackoverflow.com/questions/827557",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/91163/"
] |
I'm using the one used by Django and it seems to work pretty well:
```
def is_valid_url(url):
import re
regex = re.compile(
r'^https?://' # http:// or https://
r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+[A-Z]{2,6}\.?|' # domain...
r'localhost|' # localhost...
r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ...or ip
r'(?::\d+)?' # optional port
r'(?:/?|[/?]\S+)$', re.IGNORECASE)
return url is not None and regex.search(url)
```
You can always check the latest version here: <https://github.com/django/django/blob/master/django/core/validators.py#L74>
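To try it out, here is a self-contained run of the validator above (the function body is a verbatim copy, plus two example calls):

```python
import re

def is_valid_url(url):
    # Same pattern as above (an older version of Django's URL validator).
    regex = re.compile(
        r'^https?://'  # http:// or https://
        r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+[A-Z]{2,6}\.?|'  # domain...
        r'localhost|'  # localhost...
        r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'  # ...or ip
        r'(?::\d+)?'  # optional port
        r'(?:/?|[/?]\S+)$', re.IGNORECASE)
    return url is not None and regex.search(url)

print(bool(is_valid_url('http://www.example.com/path?q=1')))  # True
print(bool(is_valid_url('example.com')))                      # False: no scheme
```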
|
The regex provided should match any URL of the form described in <http://www.ietf.org/rfc/rfc3986.txt>, and does when tested in the Python interpreter.
What format have the URLs you've been having trouble parsing had?
|
55,381,039
|
I am trying to get a dynamic text displayed in the system tray (this will be 2 numbers (from 1 to 100) changing every 2 minutes).
I found this [script](http://code.activestate.com/recipes/475155-dynamic-system-tray-icon-wxpython/) as a starting point (but I am not commited to it!).
But I get this error :
```
TypeError: Image.SetData(): arguments did not match any overloaded call:
overload 1: argument 1 has unexpected type 'str'
overload 2: argument 1 has unexpected type 'str'
OnInit returned false, exiting...
```
The relevant part of the code is:
```
def Get(self,l,r):
s=""+self.s_line
for i in range(5):
if i<(5-l):
sl = self.sl_off
else:
sl = self.sl_on
if i<(5-r):
sr = self.sr_off
else:
sr = self.sr_on
s+=self.s_border+sl+self.s_point+sr+self.s_point
s+=self.s_border+sl+self.s_point+sr+self.s_point
s+=self.s_line
image = wx.EmptyImage(16,16)
image.SetData(s)
bmp = image.ConvertToBitmap()
bmp.SetMask(wx.Mask(bmp, wx.WHITE)) #sets the transparency colour to white
icon = wx.EmptyIcon()
icon.CopyFromBitmap(bmp)
return icon
```
I had to update the script by adding `import wx.adv` and by replacing the 2 `wx.TaskBarIcon` occurrences with `wx.adv.TaskBarIcon`.
I am on Windows 10 with Python 3.6
|
2019/03/27
|
[
"https://Stackoverflow.com/questions/55381039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3154274/"
] |
I think this issue was occurring due to using the OpenJDK and not the OracleJDK.
I am no longer having this issue since changing the project SDK to the OracleJDK, so if anyone else ever has this issue in the future... that may be the fix.
|
* Be sure to see also the Swing/Seesaw section [from the Clojure Cookbook](https://github.com/clojure-cookbook/clojure-cookbook/blob/master/04_local-io/4-25_seesaw/4-25_making-a-window.asciidoc)
* [The newer fn/fx lib](https://github.com/fn-fx/fn-fx) for using JavaFX from Clojure.
|
55,381,039
|
I am trying to get a dynamic text displayed in the system tray (this will be 2 numbers (from 1 to 100) changing every 2 minutes).
I found this [script](http://code.activestate.com/recipes/475155-dynamic-system-tray-icon-wxpython/) as a starting point (but I am not commited to it!).
But I get this error :
```
TypeError: Image.SetData(): arguments did not match any overloaded call:
overload 1: argument 1 has unexpected type 'str'
overload 2: argument 1 has unexpected type 'str'
OnInit returned false, exiting...
```
The relevant part of the code is:
```
def Get(self,l,r):
s=""+self.s_line
for i in range(5):
if i<(5-l):
sl = self.sl_off
else:
sl = self.sl_on
if i<(5-r):
sr = self.sr_off
else:
sr = self.sr_on
s+=self.s_border+sl+self.s_point+sr+self.s_point
s+=self.s_border+sl+self.s_point+sr+self.s_point
s+=self.s_line
image = wx.EmptyImage(16,16)
image.SetData(s)
bmp = image.ConvertToBitmap()
bmp.SetMask(wx.Mask(bmp, wx.WHITE)) #sets the transparency colour to white
icon = wx.EmptyIcon()
icon.CopyFromBitmap(bmp)
return icon
```
I had to update the script by adding `import wx.adv` and by replacing the 2 `wx.TaskBarIcon` occurrences with `wx.adv.TaskBarIcon`.
I am on Windows 10 with Python 3.6
|
2019/03/27
|
[
"https://Stackoverflow.com/questions/55381039",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3154274/"
] |
I've seen the `CompilerException java.awt.AWTError: Assistive Technology not found` when trying to run a PDF generation code (which uses AWT) on a linux server with OpenJDK 8. After a switch to JDK 10/11 the error went away.
There might be lots of "fun" issues with graphics-related code, especially when you run on a server without a proper display.
I know we get the `CompilerException java.lang.NoClassDefFoundError: Could not initialize class` error in that case when running the Alpine Linux distribution in docker - although it was a different one: `java.lang.NoClassDefFoundError: Could not initialize class sun.awt.image.IntegerInterleavedRaster`
In our case, it was ultimately related to the `fontconfig` package. For Alpine Linux the following helped: `apk --update add ttf-dejavu`
More about this problem:
* OpenJDK bug <https://bugs.alpinelinux.org/issues/7372>
* [Could not initialize class sun.awt.X11FontManager using openjdk 8 on alpine linux](https://stackoverflow.com/questions/45315251/could-not-initialize-class-sun-awt-x11fontmanager-using-openjdk-8-on-alpine-linu)
|
* Be sure to see also the Swing/Seesaw section [from the Clojure Cookbook](https://github.com/clojure-cookbook/clojure-cookbook/blob/master/04_local-io/4-25_seesaw/4-25_making-a-window.asciidoc)
* [The newer fn/fx lib](https://github.com/fn-fx/fn-fx) for using JavaFX from Clojure.
|
50,431,371
|
I am trying to create a python program that uses user input in an equation. When I run the program, it gives this error: `answer = ((((A*10**A)**2)**(B*C))*D**E) TypeError: unsupported operand type(s) for ** or pow(): 'int' and 'str'`. My code is:
```
import cmath
A = input("Enter a number for A: ")
B = input("Enter a number for B: ")
C = input("Enter a number for C: ")
D = input("Enter a number for D: ")
E = input("Enter a number for E: ")
answer = ((((A*10**A)**2)**(B*C))*D**E)
print(answer)
```
|
2018/05/20
|
[
"https://Stackoverflow.com/questions/50431371",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6754577/"
] |
The [`input()`](https://docs.python.org/3/library/functions.html#input) function returns a string value: you need to convert to a number using `Decimal`:
```
from decimal import Decimal
A = Decimal(input("Enter a number for A: "))
# ... etc
```
But your user might enter something that isn't a decimal number, so you might want to do some checking:
```
from decimal import Decimal, InvalidOperation
def get_decimal_input(variableName):
x = None
while x is None:
try:
x = Decimal(input('Enter a number for ' + variableName + ': '))
except InvalidOperation:
print("That's not a number")
return x
A = get_decimal_input('A')
B = get_decimal_input('B')
C = get_decimal_input('C')
D = get_decimal_input('D')
E = get_decimal_input('E')
print((((A * 10 ** A) ** 2) ** (B * C)) * D ** E)
```
|
The interpreter treats your inputs as strings. You can wrap each of A, B, C, D, E with float() to cast the input to float type, provided you're actually entering numbers at the terminal. This way you're taking powers of float numbers instead of strings, which Python doesn't know how to handle.
```
A = float(input("Enter a number for A: "))
B = float(input("Enter a number for B: "))
C = float(input("Enter a number for C: "))
D = float(input("Enter a number for D: "))
E = float(input("Enter a number for E: "))
```
|
50,431,371
|
I am trying to create a python program that uses user input in an equation. When I run the program, it gives this error: `answer = ((((A*10**A)**2)**(B*C))*D**E) TypeError: unsupported operand type(s) for ** or pow(): 'int' and 'str'`. My code is:
```
import cmath
A = input("Enter a number for A: ")
B = input("Enter a number for B: ")
C = input("Enter a number for C: ")
D = input("Enter a number for D: ")
E = input("Enter a number for E: ")
answer = ((((A*10**A)**2)**(B*C))*D**E)
print(answer)
```
|
2018/05/20
|
[
"https://Stackoverflow.com/questions/50431371",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6754577/"
] |
The [`input()`](https://docs.python.org/3/library/functions.html#input) function returns a string value: you need to convert to a number using `Decimal`:
```
from decimal import Decimal
A = Decimal(input("Enter a number for A: "))
# ... etc
```
But your user might enter something that isn't a decimal number, so you might want to do some checking:
```
from decimal import Decimal, InvalidOperation
def get_decimal_input(variableName):
x = None
while x is None:
try:
x = Decimal(input('Enter a number for ' + variableName + ': '))
except InvalidOperation:
print("That's not a number")
return x
A = get_decimal_input('A')
B = get_decimal_input('B')
C = get_decimal_input('C')
D = get_decimal_input('D')
E = get_decimal_input('E')
print((((A * 10 ** A) ** 2) ** (B * C)) * D ** E)
```
|
[`input()`](https://docs.python.org/3/library/functions.html#input) returns a string, you have to convert your inputs to [integers](https://docs.python.org/3/library/functions.html#int) (or [floats](https://docs.python.org/3/library/functions.html#float), or [decimals](https://docs.python.org/3/library/decimal.html#decimal.Decimal)...) before you can use them in math equations. I'd suggest creating a separate function to wrap your inputs, e.g.:
```
def num_input(msg):
# you can also do some basic validation before returning the value
return int(input(msg)) # or float(...), or decimal.Decimal(...) ...
A = num_input("Enter a number for A: ")
B = num_input("Enter a number for B: ")
C = num_input("Enter a number for C: ")
D = num_input("Enter a number for D: ")
E = num_input("Enter a number for E: ")
```
|
50,431,371
|
I am trying to create a python program that uses user input in an equation. When I run the program, it gives this error: `answer = ((((A*10**A)**2)**(B*C))*D**E) TypeError: unsupported operand type(s) for ** or pow(): 'int' and 'str'`. My code is:
```
import cmath
A = input("Enter a number for A: ")
B = input("Enter a number for B: ")
C = input("Enter a number for C: ")
D = input("Enter a number for D: ")
E = input("Enter a number for E: ")
answer = ((((A*10**A)**2)**(B*C))*D**E)
print(answer)
```
|
2018/05/20
|
[
"https://Stackoverflow.com/questions/50431371",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6754577/"
] |
The [`input()`](https://docs.python.org/3/library/functions.html#input) function returns a string value: you need to convert to a number using `Decimal`:
```
from decimal import Decimal
A = Decimal(input("Enter a number for A: "))
# ... etc
```
But your user might enter something that isn't a decimal number, so you might want to do some checking:
```
from decimal import Decimal, InvalidOperation
def get_decimal_input(variableName):
x = None
while x is None:
try:
x = Decimal(input('Enter a number for ' + variableName + ': '))
except InvalidOperation:
print("That's not a number")
return x
A = get_decimal_input('A')
B = get_decimal_input('B')
C = get_decimal_input('C')
D = get_decimal_input('D')
E = get_decimal_input('E')
print((((A * 10 ** A) ** 2) ** (B * C)) * D ** E)
```
|
That code would run fine on Python 2.7. I think you are using Python 3.5+, so you have to cast the variables; it becomes:
```
import cmath
A = int(input("Enter a number for A: "))
B = int(input("Enter a number for B: "))
C = int(input("Enter a number for C: "))
D = int(input("Enter a number for D: "))
E = int(input("Enter a number for E: "))
answer = ((((A*10**A)**2)**(B*C))*D**E)
print(answer)
```
I tested it
```
Enter a number for A: 2
Enter a number for B: 2
Enter a number for C: 2
Enter a number for D: 2
Enter a number for E: 2
10240000000000000000
```
|
50,431,371
|
I am trying to create a python program that uses user input in an equation. When I run the program, it gives this error: `answer = ((((A*10**A)**2)**(B*C))*D**E) TypeError: unsupported operand type(s) for ** or pow(): 'int' and 'str'`. My code is:
```
import cmath
A = input("Enter a number for A: ")
B = input("Enter a number for B: ")
C = input("Enter a number for C: ")
D = input("Enter a number for D: ")
E = input("Enter a number for E: ")
answer = ((((A*10**A)**2)**(B*C))*D**E)
print(answer)
```
|
2018/05/20
|
[
"https://Stackoverflow.com/questions/50431371",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6754577/"
] |
The [`input()`](https://docs.python.org/3/library/functions.html#input) function returns a string value: you need to convert to a number using `Decimal`:
```
from decimal import Decimal
A = Decimal(input("Enter a number for A: "))
# ... etc
```
But your user might enter something that isn't a decimal number, so you might want to do some checking:
```
from decimal import Decimal, InvalidOperation
def get_decimal_input(variableName):
x = None
while x is None:
try:
x = Decimal(input('Enter a number for ' + variableName + ': '))
except InvalidOperation:
print("That's not a number")
return x
A = get_decimal_input('A')
B = get_decimal_input('B')
C = get_decimal_input('C')
D = get_decimal_input('D')
E = get_decimal_input('E')
print((((A * 10 ** A) ** 2) ** (B * C)) * D ** E)
```
|
There are three ways to fix it. Either:
```
A = int(input("Enter a number for A: "))
B = int(input("Enter a number for B: "))
C = int(input("Enter a number for C: "))
D = int(input("Enter a number for D: "))
E = int(input("Enter a number for E: "))
```
which limits you to integers (whole numbers)
or:
```
A = float(input("Enter a number for A: "))
B = float(input("Enter a number for B: "))
C = float(input("Enter a number for C: "))
D = float(input("Enter a number for D: "))
E = float(input("Enter a number for E: "))
```
which limits you to float numbers (which have numbers on both sides of the decimal point, [which can act a bit weird](https://stackoverflow.com/questions/7545015/can-someone-explain-this-0-2-0-1-0-30000000000000004))
The third way is not as recommended as the other two, since it relies on a helper like `num_input` from another answer (it is not built in), and I am not sure it works in Python 3.x as-is:
```
A = num_input("Enter a number for A: ")
B = num_input("Enter a number for B: ")
C = num_input("Enter a number for C: ")
D = num_input("Enter a number for D: ")
E = num_input("Enter a number for E: ")
```
|
14,228,659
|
I can add the XML node using ElementTree, but this writes the output on one single line instead of a tree-like structure when I open the xml file in text format. I also tried using minidom.toprettyxml, but I do not know how to add the output to the original XML. Since I would like the script to be reproducible in other environments, I prefer not to use external libraries such as lxml. Can someone please tell me how I can pretty-print the output? - python 2.7
The Sample XML. This is how it looks both in text format and Explorer.
```
<?xml version="1.0" encoding="utf-8"?>
<default_locators >
<locator_ref>
<name>cherry</name>
<display_name>cherrycherry</display_name>
<workspace_properties>
<factory_progid>Workspace</factory_progid>
<path>InstallDir</path>
</workspace_properties>
</locator_ref>
</default_locators>
```
Expected Output in both text format and Explorer.
```
<?xml version="1.0" encoding="utf-8"?>
<default_locators >
<locator_ref>
<name>cherry</name>
<display_name>cherrycherry</display_name>
<workspace_properties>
<factory_progid>Workspace</factory_progid>
<path>InstallDir</path>
</workspace_properties>
</locator_ref>
<locator_ref>
<name>berry</name>
<display_name>berryberry</display_name>
<workspace_properties>
<factory_progid>Workspace</factory_progid>
<path>C:\temp\temp</path>
</workspace_properties>
</locator_ref>
</default_locators>
```
My script
```
#coding: cp932
import xml.etree.ElementTree as ET
tree = ET.parse(r"C:\DefaultLocators.xml")
root = tree.getroot()
locator_ref = ET.SubElement(root, "locator_ref")
name = ET.SubElement(locator_ref, "name")
name.text = " berry"
display_name = ET.SubElement(locator_ref, "display_name")
display_name.text = "berryberry"
workspace_properties = ET.SubElement(locator_ref, "workspace_properties")
factory_progid = ET.SubElement(workspace_properties,"factory_progid")
factory_progid.text = "Workspace"
path = ET.SubElement(workspace_properties, "path")
path.text = r"c:\temp\temp"
tree.write(r"C:\DefaultLocators.xml", encoding='utf-8')
```
Returned output. After running my script, new nodes are added to my sample.xml file, but it returns output in one single line, with all newlines and indents removed from the original sample.xml file. At least that's how it looks when I open the sample.xml file in text format. However, when I open the sample.xml file in Explorer, it looks fine. I still see the newlines and indents as they were before. How can I keep the original tree structure in text format even after running the script?
```
<default_locators>
<locator_ref>
<name>cherry</name>
<display_name>cherrycherry</display_name>
<workspace_properties>
<factory_progid>Workspace</factory_progid>
<path>InstallDir</path>
</workspace_properties>
</locator_ref>
<locator_ref><name> berry</name><display_name>berryberry</display_name><workspace_properties><factory_progid>Workspace</factory_progid><path>c:\temp\temp</path></workspace_properties></locator_ref></default_locators>
```
|
2013/01/09
|
[
"https://Stackoverflow.com/questions/14228659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1027101/"
] |
When dealing with an element, you can do this: `element.tail = '\n'`
The element will then be followed by a newline when written out.
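A small self-contained sketch of that tip (Python 3 ElementTree; the indentation strings are manual and illustrative):

```python
import xml.etree.ElementTree as ET

root = ET.Element('locator_ref')
root.text = '\n  '          # newline + indent before the first child
name = ET.SubElement(root, 'name')
name.text = 'berry'
name.tail = '\n'            # newline after </name>, per the tip above

out = ET.tostring(root).decode()
print(out)
```

Every `.text`/`.tail` newline has to be set by hand, which is why this approach only scales with a helper that walks the tree.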
|
Write your XML with ElementTree as follows (this monkey-patches the serializer):
```
import xml.etree.ElementTree as ET
def serialize_xml(write, elem, encoding, qnames, namespaces):
    tag = elem.tag
    text = elem.text
    if tag is ET.Comment:
        write("<!--%s-->" % ET._encode(text, encoding))
    elif tag is ET.ProcessingInstruction:
        write("<?%s?>" % ET._encode(text, encoding))
    else:
        tag = qnames[tag]
        if tag is None:
            if text:
                write(ET._escape_cdata(text, encoding))
            for e in elem:
                serialize_xml(write, e, encoding, qnames, None)
        else:
            write("\n<" + tag)  ## '\n' added by namit
            items = elem.items()
            if items or namespaces:
                if namespaces:
                    for v, k in sorted(namespaces.items(),
                                       key=lambda x: x[1]):  # sort on prefix
                        if k:
                            k = ":" + k
                        write(" xmlns%s=\"%s\"" % (
                            k.encode(encoding),
                            ET._escape_attrib(v, encoding)
                            ))
                for k, v in sorted(items):  # lexical order
                    if isinstance(k, ET.QName):
                        k = k.text
                    if isinstance(v, ET.QName):
                        v = qnames[v.text]
                    else:
                        v = ET._escape_attrib(v, encoding)
                    write(" %s=\"%s\"" % (qnames[k], v))
            if text or len(elem):
                write(">")
                if text:
                    write(ET._escape_cdata(text, encoding))
                for e in elem:
                    serialize_xml(write, e, encoding, qnames, None)
                write("</" + tag + ">")
            else:
                write(" />")
    if elem.tail:
        write(ET._escape_cdata(elem.tail, encoding))

ET._serialize_xml = serialize_xml
tree = ET.parse(r"samplexml.xml")
root = tree.getroot()
locator_ref = ET.SubElement(root, "locator_ref")
name = ET.SubElement(locator_ref, "name")
name.text = " berry"
display_name = ET.SubElement(locator_ref, "display_name")
display_name.text = "berryberry"
workspace_properties = ET.SubElement(locator_ref, "workspace_properties")
factory_progid = ET.SubElement(workspace_properties,"factory_progid")
factory_progid.text = "WorkspaceFactory"
path = ET.SubElement(workspace_properties, "path")
ins_out=open("samplexml_1.xml",'wb',1000)
ET.ElementTree(locator_ref).write(ins_out,encoding="ASCII")
ins_out.close()
```
This writes each element on its own line (note the added `"\n<" + tag` above), without adding whitespace to the XML tails.
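Since the question asks for pretty output without external libraries, a stdlib-only alternative is to round-trip through `minidom.toprettyxml`. A sketch (element names are illustrative):

```python
import xml.dom.minidom as minidom
import xml.etree.ElementTree as ET

root = ET.Element('default_locators')
ref = ET.SubElement(root, 'locator_ref')
ET.SubElement(ref, 'name').text = 'berry'

# Serialize compactly with ElementTree, then reparse with minidom
# purely to get indentation out of toprettyxml.
rough = ET.tostring(root)
pretty = minidom.parseString(rough).toprettyxml(indent='  ')
print(pretty)
```

One caveat: `toprettyxml` may introduce whitespace text nodes, so this is best done once, when writing the final file.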
|
14,228,659
|
I can add the XML node using ElementTree, but this writes the output on one single line instead of a tree-like structure when I open the xml file in text format. I also tried using minidom.toprettyxml, but I do not know how to add the output to the original XML. Since I would like the script to be reproducible in other environments, I prefer not to use external libraries such as lxml. Can someone please tell me how I can pretty-print the output? - python 2.7
The Sample XML. This is how it looks both in text format and Explorer.
```
<?xml version="1.0" encoding="utf-8"?>
<default_locators >
<locator_ref>
<name>cherry</name>
<display_name>cherrycherry</display_name>
<workspace_properties>
<factory_progid>Workspace</factory_progid>
<path>InstallDir</path>
</workspace_properties>
</locator_ref>
</default_locators>
```
Expected Output in both text format and Explorer.
```
<?xml version="1.0" encoding="utf-8"?>
<default_locators >
<locator_ref>
<name>cherry</name>
<display_name>cherrycherry</display_name>
<workspace_properties>
<factory_progid>Workspace</factory_progid>
<path>InstallDir</path>
</workspace_properties>
</locator_ref>
<locator_ref>
<name>berry</name>
<display_name>berryberry</display_name>
<workspace_properties>
<factory_progid>Workspace</factory_progid>
<path>C:\temp\temp</path>
</workspace_properties>
</locator_ref>
</default_locators>
```
My script
```
#coding: cp932
import xml.etree.ElementTree as ET
tree = ET.parse(r"C:\DefaultLocators.xml")
root = tree.getroot()
locator_ref = ET.SubElement(root, "locator_ref")
name = ET.SubElement(locator_ref, "name")
name.text = " berry"
display_name = ET.SubElement(locator_ref, "display_name")
display_name.text = "berryberry"
workspace_properties = ET.SubElement(locator_ref, "workspace_properties")
factory_progid = ET.SubElement(workspace_properties,"factory_progid")
factory_progid.text = "Workspace"
path = ET.SubElement(workspace_properties, "path")
path.text = r"c:\temp\temp"
tree.write(r"C:\DefaultLocators.xml", encoding='utf-8')
```
Returned output. After running my script, new nodes are added to my sample.xml file, but it returns output in one single line, with all newlines and indents removed from the original sample.xml file. At least that's how it looks when I open the sample.xml file in text format. However, when I open the sample.xml file in Explorer, it looks fine. I still see the newlines and indents as they were before. How can I keep the original tree structure in text format even after running the script?
```
<default_locators>
<locator_ref>
<name>cherry</name>
<display_name>cherrycherry</display_name>
<workspace_properties>
<factory_progid>Workspace</factory_progid>
<path>InstallDir</path>
</workspace_properties>
</locator_ref>
<locator_ref><name> berry</name><display_name>berryberry</display_name><workspace_properties><factory_progid>Workspace</factory_progid><path>c:\temp\temp</path></workspace_properties></locator_ref></default_locators>
```
|
2013/01/09
|
[
"https://Stackoverflow.com/questions/14228659",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1027101/"
] |
When dealing with an element, you can do this: `element.tail = '\n'`
The element will then be followed by a newline when written out.
|
I think you must try [lxml library](http://lxml.de/tutorial.html). It's the best way to parse XML in Python.
It has the magic argument `pretty_print` for such things.
Here's an example:
```
import lxml.etree as etree
root = etree.Element("root")
for rn in range(10):
etree.SubElement(root, "column_%s" % str(rn)).text = str(rn*rn)
pretty_data = etree.tostring(root, pretty_print=True, encoding = 'utf-8')
print pretty_data
```
Result: <http://pastebin.com/y0rkQ78G>
|
2,565,415
|
What is the right way to form an in-memory table in Python with direct lookups for rows and columns?
I thought of using a dict of dicts, this way:
```
class Table(dict):
def __getitem__(self, key):
if key not in self:
self[key]={}
return dict.__getitem__(self, key)
table = Table()
table['row1']['column1'] = 'value11'
table['row1']['column2'] = 'value12'
table['row2']['column1'] = 'value21'
table['row2']['column2'] = 'value22'
>>>table
{'row1':{'column1':'value11','column2':'value12'},'row2':{'column1':'value21','column2':'value22'}}
```
I had difficulty looking up values in columns.
```
>>>'row1' in table
True
>>>'value11' in table['row1'].values()
True
```
Now, how do I look up whether `'column1'` has `'value11'`?
Is this method of forming tables wrong?
Is there a better way to implement such tables with easier lookups?
|
2010/04/02
|
[
"https://Stackoverflow.com/questions/2565415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/231295/"
] |
I'd use an [in-memory database](http://www.sqlite.org/inmemorydb.html) with [SQLite](http://docs.python.org/library/sqlite3.html) for this. The sqlite module is even in the standard library since Python 2.5, which means this doesn't even add much to your requirements.
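A rough sketch of that suggestion; the table and column names here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(':memory:')          # nothing touches disk
conn.execute('CREATE TABLE t (row TEXT, col TEXT, value TEXT)')
conn.executemany('INSERT INTO t VALUES (?, ?, ?)', [
    ('row1', 'column1', 'value11'), ('row1', 'column2', 'value12'),
    ('row2', 'column1', 'value21'), ('row2', 'column2', 'value22'),
])

# Lookup: does column1 hold value11 anywhere, and in which row?
hit = conn.execute(
    "SELECT row FROM t WHERE col = ? AND value = ?",
    ('column1', 'value11')).fetchone()
print(hit)  # ('row1',)
```

Row lookups, column lookups, and value lookups all become plain `WHERE` clauses, and you can add indexes if the table grows.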
|
A nested list should be able to do the job here. I would only use nested dictionaries if elements are spread thin across the grid.
```
grid = []
for row in range(height):
    grid.append([])
    for cell in range(width):
        grid[-1].append(value)
```
Checking rows is easy:
```
def valueInRow(value, row):
    return value in grid[row]
```
Checking columns takes a little more work, because the grid is a list of rows, not a list of columns:
```
def columnIterator(column):
    height = len(grid)
    for row in xrange(height):
        yield grid[row][column]

def valueInColumn(value, column):
    return value in columnIterator(column)
```
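A self-contained version of the two checks above on a small grid (Python 3 spelling, so `xrange` becomes `range`):

```python
grid = [[0, 1, 2],
        [3, 4, 5],
        [6, 7, 8]]

def value_in_row(value, row):
    return value in grid[row]

def value_in_column(value, column):
    # Walk one cell per row, since the grid is stored row-major.
    return any(grid[r][column] == value for r in range(len(grid)))

print(value_in_row(4, 1))       # True
print(value_in_column(7, 1))    # True
print(value_in_column(3, 2))    # False
```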
|
2,565,415
|
What is the right way to form an in-memory table in Python with direct lookups for rows and columns?
I thought of using a dict of dicts, this way:
```
class Table(dict):
def __getitem__(self, key):
if key not in self:
self[key]={}
return dict.__getitem__(self, key)
table = Table()
table['row1']['column1'] = 'value11'
table['row1']['column2'] = 'value12'
table['row2']['column1'] = 'value21'
table['row2']['column2'] = 'value22'
>>>table
{'row1':{'column1':'value11','column2':'value12'},'row2':{'column1':'value21','column2':'value22'}}
```
I had difficulty looking up values in columns.
```
>>>'row1' in table
True
>>>'value11' in table['row1'].values()
True
```
Now, how do I look up whether `'column1'` has `'value11'`?
Is this method of forming tables wrong?
Is there a better way to implement such tables with easier lookups?
|
2010/04/02
|
[
"https://Stackoverflow.com/questions/2565415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/231295/"
] |
I'd use an [in-memory database](http://www.sqlite.org/inmemorydb.html) with [SQLite](http://docs.python.org/library/sqlite3.html) for this. The sqlite module is even in the standard library since Python 2.5, which means this doesn't even add much to your requirements.
|
>
> Now how do I do lookup if 'column1' has 'value11'
>
>
>
Are you asking about this?
```
found= False
for r in table:
    if table[r]['column1'] == 'value11':
found= True
break
```
Is this what you're trying to do?
|
2,565,415
|
What is the right way to form an in-memory table in Python with direct lookups for rows and columns?
I thought of using a dict of dicts, this way:
```
class Table(dict):
def __getitem__(self, key):
if key not in self:
self[key]={}
return dict.__getitem__(self, key)
table = Table()
table['row1']['column1'] = 'value11'
table['row1']['column2'] = 'value12'
table['row2']['column1'] = 'value21'
table['row2']['column2'] = 'value22'
>>>table
{'row1':{'column1':'value11','column2':'value12'},'row2':{'column1':'value21','column2':'value22'}}
```
I had difficulty looking up values in columns.
```
>>>'row1' in table
True
>>>'value11' in table['row1'].values()
True
```
Now, how do I look up whether `'column1'` has `'value11'`?
Is this method of forming tables wrong?
Is there a better way to implement such tables with easier lookups?
|
2010/04/02
|
[
"https://Stackoverflow.com/questions/2565415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/231295/"
] |
>
> Now how do I do lookup if 'column1'
> has 'value11'
>
>
>
`any(arow['column1'] == 'value11' for arow in table.itervalues())` (note `itervalues()`, not `iteritems()`, which would yield key/value pairs)
>
> Is this method of forming tables
> wrong?
>
>
>
No, it's just very "exposed", perhaps too much -- it could usefully be encapsulated in a class which exposes the methods you need, then the issue of how best to implement them does not affect all the rest of your application.
>
> Is there a better way to implement
> such tables with easier lookups?
>
>
>
Once you have designed a class whose interface you'd *like* to use, you can experiment with very different implementation approaches and benchmark them on a workload that's representative of *your* usage pattern, so you can find out what's best for *you* (assuming table manipulation and lookup are a big part of your application's runtime, of course -- to find out, **profile** your app).
I had similar but not identical needs in a large internal app I maintain at work, except that the row indices are integer (only the column names are strings), the column order is important, and the workload is more about "editing" the table (adding, removing, reordering rows or columns, renaming columns, etc). I started with a table exposing the functionality I needed, with the simplest rough-and-ready implementation internally (a list of dicts, plus a list of column names for the column ordering); and by now I have evolved it (independently of the actual "application-level" parts, but based on profiling and benchmarking thereof) to completely different implementations (currently based on `numpy`).
I think you should proceed along similar lines: "clothe" your current implementation into a nice "interface" with all the methods you need, profile your app -- unless this table object is a performance bottleneck, you're done; if it *is* a bottleneck, you can optimize the implementation (experiment, measure, repeat;-) without disturbing any of the rest of your application.
Inheriting from `dict` is not a good idea because you probably don't want to expose all of `dict`'s rich functionality; plus, what you're doing is, roughly, an inefficient implementation of `collections.defaultdict(dict)`. So, **encapsulate** the latter:
```
import collections

class Table(object):
    def __init__(self):
        self.d = collections.defaultdict(dict)
    def add(self, row, col, val):
        self.d[row][col] = val
    def get(self, row, col, default=None):
        return self.d[row].get(col, default)
    def inrow(self, row, col):
        return col in self.d[row]
    def incol(self, col, val):
        return any(x.get(col) == val for x in self.d.itervalues())
```
etc, etc -- write all the methods your app needs, with useful, short names, then maybe see if you can alias some of them as special methods if they're often used that way, e.g. maybe (assuming Python 2.\* -- requires slightly different syntax in 3.\*):
```
def __setitem__(self, (row, col), val):
    self.add(row, col, val)
```
and so forth. Once you have the code working, *then* comes the right time for profiling, benchmarking, and -- just perhaps -- internal optimization of the implementation.
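As a quick illustration, here is a Python 3 sketch of using such a class (in Python 3, `itervalues()` becomes `values()`; the method names follow the snippet above, and the row/column values are illustrative):

```python
import collections

class Table:
    """Minimal row/column table backed by a defaultdict of dicts."""
    def __init__(self):
        self.d = collections.defaultdict(dict)

    def add(self, row, col, val):
        self.d[row][col] = val

    def get(self, row, col, default=None):
        return self.d[row].get(col, default)

    def inrow(self, row, col):
        # does this row have a value in the given column?
        return col in self.d[row]

    def incol(self, col, val):
        # does any row hold `val` in the given column?
        return any(r.get(col) == val for r in self.d.values())

table = Table()
table.add('row1', 'column1', 'value11')
table.add('row2', 'column1', 'value21')

print(table.incol('column1', 'value11'))  # True
print(table.inrow('row2', 'column2'))     # False
```

The application code never touches the underlying dict-of-dicts directly, so the storage could later be swapped out without changing any callers.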
|
I'd use an [in-memory database](http://www.sqlite.org/inmemorydb.html) with [SQLite](http://docs.python.org/library/sqlite3.html) for this. The sqlite module is even in the standard library since Python 2.5, which means this doesn't even add much to your requirements.
|
2,565,415
|
What is the right way to form an in-memory table in Python, with direct lookups for rows and columns?
I thought of using a dict of dicts this way:
```
class Table(dict):
    def __getitem__(self, key):
        if key not in self:
            self[key] = {}
        return dict.__getitem__(self, key)
table = Table()
table['row1']['column1'] = 'value11'
table['row1']['column2'] = 'value12'
table['row2']['column1'] = 'value21'
table['row2']['column2'] = 'value22'
>>>table
{'row1':{'column1':'value11','column2':'value12'},'row2':{'column1':'value21','column2':'value22'}}
```
I had difficulty looking up values in columns.
```
>>>'row1' in table
True
>>>'value11' in table['row1'].values()
True
```
Now how do I look up whether `'column1'` has `'value11'`?
Is this method of forming tables wrong?
Is there a better way to implement such tables with easier lookups?
|
2010/04/02
|
[
"https://Stackoverflow.com/questions/2565415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/231295/"
] |
>
> Now how do I do lookup if 'column1'
> has 'value11'
>
>
>
`any(row['column1'] == 'value11' for row in table.itervalues())`
>
> Is this method of forming tables
> wrong?
>
>
>
No, it's just very "exposed", perhaps too much -- it could usefully be encapsulated in a class which exposes the methods you need, then the issue of how best to implement them does not affect all the rest of your application.
>
> Is there a better way to implement
> such tables with easier lookups?
>
>
>
Once you have designed a class whose interface you'd *like* to use, you can experiment with very different implementation approaches and benchmark them on a workload that's representative of *your* usage pattern, so you can find out what's best for *you* (assuming table manipulation and lookup are a big part of your application's runtime, of course -- to find out, **profile** your app).
I had similar but not identical needs in a large internal app I maintain at work, except that the row indices are integer (only the column names are strings), the column order is important, and the workload is more about "editing" the table (adding, removing, reordering rows or columns, renaming columns, etc). I started with a table exposing the functionality I needed, with the simplest rough-and-ready implementation internally (a list of dicts, plus a list of column names for the column ordering); and by now I have evolved it (independently of the actual "application-level" parts, but based on profiling and benchmarking thereof) to completely different implementations (currently based on `numpy`).
I think you should proceed along similar lines: "clothe" your current implementation into a nice "interface" with all the methods you need, profile your app -- unless this table object is a performance bottleneck, you're done; if it *is* a bottleneck, you can optimize the implementation (experiment, measure, repeat;-) without disturbing any of the rest of your application.
Inheriting from `dict` is not a good idea because you probably don't want to expose all of `dict`'s rich functionality; plus, what you're doing is, roughly, an inefficient implementation of `collections.defaultdict(dict)`. So, **encapsulate** the latter:
```
import collections

class Table(object):
    def __init__(self):
        self.d = collections.defaultdict(dict)
    def add(self, row, col, val):
        self.d[row][col] = val
    def get(self, row, col, default=None):
        return self.d[row].get(col, default)
    def inrow(self, row, col):
        return col in self.d[row]
    def incol(self, col, val):
        return any(x.get(col) == val for x in self.d.itervalues())
```
etc, etc -- write all the methods your app needs, with useful, short names, then maybe see if you can alias some of them as special methods if they're often used that way, e.g. maybe (assuming Python 2.\* -- requires slightly different syntax in 3.\*):
```
def __setitem__(self, (row, col), val):
    self.add(row, col, val)
```
and so forth. Once you have the code working, *then* comes the right time for profiling, benchmarking, and -- just perhaps -- internal optimization of the implementation.
|
A nested list should be able to do the job here. I would only use nested dictionaries if elements are spread thin across the grid.
```
grid = []
for row in range(height):
    grid.append([])
    for cell in range(width):
        grid[-1].append(value)
```
Checking rows is easy:
```
def valueInRow(value, row):
    return value in grid[row]
```
Checking columns takes a little more work, because the grid is a list of rows, not a list of columns:
```
def columnIterator(column):
    height = len(grid)
    for row in xrange(height):
        yield grid[row][column]

def valueInColumn(value, column):
    return value in columnIterator(column)
```
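A self-contained sanity check of this row/column approach (Python 3 here, so `range` replaces `xrange`; the 3×3 grid values are illustrative):

```python
# Build a 3x3 grid as a list of rows: [[0,1,2],[3,4,5],[6,7,8]].
height, width = 3, 3
grid = [[row * width + col for col in range(width)] for row in range(height)]

def value_in_row(value, row):
    # rows are stored directly, so this is a plain membership test
    return value in grid[row]

def value_in_column(value, column):
    # walk down one column of the row-major grid
    return any(grid[r][column] == value for r in range(height))

print(value_in_row(4, 1))      # True: row 1 is [3, 4, 5]
print(value_in_column(7, 1))   # True: column 1 is [1, 4, 7]
```

Row lookups stay O(width) and column lookups O(height); for sparse data, the dict-of-dicts approach from the question avoids storing the empty cells.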
|
2,565,415
|
What is the right way to form an in-memory table in Python, with direct lookups for rows and columns?
I thought of using a dict of dicts this way:
```
class Table(dict):
    def __getitem__(self, key):
        if key not in self:
            self[key] = {}
        return dict.__getitem__(self, key)
table = Table()
table['row1']['column1'] = 'value11'
table['row1']['column2'] = 'value12'
table['row2']['column1'] = 'value21'
table['row2']['column2'] = 'value22'
>>>table
{'row1':{'column1':'value11','column2':'value12'},'row2':{'column1':'value21','column2':'value22'}}
```
I had difficulty looking up values in columns.
```
>>>'row1' in table
True
>>>'value11' in table['row1'].values()
True
```
Now how do I look up whether `'column1'` has `'value11'`?
Is this method of forming tables wrong?
Is there a better way to implement such tables with easier lookups?
|
2010/04/02
|
[
"https://Stackoverflow.com/questions/2565415",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/231295/"
] |
>
> Now how do I do lookup if 'column1'
> has 'value11'
>
>
>
`any(row['column1'] == 'value11' for row in table.itervalues())`
>
> Is this method of forming tables
> wrong?
>
>
>
No, it's just very "exposed", perhaps too much -- it could usefully be encapsulated in a class which exposes the methods you need, then the issue of how best to implement them does not affect all the rest of your application.
>
> Is there a better way to implement
> such tables with easier lookups?
>
>
>
Once you have designed a class whose interface you'd *like* to use, you can experiment with very different implementation approaches and benchmark them on a workload that's representative of *your* usage pattern, so you can find out what's best for *you* (assuming table manipulation and lookup are a big part of your application's runtime, of course -- to find out, **profile** your app).
I had similar but not identical needs in a large internal app I maintain at work, except that the row indices are integer (only the column names are strings), the column order is important, and the workload is more about "editing" the table (adding, removing, reordering rows or columns, renaming columns, etc). I started with a table exposing the functionality I needed, with the simplest rough-and-ready implementation internally (a list of dicts, plus a list of column names for the column ordering); and by now I have evolved it (independently of the actual "application-level" parts, but based on profiling and benchmarking thereof) to completely different implementations (currently based on `numpy`).
I think you should proceed along similar lines: "clothe" your current implementation into a nice "interface" with all the methods you need, profile your app -- unless this table object is a performance bottleneck, you're done; if it *is* a bottleneck, you can optimize the implementation (experiment, measure, repeat;-) without disturbing any of the rest of your application.
Inheriting from `dict` is not a good idea because you probably don't want to expose all of `dict`'s rich functionality; plus, what you're doing is, roughly, an inefficient implementation of `collections.defaultdict(dict)`. So, **encapsulate** the latter:
```
import collections
class Table(object):
def __init__(self):
self.d = collections.defaultdict(dict)
def add(self, row, col, val):
self.d[row][col] = val
def get(self, row, col, default=None):
return self.d[row].get(col, default)
def inrow(self, row, col):
return col in self.d[row]
def incol(self, col, val):
return any(x[col]==val for x in self.d.iteritems())
```
etc, etc -- write all the methods your app needs, with useful, short names, then maybe see if you can alias some of them as special methods if they're often used that way, e.g. maybe (assuming Python 2.\* -- requires slightly different syntax in 3.\*):
```
def __setitem__(self, (row, col), val):
    self.add(row, col, val)
```
and so forth. Once you have the code working, *then* comes the right time for profiling, benchmarking, and -- just perhaps -- internal optimization of the implementation.
|
>
> Now how do I do lookup if 'column1' has 'value11'
>
>
>
Are you asking about this?
```
found = False
for r in table:
    if table[r]['column1'] == 'value11':
        found = True
        break
```
Is this what you're trying to do?
|
6,184,079
|
Similar questions have been asked, but I have not come across an easy-to-do-it way
We have some application logs of various kinds which fill up the space and cause other unwanted issues. How do I write a monitoring script (zipping files over a particular size, moving them, watching them, etc.) for this maintenance? I am looking for a simple solution (as in, what to use?), if possible in Python, or maybe just a shell script.
Thanks.
|
2011/05/31
|
[
"https://Stackoverflow.com/questions/6184079",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/294714/"
] |
The "standard" way of doing this (atleast on most Gnu/Linux distros) is to use [logrotate](http://www.linuxcommand.org/man_pages/logrotate8.html). I see a `/etc/logrotate.conf` on my Debian machine which has details on which files to rotate and at what frequency. It's triggered by a daily cron entry. This is what I'd recommend.
If you want your application itself to do this (which is a pain, really, since it's not its job), you could consider writing a custom [log handler](http://docs.python.org/library/logging.handlers.html#module-logging.handlers). A RotatingFileHandler (or TimedRotatingFileHandler) might work, but you can write a custom one.
Most systems are by default set up to automatically rotate log files which are emitted by syslog. You might want to consider using the SysLogHandler and logging to syslog (from all your apps regardless of language) so that the system infrastructure automatically takes care of things for you.
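For example, a minimal `RotatingFileHandler` sketch might look like this (the file name, size limit, and backup count are illustrative):

```python
import logging
import logging.handlers

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)

# Rotate once the file reaches ~1 MB, keeping 5 old copies
# (app.log.1 .. app.log.5); the oldest copy is deleted on each rotation.
handler = logging.handlers.RotatingFileHandler(
    "app.log", maxBytes=1000000, backupCount=5)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("application started")
```

Because the handler performs the rotation itself, this avoids the open-file-handle problem that external rotation can cause, at the cost of tying log maintenance to the application.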
|
Use [logrotate](http://linuxcommand.org/man_pages/logrotate8.html) to do the work for you.
Remember that there are few cases where it **may not work properly**, for example if the logging application keeps the log file always open and is not able to resume it if the file is removed and recreated.
Over the years I encountered few applications like that, but even for them you could configure logrotate to restart them when it rotates the logs.
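For reference, a logrotate stanza along those lines might look like this (the log path and restart command are placeholders for your own application):

```
/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    postrotate
        # placeholder: signal or restart the app so it reopens its log file
        /etc/init.d/myapp restart > /dev/null 2>&1 || true
    endscript
}
```

Alternatively, the `copytruncate` directive keeps the original file open for the application, at the cost of possibly losing a few lines written during the copy.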
|
54,446,492
|
I have a requirement where I have to feed a dataset from a blob into my Python code, where processing will happen, and then store the processed dataset back to the blob. Where should I do it? Any notebooks?
Azure Functions don't have an option to write Python code.
Any help would be appreciated.
|
2019/01/30
|
[
"https://Stackoverflow.com/questions/54446492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9668890/"
] |
The difference here is *really* subtle, and can only *easily* be appreciated in IL:
```
class MyBuilder1
{
private MySynchronizer m_synchronizer = new MySynchronizer();
public MyBuilder1()
{
}
}
```
gives us the constructor:
```
.method public hidebysig specialname rtspecialname
instance void .ctor () cil managed
{
// Method begins at RVA 0x2050
// Code size 18 (0x12)
.maxstack 8
IL_0000: ldarg.0
IL_0001: newobj instance void MySynchronizer::.ctor()
IL_0006: stfld class MySynchronizer MyBuilder1::m_synchronizer
IL_000b: ldarg.0
IL_000c: call instance void [mscorlib]System.Object::.ctor()
IL_0011: ret
} // end of method MyBuilder1::.ctor
```
whereas this:
```
class MyBuilder2
{
private MySynchronizer m_synchronizer;
public MyBuilder2()
{
m_synchronizer = new MySynchronizer();
}
}
```
gives us:
```
// Methods
.method public hidebysig specialname rtspecialname
instance void .ctor () cil managed
{
// Method begins at RVA 0x2063
// Code size 18 (0x12)
.maxstack 8
IL_0000: ldarg.0
IL_0001: call instance void [mscorlib]System.Object::.ctor()
IL_0006: ldarg.0
IL_0007: newobj instance void MySynchronizer::.ctor()
IL_000c: stfld class MySynchronizer MyBuilder2::m_synchronizer
IL_0011: ret
} // end of method MyBuilder2::.ctor
```
The difference is simply one of ordering:
* field initializers (`MyBuilder1`) happen *before* the base-type constructor call (`object` is the base here; `call instance void [mscorlib]System.Object::.ctor()` is the base-constructor call)
* constructors happen *after* the base-type constructor call
In most cases, **this won't matter**. Unless your base-constructor invokes a virtual method that the derived type overrides: then whether or not the field has a value in the overridden method will be different between the two.
|
I almost always choose the second option (initializing inside the constructor). From my point of view, it keeps your code more readable, and the control logic is inside the constructor, which gives more flexibility to add logic in the future.
But again, this is only my personal opinion.
|
54,446,492
|
I have a requirement where I have to feed a dataset from a blob into my Python code, where processing will happen, and then store the processed dataset back to the blob. Where should I do it? Any notebooks?
Azure Functions don't have an option to write Python code.
Any help would be appreciated.
|
2019/01/30
|
[
"https://Stackoverflow.com/questions/54446492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9668890/"
] |
The difference here is *really* subtle, and can only *easily* be appreciated in IL:
```
class MyBuilder1
{
private MySynchronizer m_synchronizer = new MySynchronizer();
public MyBuilder1()
{
}
}
```
gives us the constructor:
```
.method public hidebysig specialname rtspecialname
instance void .ctor () cil managed
{
// Method begins at RVA 0x2050
// Code size 18 (0x12)
.maxstack 8
IL_0000: ldarg.0
IL_0001: newobj instance void MySynchronizer::.ctor()
IL_0006: stfld class MySynchronizer MyBuilder1::m_synchronizer
IL_000b: ldarg.0
IL_000c: call instance void [mscorlib]System.Object::.ctor()
IL_0011: ret
} // end of method MyBuilder1::.ctor
```
whereas this:
```
class MyBuilder2
{
private MySynchronizer m_synchronizer;
public MyBuilder2()
{
m_synchronizer = new MySynchronizer();
}
}
```
gives us:
```
// Methods
.method public hidebysig specialname rtspecialname
instance void .ctor () cil managed
{
// Method begins at RVA 0x2063
// Code size 18 (0x12)
.maxstack 8
IL_0000: ldarg.0
IL_0001: call instance void [mscorlib]System.Object::.ctor()
IL_0006: ldarg.0
IL_0007: newobj instance void MySynchronizer::.ctor()
IL_000c: stfld class MySynchronizer MyBuilder2::m_synchronizer
IL_0011: ret
} // end of method MyBuilder2::.ctor
```
The difference is simply one of ordering:
* field initializers (`MyBuilder1`) happen *before* the base-type constructor call (`object` is the base here; `call instance void [mscorlib]System.Object::.ctor()` is the base-constructor call)
* constructors happen *after* the base-type constructor call
In most cases, **this won't matter**. Unless your base-constructor invokes a virtual method that the derived type overrides: then whether or not the field has a value in the overridden method will be different between the two.
|
As @Marc already mentioned, the difference is in the ordering relative to the base constructor call.
I have added the base constructor
```
class Base
{
public Base()
{
Console.WriteLine("Inside Base constructor");
}
}
```
and modified my class "MyBuilder" to derive from it as:
```
class MyBuilder : Base
{
}
```
Now, the output from case1 looks like:
[](https://i.stack.imgur.com/Wq7kR.png)
whereas from case2:
[](https://i.stack.imgur.com/yvrZj.png)
Hence,
* If you have multiple constructors, then the case1 approach might be better, since it is less error-prone: someone could easily add another constructor and forget to chain it.
* If you have a single constructor and no logic flow that depends on base constructor order, then the case2 approach seems better, as it will make the code cleaner. [no offence, personal preference]
|
54,446,492
|
I have a requirement where I have to feed a dataset from a blob into my Python code, where processing will happen, and then store the processed dataset back to the blob. Where should I do it? Any notebooks?
Azure Functions don't have an option to write Python code.
Any help would be appreciated.
|
2019/01/30
|
[
"https://Stackoverflow.com/questions/54446492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9668890/"
] |
As @Marc already mentioned, the difference is in the ordering relative to the base constructor call.
I have added the base constructor
```
class Base
{
public Base()
{
Console.WriteLine("Inside Base constructor");
}
}
```
and modified my class "MyBuilder" to derive from it as:
```
class MyBuilder : Base
{
}
```
Now, the output from case1 looks like:
[](https://i.stack.imgur.com/Wq7kR.png)
whereas from case2:
[](https://i.stack.imgur.com/yvrZj.png)
Hence,
* If you have multiple constructors, then the case1 approach might be better, since it is less error-prone: someone could easily add another constructor and forget to chain it.
* If you have a single constructor and no logic flow that depends on base constructor order, then the case2 approach seems better, as it will make the code cleaner. [no offence, personal preference]
|
I almost always choose the second option (initializing inside the constructor). From my point of view, it keeps your code more readable, and the control logic is inside the constructor, which gives more flexibility to add logic in the future.
But again, this is only my personal opinion.
|
60,538,059
|
I am trying to download MNIST data in PyTorch using the following code:
```
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('data',
                   train=True,
                   download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=128, shuffle=True)
```
and it gives the following error.
```py
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to data/MNIST/raw/train-images-idx3-ubyte.gz
0it [00:00, ?it/s]
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
<ipython-input-2-2fee284dabb8> in <module>()
5 transform=transforms.Compose([
6 transforms.ToTensor(),
----> 7 transforms.Normalize((0.1307,), (0.3081,))
8 ])),
9 batch_size=128, shuffle=True)
11 frames
/usr/lib/python3.6/urllib/request.py in http_error_default(self, req, fp, code, msg, hdrs)
648 class HTTPDefaultErrorHandler(BaseHandler):
649 def http_error_default(self, req, fp, code, msg, hdrs):
--> 650 raise HTTPError(req.full_url, code, msg, hdrs, fp)
651
652 class HTTPRedirectHandler(BaseHandler):
HTTPError: HTTP Error 403: Forbidden
```
How do I solve this? The notebook was working before; I'm trying to rerun it, but I got this error.
|
2020/03/05
|
[
"https://Stackoverflow.com/questions/60538059",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4848812/"
] |
This is a new bug, reported here: <https://github.com/pytorch/vision/issues/1938>
See that thread for some potential workarounds until the issue is fixed in pytorch itself.
|
My workaround is: run a simple program on your local machine that downloads the MNIST dataset via the `torchvision.datasets` module, save a copy with `pickle`, and upload it to your Google Drive.
It is not a proper fix, but it is a viable and affordable workaround; hope it helps somehow.
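A sketch of the mechanics of that workaround (the `torchvision` lines are commented out because they require network access; a stand-in object is pickled instead, and the file name is illustrative):

```python
import pickle

# On the local machine, with network access, the real dataset would be:
# from torchvision import datasets
# mnist = datasets.MNIST('data', train=True, download=True)

# Stand-in object so this sketch runs without torchvision:
mnist = {"images": [[0] * 784 for _ in range(10)], "labels": list(range(10))}

with open("mnist_train.pkl", "wb") as f:
    pickle.dump(mnist, f)

# Later (e.g. in Colab, after uploading the file to Google Drive
# and mounting the drive), load it back:
with open("mnist_train.pkl", "rb") as f:
    restored = pickle.load(f)

print(len(restored["labels"]))  # 10
```

Note that unpickling a real `datasets.MNIST` object requires the same torchvision version on both machines, so pinning the version is advisable.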
|
23,080,960
|
Here I'm trying to create a pie chart using the **matplotlib** Python library. But the date labels overlap if the value "0.0" occurs multiple times.
My question is how I can display them separately.
Thanks.

This is what I tried:
```
from pylab import *
labels = [ "05-02-2014", "23-02-2014","07-02-2014","08-02-2014"]
values = [0, 0, 2, 10]
fig = plt.figure(figsize=(9.0, 6.10))
plt.pie(values, labels=labels, autopct='%1.1f%%', shadow=True)
plt.axis('equal')
show()
```
|
2014/04/15
|
[
"https://Stackoverflow.com/questions/23080960",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3270800/"
] |
You can adjust the label positions manually, although that results in a bit more code than you would want for such a simple request. You can detect groups of duplicate labels by examining the positions at which they are placed.
Here is an example with some random data replicating the occurrence of overlapping labels:
```
import matplotlib.pyplot as plt
import numpy as np
from collections import Counter
import datetime
# number slices of pie
num = 10
# generate some labels
dates = [datetime.datetime(2014,1,1) + datetime.timedelta(days=np.random.randint(1,20)) for i in range(num)]
labels = [d.strftime('%d-%m-%Y') for d in dates]
# generate some values
values = np.random.randint(2,10, num)
# force half of them to be zero
mask = np.random.choice(num, num // 2, replace=False)
values[mask] = 0
# pick some colors
colors = plt.cm.Blues(np.linspace(0,1,num))
fig, ax = plt.subplots(figsize=(9.0, 6.10), subplot_kw={'aspect': 1})
wedges, labels, pcts = ax.pie(values, colors=colors, labels=labels, autopct='%1.1f%%')
# find duplicate labels and the amount of duplicates
c = Counter([l.get_position() for l in labels])
dups = {key: val for key, val in c.items() if val > 1}
# degrees of spacing between duplicate labels
offset = np.deg2rad(3.)
# loop over any duplicate 'position'
for pos, n in dups.items():
    # select all labels with that position
    dup_labels = [l for l in labels if l.get_position() == pos]
    # calculate the angle with respect to the center of the pie
    theta = np.arctan2(pos[1], pos[0])
    # get the offsets
    offsets = np.linspace(-(n-1) * offset, (n-1) * offset, n)
    # loop over the duplicate labels
    for l, off in zip(dup_labels, offsets):
        lbl_radius = 1.3
        # calculate the new label positions
        newx = lbl_radius * np.cos(theta + off)
        newy = lbl_radius * np.sin(theta + off)
        l.set_position((newx, newy))
        # rotate the label
        rot = np.rad2deg(theta + off)
        # adjust the rotation so it's
        # never upside-down
        if rot > 90:
            rot += 180
        elif rot < -90:
            rot += 180
        # rotate and highlight the adjusted labels
        l.set_rotation(rot)
        l.set_ha('center')
        l.set_color('#aa0000')
```
I purposely only modified the overlapping labels to highlight the effect, but you could alter all labels in a similar way to create a uniform styling. The rotation makes it easier to automatically space them, but you could try alternate ways of placement.
Note that it only detects truly equal placements; if you had values of `[0, 0.00001, 2, 10]`, the labels would probably still overlap.

|
I am not sure if there is a way to adjust "labeldistance" for every element, but I could work around this in a slightly hacky way.
I added `explode = (0, 0.1, 0, 0)`:
```
from pylab import *
labels = [ "05-02-2014", "23-02-2014","07-02-2014","08-02-2014"]
values = [0, 0, 2, 10]
explode = (0, 0.1, 0, 0)
fig = plt.figure(figsize=(9.0, 6.10))
patches, texts = plt.pie(values, explode=explode, labels=labels,
                         startangle=90, radius=0.5)
plt.axis('equal')
plt.show()
```
**UPDATE**
This works for me; you may need to update pylab.
|
58,841,308
|
I need a domain validator and an email validator, i.e., to validate that both exist. The company I'm servicing has a website that validates this for them, ensuring they won't send email to a nonexistent mailbox. It would be an email marketing action anyway. They have something basic in Excel, but they want a service running directly, taking a list of information (transactional or otherwise) and checking it in batches, speeding up the process. The work is very similar to what this [site](https://tools.verifyemailaddress.io) does.
I would rather develop something similar in Python. I would like to know if such work is feasible and, if so, whether anyone could give me some references.
|
2019/11/13
|
[
"https://Stackoverflow.com/questions/58841308",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9403338/"
] |
There is a [documented](https://learn.microsoft.com/graph/api/channel-get-filesfolder?view=graph-rest-1.0&tabs=http) navigational property of the Channel resource called `filesFolder`. From the Graph v1.0 endpoint:
```xml
<EntityType Name="channel" BaseType="microsoft.graph.entity">
<Property Name="displayName" Type="Edm.String"/>
<Property Name="description" Type="Edm.String"/>
<Property Name="isFavoriteByDefault" Type="Edm.Boolean"/>
<Property Name="email" Type="Edm.String"/>
<Property Name="webUrl" Type="Edm.String"/>
<Property Name="membershipType" Type="microsoft.graph.channelMembershipType"/>
<NavigationProperty Name="messages" Type="Collection(microsoft.graph.chatMessage)" ContainsTarget="true"/>
<NavigationProperty Name="chatThreads" Type="Collection(microsoft.graph.chatThread)" ContainsTarget="true"/>
<NavigationProperty Name="tabs" Type="Collection(microsoft.graph.teamsTab)" ContainsTarget="true"/>
<NavigationProperty Name="members" Type="Collection(microsoft.graph.conversationMember)" ContainsTarget="true"/>
<NavigationProperty Name="filesFolder" Type="microsoft.graph.driveItem" ContainsTarget="true"/>
</EntityType>
```
You can call this using this template:
```
/v1.0/teams/{teamId}/channels/{channelId}/filesFolder
```
This will return the Drive associated with a Private Channel:
```json
{
"@odata.context": "https://graph.microsoft.com/v1.0/$metadata#teams('{teamsId}')/channels('{channelId}')/filesFolder/$entity",
"id": "{id}",
"createdDateTime": "0001-01-01T00:00:00Z",
"lastModifiedDateTime": "2019-11-13T16:49:13Z",
"name": "Private",
"webUrl": "https://{tenant}.sharepoint.com/sites/{team}-Private/Shared%20Documents/{channel}",
"size": 0,
"parentReference": {
"driveId": "{driveId}",
"driveType": "documentLibrary"
},
"fileSystemInfo": {
"createdDateTime": "2019-11-13T16:49:13Z",
"lastModifiedDateTime": "2019-11-13T16:49:13Z"
},
"folder": {
"childCount": 0
}
}
```
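A small helper that builds this request (the IDs and token are placeholders; actually issuing the call requires a valid OAuth access token and an HTTP client such as `requests`):

```python
def files_folder_request(team_id, channel_id, token):
    """Build the URL and headers for the channel filesFolder call."""
    url = ("https://graph.microsoft.com/v1.0/teams/"
           + team_id + "/channels/" + channel_id + "/filesFolder")
    headers = {"Authorization": "Bearer " + token}
    return url, headers

url, headers = files_folder_request("{teamId}", "{channelId}", "<token>")
print(url)
# https://graph.microsoft.com/v1.0/teams/{teamId}/channels/{channelId}/filesFolder
```

With `requests`, the call would then be `requests.get(url, headers=headers)`, and the `parentReference.driveId` field of the response identifies the private channel's drive for subsequent drive-item calls.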
|
Currently /filesFolder for Private Channels returns BadGateway
|
58,841,308
|
I need a domain validator and an email validator, i.e., to validate that both exist. The company I'm servicing has a website that validates this for them, ensuring they won't send email to a nonexistent mailbox. It would be an email marketing action anyway. They have something basic in Excel, but they want a service running directly, taking a list of information (transactional or otherwise) and checking it in batches, speeding up the process. The work is very similar to what this [site](https://tools.verifyemailaddress.io) does.
I would rather develop something similar in Python. I would like to know if such work is feasible and, if so, whether anyone could give me some references.
|
2019/11/13
|
[
"https://Stackoverflow.com/questions/58841308",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9403338/"
] |
There is a [documented](https://learn.microsoft.com/graph/api/channel-get-filesfolder?view=graph-rest-1.0&tabs=http) navigational property of the Channel resource called `filesFolder`. From the Graph v1.0 endpoint:
```xml
<EntityType Name="channel" BaseType="microsoft.graph.entity">
<Property Name="displayName" Type="Edm.String"/>
<Property Name="description" Type="Edm.String"/>
<Property Name="isFavoriteByDefault" Type="Edm.Boolean"/>
<Property Name="email" Type="Edm.String"/>
<Property Name="webUrl" Type="Edm.String"/>
<Property Name="membershipType" Type="microsoft.graph.channelMembershipType"/>
<NavigationProperty Name="messages" Type="Collection(microsoft.graph.chatMessage)" ContainsTarget="true"/>
<NavigationProperty Name="chatThreads" Type="Collection(microsoft.graph.chatThread)" ContainsTarget="true"/>
<NavigationProperty Name="tabs" Type="Collection(microsoft.graph.teamsTab)" ContainsTarget="true"/>
<NavigationProperty Name="members" Type="Collection(microsoft.graph.conversationMember)" ContainsTarget="true"/>
<NavigationProperty Name="filesFolder" Type="microsoft.graph.driveItem" ContainsTarget="true"/>
</EntityType>
```
You can call this using this template:
```
/v1.0/teams/{teamId}/channels/{channelId}/filesFolder
```
This will return the Drive associated with a Private Channel:
```json
{
"@odata.context": "https://graph.microsoft.com/v1.0/$metadata#teams('{teamsId}')/channels('{channelId}')/filesFolder/$entity",
"id": "{id}",
"createdDateTime": "0001-01-01T00:00:00Z",
"lastModifiedDateTime": "2019-11-13T16:49:13Z",
"name": "Private",
"webUrl": "https://{tenant}.sharepoint.com/sites/{team}-Private/Shared%20Documents/{channel}",
"size": 0,
"parentReference": {
"driveId": "{driveId}",
"driveType": "documentLibrary"
},
"fileSystemInfo": {
"createdDateTime": "2019-11-13T16:49:13Z",
"lastModifiedDateTime": "2019-11-13T16:49:13Z"
},
"folder": {
"childCount": 0
}
}
```
|
The issue you are having is that the drive and site for the private channel are never generated until you actually visit the channel in the Teams app. That one visit will trigger the creation of the drive and site. I'm stuck here myself, as I cannot trigger a private channel to create the SharePoint site and drive until I actually open the Teams app and visit the channel.
|
58,841,308
|
I need a domain validator and an email validator, i.e., something that validates that both exist. The company I'm servicing has a website that validates this for them, ensuring they won't send email to a nonexistent mailbox; it would be used for email marketing. They have something basic in Excel, but they want a running service that takes a list of addresses (bulk or transactional) and checks them in batches, speeding up the process. It is work very similar to what this [site](https://tools.verifyemailaddress.io) does.
I would rather develop something similar in Python. I would like to know whether such work is feasible and, if so, whether anyone could give me some references.
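A minimal stdlib-only sketch of the two checks (syntax, then a DNS lookup to see whether the domain resolves). Note that a resolving domain does not guarantee the mailbox exists; full verification needs an SMTP-level probe or a service like the one linked above:

```python
import re
import socket

# Deliberately simple syntax check; full RFC 5322 parsing is much more involved.
EMAIL_RE = re.compile(r"^[^@\s]+@([^@\s]+\.[^@\s]+)$")

def domain_resolves(domain: str) -> bool:
    """Return True if the domain has at least one DNS address record."""
    try:
        socket.getaddrinfo(domain, None)
        return True
    except socket.gaierror:
        return False

def validate_email(address: str) -> bool:
    """Check basic syntax, then check that the domain part resolves."""
    match = EMAIL_RE.match(address)
    if not match:
        return False
    return domain_resolves(match.group(1))
```

For batch processing, the same check can be mapped over a list with a thread pool, since the DNS lookups are I/O-bound.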
|
2019/11/13
|
[
"https://Stackoverflow.com/questions/58841308",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9403338/"
] |
The issue you are having is that the drive and site for a private channel are never generated until you actually visit the channel in the Teams app. That one visit triggers the creation of the drive and site. I'm stuck here myself, as I cannot trigger a private channel to create its SharePoint site and drive until I actually open the Teams app and visit the channel.
|
Currently /filesFolder for Private Channels returns BadGateway
|
27,554,484
|
I'm trying to use theano but I get an error when I import it.
I've installed cuda\_6.5.14\_linux\_64.run, and passed all the recommended tests in Chapter 6 of [this](http://developer.download.nvidia.com/compute/cuda/6_5/rel/docs/CUDA_Getting_Started_Linux.pdf) NVIDIA PDF.
Ultimately I want to be able to install pylearn2, but I get the exact same error as below when I try to compile it.
EDIT1: My theanorc looks like:
```
[cuda]
root = /usr/local/cuda-6.5
[global]
device = gpu
floatX=float32
```
If I replace `gpu` with `cpu`, the command `import theano` succeeds.
```
Python 2.7.8 |Anaconda 1.9.0 (64-bit)| (default, Aug 21 2014, 18:22:21)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://binstar.org
Imported NumPy 1.9.1, SciPy 0.14.0, Matplotlib 1.3.1
Type "scientific" for more details.
>>> import theano
Using gpu device 0: GeForce GTX 750 Ti
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/g/anaconda/lib/python2.7/site-packages/theano/__init__.py", line 92, in <module>
theano.sandbox.cuda.tests.test_driver.test_nvidia_driver1()
File "/home/g/anaconda/lib/python2.7/site-packages/theano/sandbox/cuda/tests/test_driver.py", line 28, in test_nvidia_driver1
profile=False)
File "/home/g/anaconda/lib/python2.7/site-packages/theano/compile/function.py", line 223, in function
profile=profile)
File "/home/g/anaconda/lib/python2.7/site-packages/theano/compile/pfunc.py", line 512, in pfunc
on_unused_input=on_unused_input)
File "/home/g/anaconda/lib/python2.7/site-packages/theano/compile/function_module.py", line 1312, in orig_function
defaults)
File "/home/g/anaconda/lib/python2.7/site-packages/theano/compile/function_module.py", line 1181, in create
_fn, _i, _o = self.linker.make_thunk(input_storage=input_storage_lists)
File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/link.py", line 434, in make_thunk
output_storage=output_storage)[:3]
File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/vm.py", line 847, in make_all
no_recycling))
File "/home/g/anaconda/lib/python2.7/site-packages/theano/sandbox/cuda/__init__.py", line 237, in make_thunk
compute_map, no_recycling)
File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/op.py", line 606, in make_thunk
output_storage=node_output_storage)
File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/cc.py", line 948, in make_thunk
keep_lock=keep_lock)
File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/cc.py", line 891, in __compile__
keep_lock=keep_lock)
File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/cc.py", line 1322, in cthunk_factory
key=key, fn=self.compile_cmodule_by_step, keep_lock=keep_lock)
File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/cmodule.py", line 996, in module_from_key
module = next(compile_steps)
File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/cc.py", line 1237, in compile_cmodule_by_step
preargs=preargs)
File "/home/g/anaconda/lib/python2.7/site-packages/theano/sandbox/cuda/nvcc_compiler.py", line 444, in compile_str
return dlimport(lib_filename)
File "/home/g/anaconda/lib/python2.7/site-packages/theano/gof/cmodule.py", line 284, in dlimport
rval = __import__(module_name, {}, {}, [module_name])
ImportError: ('The following error happened while compiling the node', GpuCAReduce{add}{1}(<CudaNdarrayType(float32, vector)>), '\n', '/home/g/.theano/compiledir_Linux-3.11.0-26-generic-x86_64-with-debian-wheezy-sid-x86_64-2.7.8-64/tmpWYqQw5/7173b40d34b57da0645a57198c96dbcc.so: undefined symbol: __fatbinwrap_66_tmpxft_00004bf1_00000000_12_cuda_device_runtime_compute_50_cpp1_ii_5f6993ef', '[GpuCAReduce{add}{1}(<CudaNdarrayType(float32, vector)>)]')
```
|
2014/12/18
|
[
"https://Stackoverflow.com/questions/27554484",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2423116/"
] |
I encountered exactly the same problem.
My solution is to replace cuda-6.5 with cuda-5.5, and everything works fine.
|
We also saw this error. We found that putting /usr/local/cuda-6.5/bin in $PATH seemed to fix it (even with the root = ... line in .theanorc).
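A sketch of that fix; the exact install prefix is an assumption and should match your CUDA version:

```shell
# Prepend the CUDA toolkit's bin directory (which contains nvcc) to PATH.
export PATH=/usr/local/cuda-6.5/bin:$PATH

# Optional: make the CUDA runtime libraries findable as well.
export LD_LIBRARY_PATH=/usr/local/cuda-6.5/lib64:${LD_LIBRARY_PATH:-}
```

Adding these lines to `~/.bashrc` makes the change persistent across shells.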
|
61,264,563
|
When I import numpy and pandas in Jupyter it gives an error. The same happens in Spyder, but in Spyder it works after starting a new kernel.
```
import numpy as np
```
---
```
NameError Traceback (most recent call last)
<ipython-input-1-0aa0b027fcb6> in <module>
----> 1 import numpy as np
~\numpy.py in <module>
1 from numpy import*
2
----> 3 arr = array([1,2,3,4])
NameError: name 'array' is not defined
```
|
2020/04/17
|
[
"https://Stackoverflow.com/questions/61264563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13287554/"
] |
This is showing a "NameError", which is due to `arr = array([1,2,3,4])`. You should try something like this: `arr = np.array([1,2,3,4])`.
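A minimal runnable version of that fix:

```python
import numpy as np

arr = np.array([1, 2, 3, 4])  # np.array, not a bare array()
print(arr.sum())  # → 10
```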
|
Try this:
```
arr=np.array([1,2,3,4])
```
|
61,264,563
|
When I import numpy and pandas in Jupyter it gives an error. The same happens in Spyder, but in Spyder it works after starting a new kernel.
```
import numpy as np
```
---
```
NameError Traceback (most recent call last)
<ipython-input-1-0aa0b027fcb6> in <module>
----> 1 import numpy as np
~\numpy.py in <module>
1 from numpy import*
2
----> 3 arr = array([1,2,3,4])
NameError: name 'array' is not defined
```
|
2020/04/17
|
[
"https://Stackoverflow.com/questions/61264563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13287554/"
] |
I found the error. It was a very bad mistake: among my files there was a program named numpy.py, so while importing numpy, Python was accessing that file, not the numpy module. I deleted it and everything worked fine.
|
Try this:
```
arr=np.array([1,2,3,4])
```
|
61,264,563
|
When I import numpy and pandas in Jupyter it gives an error. The same happens in Spyder, but in Spyder it works after starting a new kernel.
```
import numpy as np
```
---
```
NameError Traceback (most recent call last)
<ipython-input-1-0aa0b027fcb6> in <module>
----> 1 import numpy as np
~\numpy.py in <module>
1 from numpy import*
2
----> 3 arr = array([1,2,3,4])
NameError: name 'array' is not defined
```
|
2020/04/17
|
[
"https://Stackoverflow.com/questions/61264563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13287554/"
] |
This is showing a "NameError", which is due to `arr = array([1,2,3,4])`. You should try something like this: `arr = np.array([1,2,3,4])`.
|
As you are using numpy as `np`, the following syntax is needed to create an array: `arr = np.array([1, 2, 3])`
|
61,264,563
|
When I import numpy and pandas in Jupyter it gives an error. The same happens in Spyder, but in Spyder it works after starting a new kernel.
```
import numpy as np
```
---
```
NameError Traceback (most recent call last)
<ipython-input-1-0aa0b027fcb6> in <module>
----> 1 import numpy as np
~\numpy.py in <module>
1 from numpy import*
2
----> 3 arr = array([1,2,3,4])
NameError: name 'array' is not defined
```
|
2020/04/17
|
[
"https://Stackoverflow.com/questions/61264563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13287554/"
] |
I found the error. It was a very bad mistake: among my files there was a program named numpy.py, so while importing numpy, Python was accessing that file, not the numpy module. I deleted it and everything worked fine.
|
As you are using numpy as `np`, the following syntax is needed to create an array: `arr = np.array([1, 2, 3])`
|
61,264,563
|
When I import numpy and pandas in Jupyter it gives an error. The same happens in Spyder, but in Spyder it works after starting a new kernel.
```
import numpy as np
```
---
```
NameError Traceback (most recent call last)
<ipython-input-1-0aa0b027fcb6> in <module>
----> 1 import numpy as np
~\numpy.py in <module>
1 from numpy import*
2
----> 3 arr = array([1,2,3,4])
NameError: name 'array' is not defined
```
|
2020/04/17
|
[
"https://Stackoverflow.com/questions/61264563",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13287554/"
] |
I found the error. It was a very bad mistake: among my files there was a program named numpy.py, so while importing numpy, Python was accessing that file, not the numpy module. I deleted it and everything worked fine.
|
This is showing a "NameError", which is due to `arr = array([1,2,3,4])`. You should try something like this: `arr = np.array([1,2,3,4])`.
|
73,230,522
|
Hi, I am new to Python and I have a simple question: I have a list consisting of some user info, and I want to know how I can write a program to find and update some of that info.
```
user_list = [
{'name': 'Alizom_12',
'gender': 'f',
'age': 34,
'active_day': 170},
{'name': 'Xzt4f',
'gender': None,
'age': None,
'active_day': 1152},
{'name': 'TomZ',
'gender': 'm',
'age': 24,
'active_day': 15},
{'name': 'Zxd975',
'gender': None,
'age': 44,
'active_day': 752},
]
```
What I did to find a user is the following, but I want to change it to display the user's info rather than just printing that the user exists:
```
def find_user(user_name):
for items in user_list:
if items['name'] == user_name:
return f'{user_name} exists. '
return f'{user_name} does not exists'
```
Also for updating the user info:
```
def update_user_info(user_name, **kw):
user_list.update({'name': name, 'gender': gender, 'age': age, 'active_day': active_day})
return user_list
```
```
print(find_user('Alizom_12'))
update_user_info('Alizom_12', **{'age': 29})
print(find_user('Alizom_12'))
```
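The two functions in the question don't quite do this (a list has no `.update()` method, and `find_user` only reports existence). Here is one hedged sketch, using a shortened copy of the question's data, that returns the matching dict and updates it in place:

```python
user_list = [
    {'name': 'Alizom_12', 'gender': 'f', 'age': 34, 'active_day': 170},
    {'name': 'Xzt4f', 'gender': None, 'age': None, 'active_day': 1152},
]

def find_user(user_name):
    """Return the user's dict, or None if no such user exists."""
    for user in user_list:
        if user['name'] == user_name:
            return user
    return None

def update_user_info(user_name, **kw):
    """Merge the keyword arguments into the matching user's dict."""
    user = find_user(user_name)
    if user is not None:
        user.update(kw)  # dicts do have .update(); the surrounding list does not
    return user

print(find_user('Alizom_12'))
update_user_info('Alizom_12', age=29)
print(find_user('Alizom_12'))
```

Because the dicts are mutated in place, the change is visible through `user_list` itself as well.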
|
2022/08/04
|
[
"https://Stackoverflow.com/questions/73230522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19686631/"
] |
Even if you accepted the remote version you still created a merge commit which basically contains the information that the changes you made are integrated in the branch. The merge commit will have two parents: the commit you pulled and your local one.
This new commit needs pushing.
You'll see the commit when you inspect the log using `git log` or your preferred visual tool for inspecting the commit history.
|
If you haven't set `rebase=true` in `.gitconfig`, please set it up like this:
```
[pull]
rebase = true
```
When you have conflicts you should resolve them and then force-push:
```
git push -f
```
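The config fragment above can also be set from the command line; a sketch, demonstrated in a throwaway repository (in your own project just run the `git config` line from the repository root):

```shell
tmp=$(mktemp -d) && git init -q "$tmp/demo" && cd "$tmp/demo"

# Make `git pull` rebase instead of merge for this repository:
git config pull.rebase true

# Or for every repository of the current user:
# git config --global pull.rebase true

git config pull.rebase   # prints: true
```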
|
73,230,522
|
Hi, I am new to Python and I have a simple question: I have a list consisting of some user info, and I want to know how I can write a program to find and update some of that info.
```
user_list = [
{'name': 'Alizom_12',
'gender': 'f',
'age': 34,
'active_day': 170},
{'name': 'Xzt4f',
'gender': None,
'age': None,
'active_day': 1152},
{'name': 'TomZ',
'gender': 'm',
'age': 24,
'active_day': 15},
{'name': 'Zxd975',
'gender': None,
'age': 44,
'active_day': 752},
]
```
What I did to find a user is the following, but I want to change it to display the user's info rather than just printing that the user exists:
```
def find_user(user_name):
for items in user_list:
if items['name'] == user_name:
return f'{user_name} exists. '
return f'{user_name} does not exists'
```
Also for updating the user info:
```
def update_user_info(user_name, **kw):
user_list.update({'name': name, 'gender': gender, 'age': age, 'active_day': active_day})
return user_list
```
```
print(find_user('Alizom_12'))
update_user_info('Alizom_12', **{'age': 29})
print(find_user('Alizom_12'))
```
|
2022/08/04
|
[
"https://Stackoverflow.com/questions/73230522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19686631/"
] |
This is indeed one reason people use rebase.
Remember that each Git commit:
* is numbered: it has a raw hash ID like `4af7188bc97f70277d0f10d56d5373022b1fa385`, unique to that one particular commit;
* is completely read-only: no part of `4af7blahblah` can ever change;
* is mostly permanent: once you *have* `4af7blahblah` you will still have it, but more interesting will be whether or not you can *find* it and whether you *see* it;
* contains both a full snapshot of every file and some metadata; and
* is found, in Git, by starting with a name—a branch or tag name for instance—that holds the hash ID of the *latest* commit and then working backwards.
That last bullet point is **important:** you *find* your commits using a branch name. *Every Git repository has its own private branch names.* You'll have your Git show some other Git software some of your branch names now and then, and they'll show you theirs now and then, but each repository has *its own* branch names. That's why, when you `git clone` from GitHub or wherever, you get a bunch of `origin/*` names. Those are *your copies* of *their branch names*.
(Compare: the name "Paul" is very common, and you can't assume that two people both named "Paul" are in fact the same. You need to qualify it: "Paul Aberly" vs "Paul Brumblefee", or in this case, "branch feature/foo in repository A" vs "branch feature/foo in repository B".)
When you have been working for a while in some branch of your own, other people have been working in *their* branches, in *their* repositories. Eventually they add new commits to the clone over on GitHub or wherever. You have, in your repository, at this point:
```
I--J <-- your-feature (HEAD)
/
...--G--H <-- origin/master
```
where each uppercase letter stands in for a commit. You've made two new commits `I` and `J`.
But now you run `git fetch`—the first step of a `git pull` command—to fetch any new commits from `origin`, and there are some on their `master`, which is your `origin/master`. So your Git adds those to your own repository and updates your `origin/master` to remember them:
```
I--J <-- your-feature (HEAD)
/
...--G--H
\
K--L <-- origin/master
```
When you ran `git pull`, you actually told your Git to run *two* Git commands:
1. `git fetch`, which did the above;
2. a second command of your choice (which you must choose *before* you enter the `git pull` command): if you don't choose one, you get a default `git merge`.
This *second* command has to *combine the work you did* with the work they did. This work-combining process requires making a new commit, so let's draw that in:
```
I--J
/ \
...--G--H M <-- your-feature (HEAD)
\ /
K--L <-- origin/master
```
Your new commit `M` connects your work, in commits `I-J`, to their work, in commits `K-L`, through the common starting point, commit `H`. (Git is all about the *commits*. The names are just tricks we use to find commits, without having to type in commit hash IDs.)
To make your work fit in with their work, commit `M` modifies your code in `J` to match their changes in `L`. That is, to your changes, Git has added their changes. Or you can look at it another way: to make their work fit in with your work, commit `M` modifies their code in `L` to match your changes in `J`.
You mention that there were conflicts:
>
> In all conflicts I chose to accept the remote repo version so those files are exactly the same as the remote.
>
>
>
Blindly accepting their version is rarely right, but if you examined these overlapping (colliding) changes closely and determined that your own change was no longer relevant and should be thrown away, then that was the right thing to do.
The *right* thing to do is to look carefully and determine what the *right final file version* should look like. Sometimes that means "use their version, throw out all my work", sometimes it means "use my version, throw out all their work" and sometimes it means "take this part of my work, that part of their work, and use this new third thing I just came up with too". **This requires careful judgement** and usually careful testing as well. You cannot omit this step or use a blanket rule like "theirs is always better" unless you have *very* strong tests. Whatever you put in here, Git is going to believe that this is the correct result, so make sure it *is* the correct result!
In any case, commit `M` *makes changes to the work you did* so that you have whatever snapshot went into new commit `M`. It's therefore part of your branch and *must be included* in the commits you send as part of your Pull Request. **But you do have one other option.**
### The other option: regretting and taking back your original commits
Let's go back to the pre-`git merge` picture, after `git fetch` but before the second command that `git pull` ran:
```
I--J <-- your-feature (HEAD)
/
...--G--H
\
K--L <-- origin/master
```
If you used `git fetch` *instead of* `git pull`, Git would stop here, and you could now inspect things. You could run a *test* merge if you like:
```
git merge origin/master
```
and if there are merge conflicts, you could inspect those conflicts and say to yourself: *Oh, hey, look, the problem is that the changes that I made in commit `I` are now useless! They fixed the problem a different way. If only I had not made commit `I` after all, I could just use the changes from commit `J`.*
This is where `git rebase` comes in. With `git rebase`, we say to ourselves:
* I made a bunch of commits, to do a bunch of work.
* Some parts of this work are good, and some parts of this are bad.
* I'd like to redo my work, and make new and improved commits and forget the original commits.
This is what `git rebase` does. The rebase command comes in a dizzying variety of flavors, e.g., `git rebase --interactive` and `git rebase --autosquash` and other fancy modes, but fundamentally it's about saying to yourself that you want to *replace* your old-and-lousy commits with new-and-improved commits.
Because Git is about *commits* and commits, with their unique hash IDs, get *shared*, you should be careful only to do this with commits that *aren't already shared*, in general. (In some very specific cases, you can do it with commits that *are* shared, as long as the other people sharing them are aware of what you're doing.)
To use `git rebase` is tricky, but knowing that it's going to copy commits, one commit at a time, helps. Here's that diagram yet again:
```
I--J <-- your-feature (HEAD)
/
...--G--H
\
K--L <-- origin/master
```
Suppose we now make a *new* branch, `new-and-improved`, pointing at commit `L`, and switch to it:
```
git switch -c new-and-improved --no-track origin/master
```
for instance (the `--no-track` is to keep from setting `origin/master` as the upstream of this new branch):
```
I--J <-- your-feature
/
...--G--H
\
K--L <-- new-and-improved (HEAD), origin/master
```
Now we copy *part of* commit `I`, or perhaps if commit `I` is entirely bad, deliberately don't copy it at all. Let's say we copy just one of the changes, so that we get a new commit that's like `I` but smaller:
```
I--J <-- your-feature
/
...--G--H
\
K--L <-- origin/master
\
i <-- new-and-improved (HEAD)
```
We drew it as a lowercase `i` because it's so much smaller than the original `I`, but is otherwise a lot like `I`. We'll even re-use the commit message, perhaps.
Then we copy commit `J` wholesale, without much change, to make a new commit we'll call `J'` to show how much it is like the original `J`:
```
I--J <-- your-feature
/
...--G--H
\
K--L <-- origin/master
\
i--J' <-- new-and-improved (HEAD)
```
Now we come to the last trick that `git rebase` does for us. It grabs the *name* `your-feature`—the branch you were on when you started all this—and makes that *name* point to commit `J'`, where you are now: The temporary branch disappears and we have:
```
I--J ???
/
...--G--H
\
K--L <-- origin/master
\
i--J' <-- your-feature (HEAD)
```
If nobody else has commits `I` and `J`, the only way you can find them now is if you memorized their hash IDs. You won't see them! Nobody else can see them either, because you're the only one who *has* these two commits.
So if the final result, after rebasing, is all good, *this* is what you want to send to GitHub to make into your Pull Request on GitHub. You can now:
```
git push origin your-feature
```
(or whatever name it has) and make the PR from there. Nobody has to know about your original `I-J` commits.
If you *ever* sent `I-J` to some other Git repository, make sure *no one else is using them* before you do this! It's very hard to get rid of a commit once it spreads: they're like viruses, infecting Git repositories. When two Git repositories meet up and one has some name or names that locate these commits, the one that has the commits can send them (if in "send commits" mode) to the one that doesn't, and then the commits have been copied and can spread from *that* repository to more repositories.
Other than this caveat about being careful not to spread bad commits like they were COVID, rebase is generally pretty nice. But it can be hard to use: be *very good* at regular `git merge` before you start in on `git rebase`, because *every commit you copy* is a mini-`git merge` of sorts.
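The copy-commits behaviour described above can be watched in a throwaway repository. Everything below is a self-contained sketch: no remote is involved, so the local default branch here plays the role of `origin/master`, and the commit messages reuse the letter names from the diagrams:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email you@example.com && git config user.name you

echo base > f.txt && git add f.txt && git commit -qm "H"
main=$(git symbolic-ref --short HEAD)   # default branch name varies by git version

git checkout -q -b your-feature
echo feature > g.txt && git add g.txt && git commit -qm "J"

git checkout -q "$main"
echo upstream > h.txt && git add h.txt && git commit -qm "K"

git checkout -q your-feature
git rebase -q "$main"                   # J is copied on top of K as J'

git log --format=%s                     # prints: J K H (newest first)
```

After the rebase, `your-feature` contains the upstream commit `K`, topped by the copied `J'`; the original `J` is no longer reachable from any branch name.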
|
Even if you accepted the remote version you still created a merge commit which basically contains the information that the changes you made are integrated in the branch. The merge commit will have two parents: the commit you pulled and your local one.
This new commit needs pushing.
You'll see the commit when you inspect the log using `git log` or your preferred visual tool for inspecting the commit history.
|
73,230,522
|
Hi, I am new to Python and I have a simple question: I have a list consisting of some user info, and I want to know how I can write a program to find and update some of that info.
```
user_list = [
{'name': 'Alizom_12',
'gender': 'f',
'age': 34,
'active_day': 170},
{'name': 'Xzt4f',
'gender': None,
'age': None,
'active_day': 1152},
{'name': 'TomZ',
'gender': 'm',
'age': 24,
'active_day': 15},
{'name': 'Zxd975',
'gender': None,
'age': 44,
'active_day': 752},
]
```
What I did to find a user is the following, but I want to change it to display the user's info rather than just printing that the user exists:
```
def find_user(user_name):
for items in user_list:
if items['name'] == user_name:
return f'{user_name} exists. '
return f'{user_name} does not exists'
```
Also for updating the user info:
```
def update_user_info(user_name, **kw):
user_list.update({'name': name, 'gender': gender, 'age': age, 'active_day': active_day})
return user_list
```
```
print(find_user('Alizom_12'))
update_user_info('Alizom_12', **{'age': 29})
print(find_user('Alizom_12'))
```
|
2022/08/04
|
[
"https://Stackoverflow.com/questions/73230522",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/19686631/"
] |
This is indeed one reason people use rebase.
Remember that each Git commit:
* is numbered: it has a raw hash ID like `4af7188bc97f70277d0f10d56d5373022b1fa385`, unique to that one particular commit;
* is completely read-only: no part of `4af7blahblah` can ever change;
* is mostly permanent: once you *have* `4af7blahblah` you will still have it, but more interesting will be whether or not you can *find* it and whether you *see* it;
* contains both a full snapshot of every file and some metadata; and
* is found, in Git, by starting with a name—a branch or tag name for instance—that holds the hash ID of the *latest* commit and then working backwards.
That last bullet point is **important:** you *find* your commits using a branch name. *Every Git repository has its own private branch names.* You'll have your Git show some other Git software some of your branch names now and then, and they'll show you theirs now and then, but each repository has *its own* branch names. That's why, when you `git clone` from GitHub or wherever, you get a bunch of `origin/*` names. Those are *your copies* of *their branch names*.
(Compare: the name "Paul" is very common, and you can't assume that two people both named "Paul" are in fact the same. You need to qualify it: "Paul Aberly" vs "Paul Brumblefee", or in this case, "branch feature/foo in repository A" vs "branch feature/foo in repository B".)
When you have been working for a while in some branch of your own, other people have been working in *their* branches, in *their* repositories. Eventually they add new commits to the clone over on GitHub or wherever. You have, in your repository, at this point:
```
I--J <-- your-feature (HEAD)
/
...--G--H <-- origin/master
```
where each uppercase letter stands in for a commit. You've made two new commits `I` and `J`.
But now you run `git fetch`—the first step of a `git pull` command—to fetch any new commits from `origin`, and there are some on their `master`, which is your `origin/master`. So your Git adds those to your own repository and updates your `origin/master` to remember them:
```
I--J <-- your-feature (HEAD)
/
...--G--H
\
K--L <-- origin/master
```
When you ran `git pull`, you actually told your Git to run *two* Git commands:
1. `git fetch`, which did the above;
2. a second command of your choice (which you must choose *before* you enter the `git pull` command): if you don't choose one, you get a default `git merge`.
This *second* command has to *combine the work you did* with the work they did. This work-combining process requires making a new commit, so let's draw that in:
```
I--J
/ \
...--G--H M <-- your-feature (HEAD)
\ /
K--L <-- origin/master
```
Your new commit `M` connects your work, in commits `I-J`, to their work, in commits `K-L`, through the common starting point, commit `H`. (Git is all about the *commits*. The names are just tricks we use to find commits, without having to type in commit hash IDs.)
To make your work fit in with their work, commit `M` modifies your code in `J` to match their changes in `L`. That is, to your changes, Git has added their changes. Or you can look at it another way: to make their work fit in with your work, commit `M` modifies their code in `L` to match your changes in `J`.
You mention that there were conflicts:
>
> In all conflicts I chose to accept the remote repo version so those files are exactly the same as the remote.
>
>
>
Blindly accepting their version is rarely right, but if you examined these overlapping (colliding) changes closely and determined that your own change was no longer relevant and should be thrown away, then that was the right thing to do.
The *right* thing to do is to look carefully and determine what the *right final file version* should look like. Sometimes that means "use their version, throw out all my work", sometimes it means "use my version, throw out all their work" and sometimes it means "take this part of my work, that part of their work, and use this new third thing I just came up with too". **This requires careful judgement** and usually careful testing as well. You cannot omit this step or use a blanket rule like "theirs is always better" unless you have *very* strong tests. Whatever you put in here, Git is going to believe that this is the correct result, so make sure it *is* the correct result!
In any case, commit `M` *makes changes to the work you did* so that you have whatever snapshot went into new commit `M`. It's therefore part of your branch and *must be included* in the commits you send as part of your Pull Request. **But you do have one other option.**
### The other option: regretting and taking back your original commits
Let's go back to the pre-`git merge` picture, after `git fetch` but before the second command that `git pull` ran:
```
I--J <-- your-feature (HEAD)
/
...--G--H
\
K--L <-- origin/master
```
If you used `git fetch` *instead of* `git pull`, Git would stop here, and you could now inspect things. You could run a *test* merge if you like:
```
git merge origin/master
```
and if there are merge conflicts, you could inspect those conflicts and say to yourself: *Oh, hey, look, the problem is that the changes that I made in commit `I` are now useless! They fixed the problem a different way. If only I had not made commit `I` after all, I could just use the changes from commit `J`.*
This is where `git rebase` comes in. With `git rebase`, we say to ourselves:
* I made a bunch of commits, to do a bunch of work.
* Some parts of this work are good, and some parts of this are bad.
* I'd like to redo my work, and make new and improved commits and forget the original commits.
This is what `git rebase` does. The rebase command comes in a dizzying variety of flavors, e.g., `git rebase --interactive` and `git rebase --autosquash` and other fancy modes, but fundamentally it's about saying to yourself that you want to *replace* your old-and-lousy commits with new-and-improved commits.
Because Git is about *commits* and commits, with their unique hash IDs, get *shared*, you should be careful only to do this with commits that *aren't already shared*, in general. (In some very specific cases, you can do it with commits that *are* shared, as long as the other people sharing them are aware of what you're doing.)
To use `git rebase` is tricky, but knowing that it's going to copy commits, one commit at a time, helps. Here's that diagram yet again:
```
I--J <-- your-feature (HEAD)
/
...--G--H
\
K--L <-- origin/master
```
Suppose we now make a *new* branch, `new-and-improved`, pointing at commit `L`, and switch to it:
```
git switch -c new-and-improved --no-track origin/master
```
for instance (the `--no-track` is to keep from setting `origin/master` as the upstream of this new branch):
```
I--J <-- your-feature
/
...--G--H
\
K--L <-- new-and-improved (HEAD), origin/master
```
Now we copy *part of* commit `I`, or perhaps if commit `I` is entirely bad, deliberately don't copy it at all. Let's say we copy just one of the changes, so that we get a new commit that's like `I` but smaller:
```
I--J <-- your-feature
/
...--G--H
\
K--L <-- origin/master
\
i <-- new-and-improved (HEAD)
```
We drew it as a lowercase `i` because it's so much smaller than the original `I`, but is otherwise a lot like `I`. We'll even re-use the commit message, perhaps.
Then we copy commit `J` wholesale, without much change, to make a new commit we'll call `J'` to show how much it is like the original `J`:
```
I--J <-- your-feature
/
...--G--H
\
K--L <-- origin/master
\
i--J' <-- new-and-improved (HEAD)
```
Now we come to the last trick that `git rebase` does for us. It grabs the *name* `your-feature`—the branch you were on when you started all this—and makes that *name* point to commit `J'`, where you are now: The temporary branch disappears and we have:
```
I--J ???
/
...--G--H
\
K--L <-- origin/master
\
i--J' <-- your-feature (HEAD)
```
If nobody else has commits `I` and `J`, the only way you can find them now is if you memorized their hash IDs. You won't see them! Nobody else can see them either, because you're the only one who *has* these two commits.
So if the final result, after rebasing, is all good, *this* is what you want to send to GitHub to make into your Pull Request on GitHub. You can now:
```
git push origin your-feature
```
(or whatever name it has) and make the PR from there. Nobody has to know about your original `I-J` commits.
If you *ever* sent `I-J` to some other Git repository, make sure *no one else is using them* before you do this! It's very hard to get rid of a commit once it spreads: they're like viruses, infecting Git repositories. When two Git repositories meet up and one has some name or names that locate these commits, the one that has the commits can send them (if in "send commits" mode) to the one that doesn't, and then the commits have been copied and can spread from *that* repository to more repositories.
Other than this caveat about being careful not to spread bad commits like they were COVID, rebase is generally pretty nice. But it can be hard to use: be *very good* at regular `git merge` before you start in on `git rebase`, because *every commit you copy* is a mini-`git merge` of sorts.
|
If you haven't set `rebase=true` in `.gitconfig`, please set it up like this:
```
[pull]
rebase = true
```
When you hit conflicts, resolve them and then force-push:
```
git push -f
```
|
16,536,101
|
I read this on Python tutorial: (<http://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-files>)
>
> Python on Windows makes a distinction between text and binary files;
> the end-of-line characters in text files are automatically altered slightly
> when data is read or written. This behind-the-scenes modification to file
> data is fine for ASCII text files, but it’ll corrupt binary data like that
> in JPEG or EXE files. Be very careful to use binary mode when reading and
> writing such files.
>
>
>
I don't quite understand how 'end-of-line characters in text files are altered' will 'corrupt binary data'.
Because I feel binary data don't have such things like end-of-line.
Can somebody explain more of this paragraph for me? It's making me feel like Python doesn't welcome binary files.
|
2013/05/14
|
[
"https://Stackoverflow.com/questions/16536101",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1769958/"
] |
You just have to take care to open files on windows as binary (`open(filename, "rb")`) and not as text files. After that there is no problem using the data.
In particular, the end-of-line on Windows is `'\r\n'`. If you read a binary file as a text file and write it back out, single `'\n'` characters are transformed into `'\r\n'` sequences. If you open the files as binary (for both reading and writing) there is **no** such problem.
Python is capable of dealing with binary data, and you would have to take this kind of care in any language on Windows, not just in Python (but the developers of Python are friendly enough to warn you of possible OS problems). In systems like Linux, where the end-of-line is a single character, this distinction exists as well, but is less likely to cause a problem when reading/writing binary data as text (i.e. without the `b` option when opening files).
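A minimal sketch of the binary-safe round trip described above (standard library only; Python 3 syntax shown, but `open(..., "rb")`/`"wb"` behaves the same way in Python 2):

```python
import os
import tempfile

# Bytes written with "wb" come back unchanged with "rb" on every OS,
# because no newline translation is applied in binary mode.
data = bytes([0, 10, 13, 10, 255])  # includes \n (10) and \r (13)

fd, path = tempfile.mkstemp()
os.close(fd)
try:
    with open(path, "wb") as f:   # "wb": write binary, no translation
        f.write(data)
    with open(path, "rb") as f:   # "rb": read binary, no translation
        result = f.read()
finally:
    os.remove(path)

assert result == data  # byte-for-byte identical
```

The same bytes opened in text mode on Windows would be altered, which is exactly the corruption the tutorial warns about.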
|
>
> I feel binary data don't have such things like end-of-line.
>
>
>
Binary files can have ANY POSSIBLE byte value in them, including the byte 10, which is the character \n. You do not want Python implicitly converting any bytes in a binary file to something else. Python has no idea it is reading a binary file unless you tell it so. And when Python writes a text file on Windows, it automatically converts each \n into the OS's newline sequence, \r\n (and collapses \r\n back to \n when reading).
That is the way things work in all computer programming languages.
Another way to think about it is: a file is just a long series of bytes (8 bits). A byte is just an integer. And a byte can be any integer. If a byte happens to be the integer 10, that is also the ascii code for the character \n. If the bytes in the file represent binary data, you don't want Python to read in 10 and convert it to two bytes: 13 and 10. Usually when you read binary data, you want to read, say, the first 2 bytes which represents a number, then the next 4 bytes which represent another number, etc.. Obviously, if python suddenly converts one of the bytes to two bytes, that will cause two problems: 1) It alters the data, 2) All your data boundaries will be messed up.
An example: suppose the first byte of a file is supposed to represent a dog's weight, and the byte's value is 10. Then the next byte is supposed to represent the dog's age, and its value is 1. If Python converts the 10, which is the ascii code for \n, into the two bytes 13 and 10 (\r\n), then the data on disk will look like:
13 10 1
And when you extract the second byte for the dog's age, you get 10--not 1.
We often say a file contains 'characters' but that is patently false. Computers cannot store characters; they can only store numbers. So a file is just a long series of numbers. If you tell python to treat those numbers as ascii codes, which represent characters, then python will give you text.
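The fixed-offset field reading described above can be sketched with the standard `struct` module, reusing the dog weight/age example (Python 3 bytes literals shown; the field layout is just this answer's two-byte record):

```python
import struct

# A 2-field binary record: weight then age, one unsigned byte each.
record = bytes([10, 1])
weight, age = struct.unpack("BB", record)
assert (weight, age) == (10, 1)

# If newline translation expands the single byte 10 (\n) into the
# two-byte \r\n sequence, the record grows and every later field shifts:
translated = record.replace(b"\n", b"\r\n")   # b'\r\n\x01'
assert struct.unpack("BB", translated[:2]) == (13, 10)  # the age field is gone
```

Both problems from the answer show up at once: the data is altered, and the field boundaries no longer line up.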
|
16,536,101
|
I read this on Python tutorial: (<http://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-files>)
>
> Python on Windows makes a distinction between text and binary files;
> the end-of-line characters in text files are automatically altered slightly
> when data is read or written. This behind-the-scenes modification to file
> data is fine for ASCII text files, but it’ll corrupt binary data like that
> in JPEG or EXE files. Be very careful to use binary mode when reading and
> writing such files.
>
>
>
I don't quite understand how 'end-of-line characters in text files are altered' will 'corrupt binary data'.
Because I feel binary data don't have such things like end-of-line.
Can somebody explain more of this paragraph for me? It's making me feel like Python doesn't welcome binary files.
|
2013/05/14
|
[
"https://Stackoverflow.com/questions/16536101",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1769958/"
] |
>
> I feel binary data don't have such things like end-of-line.
>
>
>
Binary files can have ANY POSSIBLE byte value in them, including the byte 10, which is the character \n. You do not want Python implicitly converting any bytes in a binary file to something else. Python has no idea it is reading a binary file unless you tell it so. And when Python writes a text file on Windows, it automatically converts each \n into the OS's newline sequence, \r\n (and collapses \r\n back to \n when reading).
That is the way things work in all computer programming languages.
Another way to think about it is: a file is just a long series of bytes (8 bits). A byte is just an integer. And a byte can be any integer. If a byte happens to be the integer 10, that is also the ascii code for the character \n. If the bytes in the file represent binary data, you don't want Python to read in 10 and convert it to two bytes: 13 and 10. Usually when you read binary data, you want to read, say, the first 2 bytes which represents a number, then the next 4 bytes which represent another number, etc.. Obviously, if python suddenly converts one of the bytes to two bytes, that will cause two problems: 1) It alters the data, 2) All your data boundaries will be messed up.
An example: suppose the first byte of a file is supposed to represent a dog's weight, and the byte's value is 10. Then the next byte is supposed to represent the dog's age, and its value is 1. If Python converts the 10, which is the ascii code for \n, into the two bytes 13 and 10 (\r\n), then the data on disk will look like:
13 10 1
And when you extract the second byte for the dog's age, you get 10--not 1.
We often say a file contains 'characters' but that is patently false. Computers cannot store characters; they can only store numbers. So a file is just a long series of numbers. If you tell python to treat those numbers as ascii codes, which represent characters, then python will give you text.
|
I suppose the "slightly alter" in the Python manual means the conversion of Unix end-of-line characters to Windows end-of-line characters. Because this is done only on Windows, Unix and Linux don't have this trouble.
|
16,536,101
|
I read this on Python tutorial: (<http://docs.python.org/2/tutorial/inputoutput.html#reading-and-writing-files>)
>
> Python on Windows makes a distinction between text and binary files;
> the end-of-line characters in text files are automatically altered slightly
> when data is read or written. This behind-the-scenes modification to file
> data is fine for ASCII text files, but it’ll corrupt binary data like that
> in JPEG or EXE files. Be very careful to use binary mode when reading and
> writing such files.
>
>
>
I don't quite understand how 'end-of-line characters in text files are altered' will 'corrupt binary data'.
Because I feel binary data don't have such things like end-of-line.
Can somebody explain more of this paragraph for me? It's making me feel like Python doesn't welcome binary files.
|
2013/05/14
|
[
"https://Stackoverflow.com/questions/16536101",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1769958/"
] |
You just have to take care to open files on windows as binary (`open(filename, "rb")`) and not as text files. After that there is no problem using the data.
In particular, the end-of-line on Windows is `'\r\n'`. If you read a binary file as a text file and write it back out, single `'\n'` characters are transformed into `'\r\n'` sequences. If you open the files as binary (for both reading and writing) there is **no** such problem.
Python is capable of dealing with binary data, and you would have to take this kind of care in any language on Windows, not just in Python (but the developers of Python are friendly enough to warn you of possible OS problems). In systems like Linux, where the end-of-line is a single character, this distinction exists as well, but is less likely to cause a problem when reading/writing binary data as text (i.e. without the `b` option when opening files).
|
I suppose the "slightly alter" in the Python manual means the conversion of Unix end-of-line characters to Windows end-of-line characters. Because this is done only on Windows, Unix and Linux don't have this trouble.
|
60,882,099
|
I have a redhat server with docker installed
I want to create a docker image in which I want to run django with MySQL but the problem is django is unable to connect to MySQL server(remote server).
I'm getting following error:
```
Plugin caching_sha2_password could not be loaded: /usr/lib/x86_64-linux-gnu/mariadb19/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory
```
I googled it and found that the client libraries do not support 'caching\_sha2\_password'.
Can anyone suggest me which distro have libraries that support 'caching\_sha2\_password'?
Thanks in advance.
P.S.
I don't have access to MySQL server so any change in server side is not in my hand.
UPDATED:
Dockerfile:
```
FROM python:3.7.4-stretch
COPY code/ /code/
WORKDIR /code
RUN apt-get update
RUN apt-get -y upgrade
RUN pip install -r requirements.txt
EXPOSE 8000
RUN python manage.py migrate
CMD python manage.py runserver
```
Error:
```
Step 8/9 : RUN python manage.py migrate
---> Running in a907f2d6dce6
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 220, in ensure_connection
self.connect()
File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 197, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/django/db/backends/mysql/base.py", line 233, in get_new_connection
return Database.connect(**conn_params)
File "/usr/local/lib/python3.7/site-packages/MySQLdb/__init__.py", line 84, in Connect
return Connection(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/MySQLdb/connections.py", line 179, in __init__
super(Connection, self).__init__(*args, **kwargs2)
MySQLdb._exceptions.OperationalError: (2059, "Authentication plugin 'caching_sha2_password' cannot be loaded: /usr/lib/x86_64-linux-gnu/mariadb18/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 22, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 328, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 366, in execute
self.check()
File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 395, in check
include_deployment_checks=include_deployment_checks,
File "/usr/local/lib/python3.7/site-packages/django/core/management/commands/migrate.py", line 63, in _run_checks
issues = run_checks(tags=[Tags.database])
File "/usr/local/lib/python3.7/site-packages/django/core/checks/registry.py", line 72, in run_checks
new_errors = check(app_configs=app_configs)
File "/usr/local/lib/python3.7/site-packages/django/core/checks/database.py", line 10, in check_database_backends
issues.extend(conn.validation.check(**kwargs))
File "/usr/local/lib/python3.7/site-packages/django/db/backends/mysql/validation.py", line 9, in check
issues.extend(self._check_sql_mode(**kwargs))
File "/usr/local/lib/python3.7/site-packages/django/db/backends/mysql/validation.py", line 13, in _check_sql_mode
with self.connection.cursor() as cursor:
File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 260, in cursor
return self._cursor()
File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 236, in _cursor
self.ensure_connection()
File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 220, in ensure_connection
self.connect()
File "/usr/local/lib/python3.7/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 220, in ensure_connection
self.connect()
File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/django/db/backends/base/base.py", line 197, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.7/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/django/db/backends/mysql/base.py", line 233, in get_new_connection
return Database.connect(**conn_params)
File "/usr/local/lib/python3.7/site-packages/MySQLdb/__init__.py", line 84, in Connect
return Connection(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/MySQLdb/connections.py", line 179, in __init__
super(Connection, self).__init__(*args, **kwargs2)
django.db.utils.OperationalError: (2059, "Authentication plugin 'caching_sha2_password' cannot be loaded: /usr/lib/x86_64-linux-gnu/mariadb18/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory")
The command '/bin/sh -c python manage.py migrate' returned a non-zero code: 1
```
Requirement.txt:
```
django==3.0.4
django-environ==0.4.5
bcrypt==3.1.7
mysqlclient==1.4.6
psycopg2==2.8.4
PyMySQL==0.9.3
```
|
2020/03/27
|
[
"https://Stackoverflow.com/questions/60882099",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10386411/"
] |
The primary reason is simplicity. The existing rule is easy to understand (you clearly understand it) and easy to implement. The data-flow analysis required (to distinguish between acceptable and unacceptable uses in general) is complex and not normally necessary for a compiler, so it was thought a bad idea to require it of compilers.
Another consideration is Ada's compilation rules. If `Proc` passes `X` to another subprogram declared in another package, the data-flow analysis would require the body of that subprogram, but Ada requires that it be possible to compile `Proc` without the body of the other package.
Finally, the only time\* you'll ever need access-to-object types is if you need to declare a large object that won't fit on the stack, and in that case you won't need `access all` or `'access`, so you won't have to deal with this.
\*True as a 1st-order approximation (probably true at 2nd- and 3rd-order, too)
|
In Ada, when you try to think about accessibility, you have to do it in terms of access types instead of variables. There's no lifetime analysis of individual variables (contrary to what Rust does, I think). So, what's the worst that could happen? If your pointer type's accessibility level is shallower (longer-lived) than the target variable's, accessibility checks will fail because the pointer *might* outlive the target.
I'm not sure what goes on with anonymous access types, but that's a whole different mess from what I pick here and there. Some people recommend not using them at all for variables.
|
11,450,649
|
I'm having a really tough time with getting the results page of this url with python's urllib2:
```
http://www.google.com/search?tbs=sbi:AMhZZitAaz7goe6AsfVSmFw1sbwsmX0uIjeVnzKHjEXMck70H3j32Q-6FApxrhxdSyMo0OedyWkxk3-qYbyf0q1OqNspjLu8DlyNnWVbNjiKGo87QUjQHf2_1idZ1q_1vvm5gzOCMpChYiKsKYdMywOLjJzqmzYoJNOU2UsTs_1zZGWjU-LsjdFXt_1D5bDkuyRK0YbsaLVcx4eEk_1KMkcJpWlfFEfPMutxTLGf1zxD-9DFZDzNOODs0oj2j_1KG8FRCaMFnTzAfTdl7JfgaDf_1t5Vti8FnbeG9i7qt9wF6P-QK9mdvC15hZ5UR29eQdYbcD1e4woaOQCmg8Q1VLVPf4-kf8dAI7p3jM_1MkBBwaxdt_1TsM4FLwh0oHAYKOS5qBRI28Vs0aw5_1C5-WR4dC902Eqm5eAkLiQyAM9J2bioR66g3tMWe-j9Hyh1ID40R1NyXEJDHcGxp7xOn_16XxfW_1Cq5ArdSNzxFvABb1UcXCn5s4_1LpXZxhZbauwaO8cg3CKGLUvl_1wySDB7QIkMIF2ZInEPS4K-eyErVKqOdY9caYUD8X7oOf6sDKFjT7pNHwlkXiuYbKBRYjlvRHPlcPN1WHWCJWdSNyXdZhwDI3VRaKwmi4YNvkryeNMMbhGytfvlNaaelKcOzWbvzCtSNaP2lJziN1x3btcIAplPcoZxEpb0cDlQwId3A5FDhczxpVbdRnOB-Xeq_1AiUTt_1iI6bSgUAinWXQFYWveTOttdSNCgK-VTxV4OCtlrCrZerk27RBLAzT0ol9NOfYmYhiabzhUczWk4NuiVhKN-M4eo76cAsi74PY4V_1lWjvOpI35V_1YLJQrm0fxVcD34wxFYCIllT2gYW09fj3cuBDMNbsaJqPVQ04OOGlwmcmJeAnK96xd_1aMUd6FsVLOSDS7RfS5MNUSyd1jnXvRU_1MF_1Dj8oC8sm7PfVdjm3firiMcaKM28j9kGWbY0heIGLtO_1m6ad-iKfxYEzSux2b5w62LQlP57yS7vX8RFoyKzHA0RrFIEbPBQdNMA3Vpw0G_1LvEjCAPSCV1HH1pDp0l4EnNCvUIAppVXzNMyWT_1gKITj1NLqAn-Z1tH323JwZSc77OftDSreyHJ-BPxn3n7JMkNZFcQx6S7tfBxeqJ1NuDlpax11pw0_1Oi_1nF3vyEP0NbGKSVgNvBv_1tv8ahxvrHn9UnP78FleiOpzUBfdfRPZiT20VEq5-oXtV_1XwIzrd-5_15-cf2yoL7ohyPuv3WKGUGr4YCsYje7_1D8VslqMPsvbwMg9haj3TrBKH7go70ZfPjUv3h1K7lplnnCdV0hrYVQkSLUY1eEor3L--Vu5PlewS60ZH5YEn4qTnDxniV95h8q0Y3RWXJ6gIXitR5y6CofVg
```
I use the following headers, and this should be simple I would think:
```
headers = {'Host':'www.google.com','User-Agent':user_agent,'Accept-Language':'en-us,en;q=0.5','Accept-Encoding':'gzip, deflate','Accept-Charset':'ISO-8859-1,utf-8;q=0.7,*;q=0.7','Connection':'keep-alive','Referer':'http://www.google.co.in/imghp?hl=en&tab=ii','Cookie':'PREF=ID=1d7bc4ff2a5d8bc6:U=1d37ba5a518b9be1:FF=4:LD=en:TM=1300950025:LM=1302071720:S=rkk0IbbhxUIgpTyA; NID=51=uNq6mZ385WlV1UTfXsiWkSgnsa6PdjH4l9ph-vSQRszBHRcKW3VRJclZLd2XUEdZtxiCtl5hpbJiS3SpEV7670w_x738h75akcO6Viw47MUlpCZfy4KZ2vLT4tcleeiW; SID=DQAAAMEAAACoYm-3B2aiLKf0cRU8spJuiNjiXEQRyxsUZqKf8UXZXS55movrnTmfEcM6FYn-gALmyMPNRIwLDBojINzkv8doX69rUQ9-'}
```
When I do the following, I get a result that doesn't contain what any ordinary web browser returns:
```
request=urllib2.Request(url, None, headers)
response=urllib2.urlopen(request)
html=response.read()
```
Similarly, this bit of code returns a bunch of hex junk I can't read:
```
request=urllib2.Request(url,headers=headers)
response=urllib2.urlopen(request)
html=response.read()
```
Please help, as I am quite sure this is simple enough, and I must just be missing something. I was able to get this link in a similar way, but also uploading an image to images.google.com using the following code:
```
import httplib, mimetypes, android, sys, urllib2, urllib, simplejson
def post_multipart(host, selector, fields, files):
"""
Post fields and files to an http host as multipart/form-data.
fields is a sequence of (name, value) elements for regular form fields.
files is a sequence of (name, filename, value) elements for data to be uploaded as files
Return the server's response page.
"""
content_type, body = encode_multipart_formdata(fields, files)
h = httplib.HTTP(host)
h.putrequest('POST', selector)
h.putheader('content-type', content_type)
h.putheader('content-length', str(len(body)))
h.endheaders()
h.send(body)
errcode, errmsg, headers = h.getreply()
return h.file.read()
def encode_multipart_formdata(fields, files):
"""
fields is a sequence of (name, value) elements for regular form fields.
files is a sequence of (name, filename, value) elements for data to be uploaded as files
Return (content_type, body) ready for httplib.HTTP instance
"""
BOUNDARY = '----------ThIs_Is_tHe_bouNdaRY_$'
CRLF = '\r\n'
L = []
for (key, value) in fields:
L.append('--' + BOUNDARY)
L.append('Content-Disposition: form-data; name="%s"' % key)
L.append('')
L.append(value)
for (key, filename, value) in files:
L.append('--' + BOUNDARY)
L.append('Content-Disposition: form-data; name="%s"; filename="%s"' % (key, filename))
L.append('Content-Type: %s' % get_content_type(filename))
L.append('')
L.append(value)
L.append('--' + BOUNDARY + '--')
L.append('')
body = CRLF.join(L)
content_type = 'multipart/form-data; boundary=%s' % BOUNDARY
return content_type, body
def get_content_type(filename):
return mimetypes.guess_type(filename)[0] or 'application/octet-stream'
host = 'www.google.co.in'
selector = '/searchbyimage/upload'
fields = [('user-agent','Mozilla/5.0 (Windows NT 5.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2'),('connection','keep-alive'),('referer','')]
with open('jpeg.jpg', 'rb') as jpeg:
files = [('encoded_image', 'jpeg.jpg', jpeg.read())]
response = post_multipart(host, selector, fields, files) #added: response =
responseLen=(len(response)-1)
x=22
if response[(x-21):(x+1)]!='EF=\"http://www.google':
x+=1
x+=145
link=''
while response[(x+1):(x+7)]!='amp;us': #>here<
link=link+response[x]
x+=1
print(link)
```
The above code returned not the page a browser would return, but instead html with a "link that has moved", which is the 'url' I posted first in this message. If I can do the upload of my image and return a results page, why can't I get the resulting links html page? It's severely frustrating:(
Please help, I've been burning out my brain for over a month on this problem. Yes I am a newbee, but I thought this would be straightforward:(
Please help me to return the results page of this one little url:
```
http://www.google.com/search?tbs=sbi:AMhZZitAaz7goe6AsfVSmFw1sbwsmX0uIjeVnzKHjEXMck70H3j32Q-6FApxrhxdSyMo0OedyWkxk3-qYbyf0q1OqNspjLu8DlyNnWVbNjiKGo87QUjQHf2_1idZ1q_1vvm5gzOCMpChYiKsKYdMywOLjJzqmzYoJNOU2UsTs_1zZGWjU-LsjdFXt_1D5bDkuyRK0YbsaLVcx4eEk_1KMkcJpWlfFEfPMutxTLGf1zxD-9DFZDzNOODs0oj2j_1KG8FRCaMFnTzAfTdl7JfgaDf_1t5Vti8FnbeG9i7qt9wF6P-QK9mdvC15hZ5UR29eQdYbcD1e4woaOQCmg8Q1VLVPf4-kf8dAI7p3jM_1MkBBwaxdt_1TsM4FLwh0oHAYKOS5qBRI28Vs0aw5_1C5-WR4dC902Eqm5eAkLiQyAM9J2bioR66g3tMWe-j9Hyh1ID40R1NyXEJDHcGxp7xOn_16XxfW_1Cq5ArdSNzxFvABb1UcXCn5s4_1LpXZxhZbauwaO8cg3CKGLUvl_1wySDB7QIkMIF2ZInEPS4K-eyErVKqOdY9caYUD8X7oOf6sDKFjT7pNHwlkXiuYbKBRYjlvRHPlcPN1WHWCJWdSNyXdZhwDI3VRaKwmi4YNvkryeNMMbhGytfvlNaaelKcOzWbvzCtSNaP2lJziN1x3btcIAplPcoZxEpb0cDlQwId3A5FDhczxpVbdRnOB-Xeq_1AiUTt_1iI6bSgUAinWXQFYWveTOttdSNCgK-VTxV4OCtlrCrZerk27RBLAzT0ol9NOfYmYhiabzhUczWk4NuiVhKN-M4eo76cAsi74PY4V_1lWjvOpI35V_1YLJQrm0fxVcD34wxFYCIllT2gYW09fj3cuBDMNbsaJqPVQ04OOGlwmcmJeAnK96xd_1aMUd6FsVLOSDS7RfS5MNUSyd1jnXvRU_1MF_1Dj8oC8sm7PfVdjm3firiMcaKM28j9kGWbY0heIGLtO_1m6ad-iKfxYEzSux2b5w62LQlP57yS7vX8RFoyKzHA0RrFIEbPBQdNMA3Vpw0G_1LvEjCAPSCV1HH1pDp0l4EnNCvUIAppVXzNMyWT_1gKITj1NLqAn-Z1tH323JwZSc77OftDSreyHJ-BPxn3n7JMkNZFcQx6S7tfBxeqJ1NuDlpax11pw0_1Oi_1nF3vyEP0NbGKSVgNvBv_1tv8ahxvrHn9UnP78FleiOpzUBfdfRPZiT20VEq5-oXtV_1XwIzrd-5_15-cf2yoL7ohyPuv3WKGUGr4YCsYje7_1D8VslqMPsvbwMg9haj3TrBKH7go70ZfPjUv3h1K7lplnnCdV0hrYVQkSLUY1eEor3L--Vu5PlewS60ZH5YEn4qTnDxniV95h8q0Y3RWXJ6gIXitR5y6CofVg
```
Dave
|
2012/07/12
|
[
"https://Stackoverflow.com/questions/11450649",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1488252/"
] |
Your user-agent is not defined!
Try this one:
```
#!/usr/bin/python
import urllib2
url = "http://www.google.com/search?q=mysearch";
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
print opener.open(url).read()
raw_input()
```
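For reference, here is a Python 3 equivalent of the snippet above (`urllib2` was folded into `urllib.request` in Python 3; the URL is the same placeholder search query):

```python
import urllib.request

# Attach the User-Agent header directly to the Request object.
url = "http://www.google.com/search?q=mysearch"
req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})

# urllib.request normalizes header names to capitalized-first-word form:
print(req.get_header("User-agent"))  # Mozilla/5.0

# urllib.request.urlopen(req).read() would then fetch the page
# (requires network access, so it is left commented out here).
```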
If you'd like to find another user-agent, you can type `about:config` in Firefox and search for "user-agent":
>
> Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.8) Gecko/20050511
>
>
> Googlebot/2.1 (+http://www.google.com/bot.html)
>
>
> Opera/7.23 (Windows 98; U) [en]
>
>
>
|
Google has several anti-scraping techniques in place, since they don't want users to get to the results without the APIs or real browsers.
If you are serious about scraping this kind of pages, I suggest you look into: [Selenium](http://seleniumhq.org/) or [Spynner](http://code.google.com/p/spynner/).
Another advantage is that both execute javascript.
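One thing worth checking before reaching for a browser: the headers in the question include `'Accept-Encoding':'gzip, deflate'`, so the "hex junk" the asker sees is quite possibly just the gzip-compressed response body, not anti-scraping noise. A sketch of detecting and inflating such a body (Python 3 shown; `html` stands for the raw bytes that `response.read()` would return, simulated here with `gzip.compress`):

```python
import gzip
import io

# Simulate a gzip-encoded HTTP body, as a server may return when the
# client advertises Accept-Encoding: gzip but never decompresses it.
page = b"<html><body>results</body></html>"
html = gzip.compress(page)          # stand-in for response.read()

# Gzip streams start with the magic bytes 0x1f 0x8b; inflate if present.
if html[:2] == b"\x1f\x8b":
    html = gzip.GzipFile(fileobj=io.BytesIO(html)).read()

assert html == page  # readable HTML again
```

In Python 2 the same inflation works via `gzip.GzipFile(fileobj=StringIO.StringIO(html))`; alternatively, simply omit the `Accept-Encoding` header from the request.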
|
66,488,745
|
**PROBLEM ENCOUNTERED:**
>
> E/AndroidRuntime: FATAL EXCEPTION: main
> Process: org.tensorflow.lite.examples.detection, PID: 14719
> java.lang.AssertionError: Error occurred when initializing ObjectDetector: Mobile SSD models are expected to have exactly 4 outputs, found 8
>
>
>
**Problem Description**
* Android Application Source: TensorFlow Lite Object Detection Example from Google
* Error shown when starting the Example Application
**Model Description**
* Custom Model Used? **YES**
* Pre-trained Model Used: ssd\_mobilenet\_v2\_fpnlite\_640x640\_coco17\_tpu-8
* Inference type: FLOAT
* Number of classes: 4
**System Information**
* OS Platform and Distribution: ( Linux Ubuntu 20.14)
* TensorFlow Version: 2.4.1
* TensorFlow installed from: Pip
**Saved Model conversion commands used:**
**1. Saved\_Model.pb export:**
>
> python ./exporter\_main\_v2.py
>
> --input\_type image\_tensor
>
> --pipeline\_config\_path ./models/ssd\_mobilenet\_v2\_fpnlite\_640x640\_coco17\_tpu-8/pipeline.config
>
> --trained\_checkpoint\_dir ./models/ssd\_mobilenet\_v2\_fpnlite\_640x640\_coco17\_tpu-8
>
> --output\_directory exported\_models/tflite
>
>
>
**2. Convert saved model (.pb) to tflite**
>
> toco
>
> --saved\_model\_dir ./exported-models/tflite/saved\_model
>
> --emit-select-tf-ops true
>
> --allow\_custom\_ops
>
> --graph\_def\_file ./exported-models/tflite/saved\_model/saved\_model.pb
>
> --output\_file ./exported-models/tflite/tflite/detect.tflite
>
> --input\_shapes 1,300,300,3
>
> --input\_arrays normalized\_input\_image\_tensor
>
> --output\_arrays 'TFLite\_Detection\_PostProcess’,’TFLite\_Detection\_PostProcess:1','TFLite\_Detection\_PostProcess:2','TFLite\_Detection\_PostProcess:3'
>
> --inference\_type=FLOAT
>
> --allow\_custom\_ops
>
>
>
**Remarks**
I am trying to use a trained custom model on the Google TensorFlow lite provided example. Just that every time I open the application, it returns such an error, Mobile SSD models are expected to have exactly 4 outputs, found 8. The model is trained to identify 4 classes, all stated in the labelmap.txt and pipeline config.
**Does anybody have any clue about this error?**
|
2021/03/05
|
[
"https://Stackoverflow.com/questions/66488745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/15334979/"
] |
After further study, I believe the aforementioned error is raised because the model has 8 output tensors while the Android application written in Java only supports 4 (at least, the example provided by Google only supports 4 output tensors).
I am not very certain about the number of output tensors on different models. As far as I understand from experimenting with different models, models with a fixed\_shape\_resizer of 640 x 640 are likely to produce more than 4 output tensors (usually 8), which is not compatible with the Android application written in Java.
For any amateur users like me, please find the following prerequisites to use your custom model in the [Android application](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md)
Suggested Setup ( Assume you are using TensorFlow version >= 2.3):
* TensorFlow Model: **SSD model with fixed\_shape\_resizer of 320 x 320**
(In my case, SSD MobileNet v2 320x320 works perfectly fine)
(The tensors output has to be 4)
* **Colab** ( Perfect for model training and conversion)
(I tried to perform the training and conversion on both the Linux and Windows platforms on my local machine; the incompatibility of the various tools and packages gave me a headache. I ended up using Colab for both training and conversion. It is much more powerful and has great compatibility with the training tools and scripts.)
* The [**metadata writer library**](https://stackoverflow.com/questions/64097085/issue-in-creating-tflite-model-populated-with-metadata-for-object-detection/64493506#64493506) that was written by [@lu-wang-g](https://github.com/lu-wang-g)
(In my case, after converting the trained model to .tflite, if you directly migrate the .tflite model to the Android application, the application will report tons of problems regarding the config of the .tflite model. Assuming you trained and converted the model correctly, all you need is the metadata writer library above. It will automatically configure the metadata for you according to the .tflite model. Then you can directly migrate the model to the application.)
For detail, please visit my GitHub issue:
<https://github.com/tensorflow/tensorflow/issues/47595>
|
For those who will stumble on this problem/question later: limitations on the number of output tensors are part of Tensorflow Lite Object Detection API specification described [here](https://www.tensorflow.org/lite/inference_with_metadata/task_library/object_detector#model_compatibility_requirements)
I don't know how to make a model compatible with this requirements yet, but will append my answer if/when I will figure that out.
**UPDATE**
[Here](https://colab.research.google.com/github/tensorflow/models/blob/master/research/object_detection/colab_tutorials/convert_odt_model_to_TFLite.ipynb) is the official Google Colab with an example of a model conversion. The interesting part is that script call:
```
python models/research/object_detection/export_tflite_graph_tf2.py \
--trained_checkpoint_dir {'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/checkpoint'} \
--output_directory {'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/tflite'} \
--pipeline_config_path {'ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/pipeline.config'}
```
The script doesn't convert your model, but makes the model compatible with TFLite in terms of used operations and inputs/outputs format.
A comment inside the script claims that only SSD meta-architectures are supported (also stated [here](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tf2.md)). Also in the same directory of the repo where this script comes from there are other scripts that seem to be doing similar things, though no clear description given.
|
27,239,348
|
I am using photologue to create a photo gallery site with django. I installed django-tagging into my virtualenv, not knowing it was no longer supported by photologue. Now, after having performed migrations, whenever I try to add a photo or view the photo, I get FieldError at /admin/photologue/photo/upload\_zip/
Cannot resolve keyword 'items' into field. Choices are: id, name.
I uninstalled and reinstalled django, photologue, the SQLite file, and removed django-tagging, but the problem persists. I also tried running a different project that uses photologue and shares a virtualenv, and I am prompted to perform the same (assumedly destructive) migration.
I can't figure out what could have possibly changed on my system if the problem spans multiple projects and all the dependencies have been freshly installed.
Exception Location: /home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/sql/query.py in raise\_field\_error, line 1389
Traceback:
```
Environment:
Request Method: POST
Request URL: http://localhost:8000/admin/photologue/photo/add/
Django Version: 1.7.1
Python Version: 2.7.6
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites',
'sortedm2m',
'photologue',
'photologue_custom',
'pornsite')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware')
Traceback:
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
111. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/contrib/admin/options.py" in wrapper
584. return self.admin_site.admin_view(view)(*args, **kwargs)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/utils/decorators.py" in _wrapped_view
105. response = view_func(request, *args, **kwargs)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/views/decorators/cache.py" in _wrapped_view_func
52. response = view_func(request, *args, **kwargs)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/contrib/admin/sites.py" in inner
204. return view(request, *args, **kwargs)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/contrib/admin/options.py" in add_view
1454. return self.changeform_view(request, None, form_url, extra_context)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/utils/decorators.py" in _wrapper
29. return bound_func(*args, **kwargs)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/utils/decorators.py" in _wrapped_view
105. response = view_func(request, *args, **kwargs)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/utils/decorators.py" in bound_func
25. return func.__get__(self, type(self))(*args2, **kwargs2)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/transaction.py" in inner
394. return func(*args, **kwargs)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/contrib/admin/options.py" in changeform_view
1405. self.save_model(request, new_object, form, not add)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/contrib/admin/options.py" in save_model
1046. obj.save()
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/photologue/models.py" in save
540. super(Photo, self).save(*args, **kwargs)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/photologue/models.py" in save
491. super(ImageModel, self).save(*args, **kwargs)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/base.py" in save
591. force_update=force_update, update_fields=update_fields)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/base.py" in save_base
628. update_fields=update_fields, raw=raw, using=using)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/dispatch/dispatcher.py" in send
198. response = receiver(signal=self, sender=sender, **named)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/tagging/fields.py" in _save
81. Tag.objects.update_tags(kwargs['instance'], tags)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/tagging/models.py" in update_tags
34. items__object_id=obj.pk))
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/manager.py" in manager_method
92. return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/query.py" in filter
691. return self._filter_or_exclude(False, *args, **kwargs)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/query.py" in _filter_or_exclude
709. clone.query.add_q(Q(*args, **kwargs))
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in add_q
1287. clause, require_inner = self._add_q(where_part, self.used_aliases)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in _add_q
1314. current_negated=current_negated, connector=connector)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in build_filter
1138. lookups, parts, reffed_aggregate = self.solve_lookup_type(arg)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in solve_lookup_type
1076. _, field, _, lookup_parts = self.names_to_path(lookup_splitted, self.get_meta())
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in names_to_path
1383. self.raise_field_error(opts, name)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in raise_field_error
1389. "Choices are: %s" % (name, ", ".join(available)))
Exception Type: FieldError at /admin/photologue/photo/add/
Exception Value: Cannot resolve keyword 'items' into field. Choices are: id, name
```
|
2014/12/01
|
[
"https://Stackoverflow.com/questions/27239348",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4043633/"
] |
The problem seems to arise from the fact, that django-tagging was somehow still present on the virtualenv.
In your traceback after photologue saves a model, django-tagging reacts to the sent signal and tries to update any related tags:
```
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/django/dispatch/dispatcher.py" in send
198. response = receiver(signal=self, sender=sender, **named)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/tagging/fields.py" in _save
81. Tag.objects.update_tags(kwargs['instance'], tags)
File "/home/cameron/Envs/photologue/local/lib/python2.7/site-packages/tagging/models.py" in update_tags
34. items__object_id=obj.pk))
```
There it tries to use the (apparently not existing anymore) field `items`, and that's where the error occurs.
I guess the way you uninstalled django-tagging from the venv didn't really work. Did you uninstall it with `pip uninstall django-tagging`?
For reference, here again my comment/steps to recreate the venv
... If your venv is somehow corrupted, the easiest fix could be to recreate it from scratch:
1. On your venv do:
`pip freeze > orig_requirements.txt`
Check the `orig_requirements.txt` and delete everything you don't need.
2. Make a new venv with no site-packages and reinstall the requirements:
`mkvirtualenv --no-site-packages photoenv`
`pip install -r orig_requirements.txt`
3. Now double check you're on this venv when running your django project
`workon photoenv`
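As a quick sanity check (my addition, not part of the original answer), you can ask the venv whether the `tagging` module from django-tagging is still importable at all:

```python
# If django-tagging is truly gone from this virtualenv, find_spec() returns
# None for its top-level "tagging" module (the one shown in the traceback).
import importlib.util

tagging_spec = importlib.util.find_spec("tagging")
print("django-tagging still present:", tagging_spec is not None)
```

If this prints `True` inside the fresh venv, the package is still on `sys.path` and its signal handler will keep firing on every `save()`.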
|
Well the error is simple -- in that you are requesting a field in the database that does not exist. Since you haven't posted code it is hard to be more specific than that. Was one of your templates built, referencing a field named 'items' that is no longer there?
Please edit your question to include a FULL traceback as well as some code where you think the problem could be.
The full traceback will give you a better idea of where the problem is.
|
63,354,202
|
I am a beginner at Python programming. I am creating a simple employee salary calculation using Python.
**tax = salary \* 10 / 100**: this line is marked as wrong, and the error displayed is "Unindent does not match outer indentation level".
This is the full code:
```
salary = 60000
if(salary > 50000):
tax = float(salary * 10 / 100)
elif(salary > 35000):
tax = float(salary * 5 / 100)
else:
tax = 0
netsal = salary - tax
print(tax)
print(netsal)
```
|
2020/08/11
|
[
"https://Stackoverflow.com/questions/63354202",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12932093/"
] |
The error message is self-explanatory.
You can't indent your `elif` and `else`; they should be at the same level as the `if` condition:
```
salary = 60000
if(salary > 50000):
tax = salary * 10 / 100
elif(salary > 35000):
tax = salary * 5 / 100
else :
tax = 0
netsal = salary - tax
print(tax)
print(netsal)
```
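As an aside (my own sketch, not part of the original answer), once the indentation is consistent the same bracket logic can be wrapped in a function, which makes it easy to test:

```python
def net_salary(salary):
    """Return salary minus tax, using the question's two tax brackets."""
    if salary > 50000:
        tax = salary * 10 / 100
    elif salary > 35000:
        tax = salary * 5 / 100
    else:
        tax = 0
    return salary - tax

print(net_salary(60000))  # 54000.0
```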
|
You just need to fix your indentation; I would suggest using an IDE:
```py
salary = 60000
if(salary > 50000):
tax = salary * 10 / 100
elif(salary > 35000):
tax = salary * 5 / 100
else:
tax = 0
print(tax)
>>> 6000.0
```
|
59,467,023
|
```
C:\Users\gabri\OneDrive\Desktop>pip3 install pyaudio
Collecting pyaudio
Using cached https://files.pythonhosted.org/packages/ab/42/b4f04721c5c5bfc196ce156b3c768998ef8c0ae3654ed29ea5020c749a6b/PyAudio-0.2.11.tar.gz
Installing collected packages: pyaudio
Running setup.py install for pyaudio ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix=
cwd: C:\Users\gabri\AppData\Local\Temp\pip-install-r9cgblze\pyaudio\
Complete output (9 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
copying src\pyaudio.py -> build\lib.win-amd64-3.7
running build_ext
building '_portaudio' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix= Check the logs for full command output.
```
My python version: 3.7.4 and pip is upgraded.
I have already tried "python3.7 -m pip install PyAudio", but it still did not work.
I have tried installing it from a .whl too, but when I run the command, the following message appears:
>
> ERROR: PyAudio-0.2.11-cp34-cp34m-win32.whl is not a supported wheel on this platform."
>
>
>
I tried with 32 and 64 bit.
|
2019/12/24
|
[
"https://Stackoverflow.com/questions/59467023",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12590302/"
] |
Currently, there are wheels compatible with the official distributions of **Python 2.7, 3.4, 3.5, and 3.6.**
Apparently, there is no official wheel of that library for Python 3.7, so you can either downgrade the Python version or install an unofficial wheel.
Download the wheel on this site: <https://www.lfd.uci.edu/~gohlke/pythonlibs/#pyaudio>.
Choose:
* PyAudio‑0.2.11‑cp37‑cp37m‑win32.whl if you use 32 bit
* PyAudio‑0.2.11‑cp37‑cp37m‑win\_amd64.whl for 64 bit
Then go to your download folder:
```
cd <your_download_path>
pip install PyAudio‑0.2.11‑cp37‑cp37m‑win_amd64.whl
```
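To pick the right wheel, you need the CPython tag and the bitness of the interpreter you actually run. This small helper is my own addition (the wheel name in the comment is just an example):

```python
import struct
import sys

# CPython ABI tag of this interpreter, e.g. "cp37" for Python 3.7
cp_tag = "cp%d%d" % (sys.version_info.major, sys.version_info.minor)
# Pointer size reveals whether the interpreter itself is 32- or 64-bit;
# a 32-bit Python on 64-bit Windows still needs the win32 wheel.
bits = struct.calcsize("P") * 8

print(cp_tag, bits)  # e.g. cp37 64 -> PyAudio-0.2.11-cp37-cp37m-win_amd64.whl
```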
|
You need to install Microsoft Visual C++ 14.0
This should work <https://visualstudio.microsoft.com/visual-cpp-build-tools/>
|
59,467,023
|
```
C:\Users\gabri\OneDrive\Desktop>pip3 install pyaudio
Collecting pyaudio
Using cached https://files.pythonhosted.org/packages/ab/42/b4f04721c5c5bfc196ce156b3c768998ef8c0ae3654ed29ea5020c749a6b/PyAudio-0.2.11.tar.gz
Installing collected packages: pyaudio
Running setup.py install for pyaudio ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix=
cwd: C:\Users\gabri\AppData\Local\Temp\pip-install-r9cgblze\pyaudio\
Complete output (9 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
copying src\pyaudio.py -> build\lib.win-amd64-3.7
running build_ext
building '_portaudio' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix= Check the logs for full command output.
```
My python version: 3.7.4 and pip is upgraded.
I have already tried "python3.7 -m pip install PyAudio", but it still did not work.
I have tried installing it from a .whl too, but when I run the command, the following message appears:
>
> ERROR: PyAudio-0.2.11-cp34-cp34m-win32.whl is not a supported wheel on this platform."
>
>
>
I tried with 32 and 64 bit.
|
2019/12/24
|
[
"https://Stackoverflow.com/questions/59467023",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12590302/"
] |
You need to install Microsoft Visual C++ 14.0
This should work <https://visualstudio.microsoft.com/visual-cpp-build-tools/>
|
```
pip install pipwin
pipwin install pyaudio
```
This worked straight away for me without installing any Visual Studio stuff; `python --version` is 3.9.5.
I had just done a fresh install of Windows 10 a few days ago on this machine.
|
59,467,023
|
```
C:\Users\gabri\OneDrive\Desktop>pip3 install pyaudio
Collecting pyaudio
Using cached https://files.pythonhosted.org/packages/ab/42/b4f04721c5c5bfc196ce156b3c768998ef8c0ae3654ed29ea5020c749a6b/PyAudio-0.2.11.tar.gz
Installing collected packages: pyaudio
Running setup.py install for pyaudio ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix=
cwd: C:\Users\gabri\AppData\Local\Temp\pip-install-r9cgblze\pyaudio\
Complete output (9 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
copying src\pyaudio.py -> build\lib.win-amd64-3.7
running build_ext
building '_portaudio' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix= Check the logs for full command output.
```
My python version: 3.7.4 and pip is upgraded.
I have already tried "python3.7 -m pip install PyAudio", but it still did not work.
I have tried installing it from a .whl too, but when I run the command, the following message appears:
>
> ERROR: PyAudio-0.2.11-cp34-cp34m-win32.whl is not a supported wheel on this platform."
>
>
>
I tried with 32 and 64 bit.
|
2019/12/24
|
[
"https://Stackoverflow.com/questions/59467023",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12590302/"
] |
Currently, there are wheels compatible with the official distributions of **Python 2.7, 3.4, 3.5, and 3.6.**
Apparently, there is no official wheel of that library for Python 3.7, so you can either downgrade the Python version or install an unofficial wheel.
Download the wheel on this site: <https://www.lfd.uci.edu/~gohlke/pythonlibs/#pyaudio>.
Choose:
* PyAudio‑0.2.11‑cp37‑cp37m‑win32.whl if you use 32 bit
* PyAudio‑0.2.11‑cp37‑cp37m‑win\_amd64.whl for 64 bit
Then go to your download folder:
```
cd <your_download_path>
pip install PyAudio‑0.2.11‑cp37‑cp37m‑win_amd64.whl
```
|
```
pip install pipwin
pipwin install pyaudio
```
`pipwin` will automatically download the required wheel, as it installs unofficial Python package binaries for Windows.
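After `pipwin` finishes, you can confirm the install from Python without touching PyAudio's C extension. This hedged helper is my own addition and only needs Python 3.8+ for `importlib.metadata`:

```python
from importlib import metadata

def dist_version(name):
    """Return the installed distribution's version string, or None if absent."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return None

print("PyAudio version:", dist_version("PyAudio"))
```

If it prints `None`, the wheel went into a different interpreter than the one you are running.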
|
59,467,023
|
```
C:\Users\gabri\OneDrive\Desktop>pip3 install pyaudio
Collecting pyaudio
Using cached https://files.pythonhosted.org/packages/ab/42/b4f04721c5c5bfc196ce156b3c768998ef8c0ae3654ed29ea5020c749a6b/PyAudio-0.2.11.tar.gz
Installing collected packages: pyaudio
Running setup.py install for pyaudio ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix=
cwd: C:\Users\gabri\AppData\Local\Temp\pip-install-r9cgblze\pyaudio\
Complete output (9 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
copying src\pyaudio.py -> build\lib.win-amd64-3.7
running build_ext
building '_portaudio' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix= Check the logs for full command output.
```
My python version: 3.7.4 and pip is upgraded.
I have already tried "python3.7 -m pip install PyAudio", but it still did not work.
I have tried installing it from a .whl too, but when I run the command, the following message appears:
>
> ERROR: PyAudio-0.2.11-cp34-cp34m-win32.whl is not a supported wheel on this platform."
>
>
>
I tried with 32 and 64 bit.
|
2019/12/24
|
[
"https://Stackoverflow.com/questions/59467023",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12590302/"
] |
Currently, there are wheels compatible with the official distributions of **Python 2.7, 3.4, 3.5, and 3.6.**
Apparently, there is no official wheel of that library for Python 3.7, so you can either downgrade the Python version or install an unofficial wheel.
Download the wheel on this site: <https://www.lfd.uci.edu/~gohlke/pythonlibs/#pyaudio>.
Choose:
* PyAudio‑0.2.11‑cp37‑cp37m‑win32.whl if you use 32 bit
* PyAudio‑0.2.11‑cp37‑cp37m‑win\_amd64.whl for 64 bit
Then go to your download folder:
```
cd <your_download_path>
pip install PyAudio‑0.2.11‑cp37‑cp37m‑win_amd64.whl
```
|
Whenever there is a build error saying that the C++ 14.0 build tools are required, just follow these simple steps:
1. Go to the [Unofficial Windows Binaries for Python Extension Packages](https://www.lfd.uci.edu/%7Egohlke/pythonlibs/#pyaudio) page and scroll down to PyAudio.
2. Download the wheel file that matches your Python version and whether your Python is 32-bit or 64-bit.
****Note:**** download the wheel file to the folder where your Python is installed, or move it there.
**Example:** my Python version is 3.8 and my Python is 32-bit, so I downloaded this:
PyAudio‑0.2.11‑cp38‑cp38‑win32.whl
[screenshot](https://i.stack.imgur.com/BAggm.png)
|
59,467,023
|
```
C:\Users\gabri\OneDrive\Desktop>pip3 install pyaudio
Collecting pyaudio
Using cached https://files.pythonhosted.org/packages/ab/42/b4f04721c5c5bfc196ce156b3c768998ef8c0ae3654ed29ea5020c749a6b/PyAudio-0.2.11.tar.gz
Installing collected packages: pyaudio
Running setup.py install for pyaudio ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix=
cwd: C:\Users\gabri\AppData\Local\Temp\pip-install-r9cgblze\pyaudio\
Complete output (9 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
copying src\pyaudio.py -> build\lib.win-amd64-3.7
running build_ext
building '_portaudio' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix= Check the logs for full command output.
```
My python version: 3.7.4 and pip is upgraded.
I have already tried "python3.7 -m pip install PyAudio", but it still did not work.
I have tried installing it from a .whl too, but when I run the command, the following message appears:
>
> ERROR: PyAudio-0.2.11-cp34-cp34m-win32.whl is not a supported wheel on this platform."
>
>
>
I tried with 32 and 64 bit.
|
2019/12/24
|
[
"https://Stackoverflow.com/questions/59467023",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12590302/"
] |
Currently, there are wheels compatible with the official distributions of **Python 2.7, 3.4, 3.5, and 3.6.**
Apparently, there is no official wheel of that library for Python 3.7, so you can either downgrade the Python version or install an unofficial wheel.
Download the wheel on this site: <https://www.lfd.uci.edu/~gohlke/pythonlibs/#pyaudio>.
Choose:
* PyAudio‑0.2.11‑cp37‑cp37m‑win32.whl if you use 32 bit
* PyAudio‑0.2.11‑cp37‑cp37m‑win\_amd64.whl for 64 bit
Then go to your download folder:
```
cd <your_download_path>
pip install PyAudio‑0.2.11‑cp37‑cp37m‑win_amd64.whl
```
|
```
pip install pipwin
pipwin install pyaudio
```
This worked straight away for me without installing any Visual Studio stuff; `python --version` is 3.9.5.
I had just done a fresh install of Windows 10 a few days ago on this machine.
|
59,467,023
|
```
C:\Users\gabri\OneDrive\Desktop>pip3 install pyaudio
Collecting pyaudio
Using cached https://files.pythonhosted.org/packages/ab/42/b4f04721c5c5bfc196ce156b3c768998ef8c0ae3654ed29ea5020c749a6b/PyAudio-0.2.11.tar.gz
Installing collected packages: pyaudio
Running setup.py install for pyaudio ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix=
cwd: C:\Users\gabri\AppData\Local\Temp\pip-install-r9cgblze\pyaudio\
Complete output (9 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
copying src\pyaudio.py -> build\lib.win-amd64-3.7
running build_ext
building '_portaudio' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix= Check the logs for full command output.
```
My python version: 3.7.4 and pip is upgraded.
I have already tried "python3.7 -m pip install PyAudio", but it still did not work.
I have tried installing it from a .whl too, but when I run the command, the following message appears:
>
> ERROR: PyAudio-0.2.11-cp34-cp34m-win32.whl is not a supported wheel on this platform."
>
>
>
I tried with 32 and 64 bit.
|
2019/12/24
|
[
"https://Stackoverflow.com/questions/59467023",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12590302/"
] |
```
pip install pipwin
pipwin install pyaudio
```
`pipwin` will automatically download the required wheel, as it installs unofficial Python package binaries for Windows.
|
```
pip install pipwin
pipwin install pyaudio
```
This worked straight away for me without installing any Visual Studio stuff; `python --version` is 3.9.5.
I had just done a fresh install of Windows 10 a few days ago on this machine.
|
59,467,023
|
```
C:\Users\gabri\OneDrive\Desktop>pip3 install pyaudio
Collecting pyaudio
Using cached https://files.pythonhosted.org/packages/ab/42/b4f04721c5c5bfc196ce156b3c768998ef8c0ae3654ed29ea5020c749a6b/PyAudio-0.2.11.tar.gz
Installing collected packages: pyaudio
Running setup.py install for pyaudio ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix=
cwd: C:\Users\gabri\AppData\Local\Temp\pip-install-r9cgblze\pyaudio\
Complete output (9 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
copying src\pyaudio.py -> build\lib.win-amd64-3.7
running build_ext
building '_portaudio' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Microsoft Visual C++ Build Tools": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\gabri\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"'; __file__='"'"'C:\\Users\\gabri\\AppData\\Local\\Temp\\pip-install-r9cgblze\\pyaudio\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\gabri\AppData\Local\Temp\pip-record-032v86d_\install-record.txt' --single-version-externally-managed --compile --user --prefix= Check the logs for full command output.
```
My python version: 3.7.4 and pip is upgraded.
I have already tried "python3.7 -m pip install PyAudio", but it still did not work.
I have tried installing it from a .whl too, but when I run the command, the following message appears:
> ERROR: PyAudio-0.2.11-cp34-cp34m-win32.whl is not a supported wheel on this platform.
I tried with 32 and 64 bit.
|
2019/12/24
|
[
"https://Stackoverflow.com/questions/59467023",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12590302/"
] |
Whenever you hit a build-type error such as "Microsoft Visual C++ 14.0 build tools required", follow these simple steps:
1. Go to the page below and scroll down to PyAudio - [enter link description here](https://www.lfd.uci.edu/%7Egohlke/pythonlibs/#pyaudio)
2. Download the wheel file that matches your Python version and whether your Python install is 32-bit or 64-bit.
**Note:** save the wheel file in the directory where your Python is installed, or run `pip install` from the folder where you saved it.
**Example:** my Python version is 3.8 and my Python install is 32-bit, so I downloaded this --
PyAudio‑0.2.11‑cp38‑cp38‑win32.whl
[enter image description here](https://i.stack.imgur.com/BAggm.png)
|
```
pip install pipwin
pipwin install pyaudio
```
This worked straight away for me without installing any Visual Studio tooling; `python --version` reports 3.9.5.
I'd just done a fresh install of Windows 10 a few days earlier on my machine.
|
18,041,050
|
I've got a py2.7 project which I want to test under py3.2. For this purpose, I want to use virtualenv. I wanted to create an environment that would run 3.2 version internally:
```
virtualenv 3.2 -p /usr/bin/python3.2
```
but it failed. My default python version is `2.7` (ubuntu default settings). Here is `virtualenv --version 1.10`. The error output is:
```
Running virtualenv with interpreter /usr/bin/python3.2
New python executable in 3.2/bin/python3.2
Also creating executable in 3.2/bin/python
Installing Setuptools...................................................................................................................................................................................................................................done.
Installing Pip..............
Complete output from command /home/tomasz/Develop...on/3.2/bin/python3.2 setup.py install --single-version-externally-managed --record record:
Traceback (most recent call last):
File "setup.py", line 5, in <module>
from setuptools import setup, find_packages
File "/usr/lib/python2.7/dist-packages/setuptools/__init__.py", line 2, in <module>
from setuptools.extension import Extension, Library
File "/usr/lib/python2.7/dist-packages/setuptools/extension.py", line 2, in <module>
from setuptools.dist import _get_unpatched
File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 103
except ValueError, e:
^
SyntaxError: invalid syntax
----------------------------------------
...Installing Pip...done.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 2308, in <module>
main()
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 821, in main
symlink=options.symlink)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 963, in create_environment
install_sdist('Pip', 'pip-*.tar.gz', py_executable, search_dirs)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 932, in install_sdist
filter_stdout=filter_install_output)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 899, in call_subprocess
% (cmd_desc, proc.returncode))
OSError: Command /home/tomasz/Develop...on/3.2/bin/python3.2 setup.py install --single-version-externally-managed --record record failed with error code 1
```
I don't know what the hell this syntax error is - where does it come from? I know the try...except statement syntax changed between 2.x and 3.x, but should virtualenv throw syntax errors?
I'd be grateful if someone pointed out whether there's something I'm doing wrong or whether there is an installation problem on my machine.
|
2013/08/04
|
[
"https://Stackoverflow.com/questions/18041050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/769384/"
] |
To create a Python 3.2 virtual environment you should use the virtualenv you installed for Python 3.2. In your case that would be:
```
/usr/bin/virtualenv-3.2
```
|
You'll have to use a Python 3 version of `virtualenv`; the version you are using is installing Python 2 tools into a Python 3 virtual environment and these are not compatible.
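As a sketch of what that looks like in practice (assuming a Python 3 interpreter is on `PATH`; on current systems the stdlib `venv` module plays the same role as a Python-3 `virtualenv`):

```shell
# Create an environment with the Python 3 interpreter's own tooling,
# so no Python 2 virtualenv script is involved at all.
python3 -m venv env32
. env32/bin/activate
python --version   # reports the Python 3 interpreter inside the env
```

The key point is the same either way: the environment must be created by (a tool belonging to) the interpreter it is meant to contain.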
|
18,041,050
|
I've got a py2.7 project which I want to test under py3.2. For this purpose, I want to use virtualenv. I wanted to create an environment that would run 3.2 version internally:
```
virtualenv 3.2 -p /usr/bin/python3.2
```
but it failed. My default python version is `2.7` (ubuntu default settings). Here is `virtualenv --version 1.10`. The error output is:
```
Running virtualenv with interpreter /usr/bin/python3.2
New python executable in 3.2/bin/python3.2
Also creating executable in 3.2/bin/python
Installing Setuptools...................................................................................................................................................................................................................................done.
Installing Pip..............
Complete output from command /home/tomasz/Develop...on/3.2/bin/python3.2 setup.py install --single-version-externally-managed --record record:
Traceback (most recent call last):
File "setup.py", line 5, in <module>
from setuptools import setup, find_packages
File "/usr/lib/python2.7/dist-packages/setuptools/__init__.py", line 2, in <module>
from setuptools.extension import Extension, Library
File "/usr/lib/python2.7/dist-packages/setuptools/extension.py", line 2, in <module>
from setuptools.dist import _get_unpatched
File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 103
except ValueError, e:
^
SyntaxError: invalid syntax
----------------------------------------
...Installing Pip...done.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 2308, in <module>
main()
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 821, in main
symlink=options.symlink)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 963, in create_environment
install_sdist('Pip', 'pip-*.tar.gz', py_executable, search_dirs)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 932, in install_sdist
filter_stdout=filter_install_output)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 899, in call_subprocess
% (cmd_desc, proc.returncode))
OSError: Command /home/tomasz/Develop...on/3.2/bin/python3.2 setup.py install --single-version-externally-managed --record record failed with error code 1
```
I don't know what the hell this syntax error is - where does it come from? I know the try...except statement syntax changed between 2.x and 3.x, but should virtualenv throw syntax errors?
I'd be grateful if someone pointed out whether there's something I'm doing wrong or whether there is an installation problem on my machine.
|
2013/08/04
|
[
"https://Stackoverflow.com/questions/18041050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/769384/"
] |
You'll have to use a Python 3 version of `virtualenv`; the version you are using is installing Python 2 tools into a Python 3 virtual environment and these are not compatible.
|
```
virtualenv --python=/usr/bin/python3.2 --no-site-packages ENV
```
|
18,041,050
|
I've got a py2.7 project which I want to test under py3.2. For this purpose, I want to use virtualenv. I wanted to create an environment that would run 3.2 version internally:
```
virtualenv 3.2 -p /usr/bin/python3.2
```
but it failed. My default python version is `2.7` (ubuntu default settings). Here is `virtualenv --version 1.10`. The error output is:
```
Running virtualenv with interpreter /usr/bin/python3.2
New python executable in 3.2/bin/python3.2
Also creating executable in 3.2/bin/python
Installing Setuptools...................................................................................................................................................................................................................................done.
Installing Pip..............
Complete output from command /home/tomasz/Develop...on/3.2/bin/python3.2 setup.py install --single-version-externally-managed --record record:
Traceback (most recent call last):
File "setup.py", line 5, in <module>
from setuptools import setup, find_packages
File "/usr/lib/python2.7/dist-packages/setuptools/__init__.py", line 2, in <module>
from setuptools.extension import Extension, Library
File "/usr/lib/python2.7/dist-packages/setuptools/extension.py", line 2, in <module>
from setuptools.dist import _get_unpatched
File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 103
except ValueError, e:
^
SyntaxError: invalid syntax
----------------------------------------
...Installing Pip...done.
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 2308, in <module>
main()
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 821, in main
symlink=options.symlink)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 963, in create_environment
install_sdist('Pip', 'pip-*.tar.gz', py_executable, search_dirs)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 932, in install_sdist
filter_stdout=filter_install_output)
File "/usr/local/lib/python2.7/dist-packages/virtualenv.py", line 899, in call_subprocess
% (cmd_desc, proc.returncode))
OSError: Command /home/tomasz/Develop...on/3.2/bin/python3.2 setup.py install --single-version-externally-managed --record record failed with error code 1
```
I don't know what the hell this syntax error is - where does it come from? I know the try...except statement syntax changed between 2.x and 3.x, but should virtualenv throw syntax errors?
I'd be grateful if someone pointed out whether there's something I'm doing wrong or whether there is an installation problem on my machine.
|
2013/08/04
|
[
"https://Stackoverflow.com/questions/18041050",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/769384/"
] |
To create a Python 3.2 virtual environment you should use the virtualenv you installed for Python 3.2. In your case that would be:
```
/usr/bin/virtualenv-3.2
```
|
```
virtualenv --python=/usr/bin/python3.2 --no-site-packages ENV
```
|
68,762,785
|
I have the following dataframes.
```
Name | Data
A foo
A bar
B foo
B bar
C foo
C bar
C cat
Name | foo | bar | cat
A 1 2 3
B 4 5 6
C 7 8 9
```
I need to lookup the values present in the 2nd dataframe and create a dataframe like this
```
Name | Data | Value
A foo 1
A bar 2
B foo 4
B bar 5
C foo 7
C bar 8
C cat 9
```
I tried looping over df1 and filtering df2 like `df2[df2['Name']=='A']['foo']`; this works, but it takes forever to complete. I am new to python and any help to reduce the runtime would be appreciated.
|
2021/08/12
|
[
"https://Stackoverflow.com/questions/68762785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/16652846/"
] |
You can use `.melt` + `.merge`:
```py
x = df1.merge(df2.melt("Name", var_name="Data"), on=["Name", "Data"])
print(x)
```
Prints:
```none
Name Data value
0 A foo 1
1 A bar 2
2 B foo 4
3 B bar 5
4 C foo 7
5 C bar 8
6 C cat 9
```
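To make the intermediate step visible, here is what `melt` alone produces on the same toy data (a standalone sketch; the DataFrame is rebuilt from the question's second table):

```python
import pandas as pd

# Wide table from the question
df2 = pd.DataFrame({
    "Name": ["A", "B", "C"],
    "foo": [1, 4, 7],
    "bar": [2, 5, 8],
    "cat": [3, 6, 9],
})

# melt turns the wide table into long form: one row per (Name, Data) pair,
# which is exactly the shape df1 has, so the merge can match on both keys.
long = df2.melt("Name", var_name="Data")
print(long.shape)          # (9, 3)
print(list(long.columns))  # ['Name', 'Data', 'value']
```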
|
You can melt your second dataframe and then merge it with your first:
```
import pandas as pd
df1 = pd.DataFrame({
'Name': ['A', 'A', 'B', 'B', 'C', 'C', 'C'],
'Data': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar', 'cat'],
})
df2 = pd.DataFrame({
'Name': ['A', 'B', 'C'],
'foo': [1, 4, 7],
'bar': [2, 5, 8],
'cat': [3, 6, 9],
})
df1.merge(df2.melt('Name', var_name='Data'), on=['Name', 'Data'])
```
|
56,878,362
|
I'm trying to create a role wrapper which will allow me to restrict certain pages and content for different users. I already have methods implemented for checking this, but the wrapper/decorator for implementing this fails and sometimes doesn't, and I have no idea of what the cause could be.
I've searched around looking for a conclusive reason as to what is causing this problem, but unfortunately, Flask's tracebacks do not give a conclusive reason or solution, like most other searches I come up with do.
I'm using Flask-Login, Flask-Migrate, and Flask-SQLAlchemy to manage my web application, I've looked into different methods of applying RBAC, but they all required seemingly complicated changes to my database models, and I felt that my method would have a higher chance of working in the long run.
Here is my simplified code (I can provide the full application if requested). Below that is the full traceback from the debugger.
Thank you.
`routes.py`
```py
def require_role(roles=["User"]):
def wrap(func):
def run(*args, **kwargs):
if current_user.is_authenticated:
if current_user.has_roles(roles):
return func(*args, **kwargs)
return abort(401)
return run
return wrap
@app.route('/hidden<id>/history')
@login_required
@require_role(roles=['Admin'])
def hidden_history(id):
if not validate_id(id):
return '<span style="color: red;">error:</span> bad id'
return render_template('hidden_history.html')
@app.route('/hidden<id>/help')
@login_required
def hidden_help(id):
if not validate_id(id):
return '<span style="color: red;">error:</span> bad id'
return render_template('hidden_help.html')
@app.route('/hidden<id>/')
@login_required
@require_role(roles=['Hidden'])
def hidden(id):
if not validate_id(id):
return '<span style="color: red;">error:</span> bad id'
# ...
return render_template('hidden.html')
```
`Traceback (most recent call last)`
```py
Traceback (most recent call last):
File "A:\Programming\Python\Flask\xevion.dev\wsgi.py", line 1, in <module>
from app import app, db
File "A:\Programming\Python\Flask\xevion.dev\app\__init__.py", line 18, in <module>
from app import routes, models
File "A:\Programming\Python\Flask\xevion.dev\app\routes.py", line 143, in <module>
@require_role(roles=['Hidden'])
File "c:\users\xevion\appdata\local\programs\python\python36\lib\site-packages\flask\app.py", line 1251, in decorator
self.add_url_rule(rule, endpoint, f, **options)
File "c:\users\xevion\appdata\local\programs\python\python36\lib\site-packages\flask\app.py", line 67, in wrapper_func
return f(self, *args, **kwargs)
File "c:\users\xevion\appdata\local\programs\python\python36\lib\site-packages\flask\app.py", line 1222, in add_url_rule
'existing endpoint function: %s' % endpoint)
AssertionError: View function mapping is overwriting an existing endpoint function: run
```
**Edit:** I realize now that it doesn't work when there is more than one call to the wrapper function. How come?
|
2019/07/03
|
[
"https://Stackoverflow.com/questions/56878362",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6912830/"
] |
So to solve the problem that has been plaguing me for the last couple of hours, I've looked into how the `flask_login` module actually works, and after a bit of investigating, I found out that they use an import from `functools` called `wraps`.
I imported that, copied how `flask_login` implemented it essentially, and my app is now working.
```py
def require_role(roles=["User"]):
def wrap(func):
@wraps(func)
def decorated_view(*args, **kwargs):
if current_user.is_authenticated:
if current_user.has_roles(roles):
return func(*args, **kwargs)
return abort(401)
return decorated_view
return wrap
```
---
[`flask_login/utils.py#L264-L273`](https://github.com/maxcountryman/flask-login/blob/9dafc656f8771d3943f0b1fd96aff52f24223d17/flask_login/utils.py#L264-L273)
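To see the mechanism outside Flask: without `@wraps`, every decorated view's `__name__` becomes `run`, and Flask derives endpoint names from `__name__` - hence the "overwriting an existing endpoint function: run" assertion once a second decorated route is registered. A minimal sketch (no Flask needed):

```python
from functools import wraps

def broken(func):
    def run(*args, **kwargs):
        return func(*args, **kwargs)
    return run

def fixed(func):
    @wraps(func)  # copies __name__, __doc__, etc. from func onto the wrapper
    def decorated_view(*args, **kwargs):
        return func(*args, **kwargs)
    return decorated_view

@broken
def hidden(): pass

@broken
def hidden_history(): pass

@fixed
def hidden_help(): pass

print(hidden.__name__, hidden_history.__name__)  # run run  <- name collision
print(hidden_help.__name__)                      # hidden_help
```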
|
At first glance it looks like a conflict with your `run` function in the `require_role` decorator ([docs](http://flask.pocoo.org/docs/1.0/patterns/viewdecorators/)):
```
def require_role(roles=["User"]):
def wrap(func):
def wrapped_func(*args, **kwargs):
...
```
|
38,882,845
|
Anaconda for python 3.5 and python 2.7 seems to install just as a drop in folder inside my home folder on Ubuntu. Is there an installed version of Anaconda for Ubuntu 16? I'm not sure how to ask this but do I need python 3.5 that comes by default if I am also using Anaconda 3.5?
It seems like the best solution is docker these days. I mean I understand virtualenv and virtualenvwrapper. However, sometimes I try to indicate in my .bashrc that I want to use python 3.5 and yet I'll use the command mkvirtualenv and it will start installing the python 2.7 versions of python.
Should I choose either Anaconda or the version of python installed with my OS from python.org or is there an easy way to manage many different versions of Python?
Thanks,
Bruce
|
2016/08/10
|
[
"https://Stackoverflow.com/questions/38882845",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/784304/"
] |
My solution for Python 3.5 and Anaconda on Ubuntu 16.04 LTS (with the bonus of OpenCV 3) was to install Anaconda, then downgrade it to Python 3.5. You have to be sure to update anaconda afterwards - that's the bit that got me at first. The commands I gave were:
```
bash Anaconda3-4.3.1-Linux-x86_64.sh
conda install python=3.5
conda update anaconda
conda install -c menpo opencv3
```
This seems to work for me; tried it on a few other Ubuntu 16.04 machines as well, and it seemed to work well.
|
Use anaconda version `Anaconda3-4.2.0-Linux-x86_64.sh` from the anaconda installer archive. This comes with `python 3.5`. This worked for me.
|
35,528,078
|
I have a Python code like this,
```
pyg = 'ay'
original = raw_input('Enter a word:')
if len(original) > 0 and original.isalpha():
word = original.lower()
first = word[0]
new_word = word+first+pyg
new_word[1:]
print original
else:
print 'empty'
```
The output of variable "new\_word" should be "ythonpay", but now the new\_word contains "pythonpay". Could anyone let me know what is the mistake i am doing here.
|
2016/02/20
|
[
"https://Stackoverflow.com/questions/35528078",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1802617/"
] |
You will need your own implementation of `ToString` in your `Employee` class. You just need to override it and put your code of `PrintEmployee` in the new method.
Just to make it clear what I mean I give you a sample on how the override should look like:
```
public override string ToString()
{
return string.Format("ID:{0}{5}Full Name: {1} {2}{5}Social Security Number: {3}{5}Wage: {4}{5}", ID, FirstName, LastName, SocialNumber, HourWage, Environment.NewLine);
}
```
To give you a little background information: every class or struct in c# inherits from the `Object` class. Per default the `Object` class already has an implementation of `ToString`. This basic implementation would only print out namespace + class name of your Employee class. But with the above override you could simply write something like the following and get your wished output:
```
var employee = new Employee { Firstname = "Happy", Lastname = "Coder", ... }
Console.WriteLine(employee);
```
|
Here's a simple solution
```
private void PrintRegistry()
{
foreach(Employee employee in Accounts)
{
Console.WriteLine("\nID:{0}\nFull Name: {1} {2}\nSocial Security Number: {3}\nWage: {4}\n", employee.ID, employee.FirstName, employee.LastName, employee.SocialNumber, employee.HourWage);
}
}
```
Or if the "PrintEmployee" PROCEDURE (not function as it doesn't return a value) is in the "Employee" class.
```
private void PrintRegistry()
{
foreach(Employee employee in Accounts)
{
employee.PrintEmployee();
}
}
```
|
16,946,684
|
Minimal working example that shows this error:
```
from os import listdir, getcwd
from os.path import isfile, join, realpath, dirname
import csv
def gd(mypath, myfile):
# Obtain the number of columns in the data file
with open(myfile) as f:
reader = csv.reader(f, delimiter=' ', skipinitialspace=True)
for i in range(20):
row_20 = next(reader)
# Save number of clumns in 'num_cols'.
num_cols = len(row_20)
return num_cols
mypath = realpath(join(getcwd(), dirname(__file__)))
# Iterate through all files. Stores name of file in 'myfile'.
for myfile in listdir(mypath):
if isfile(join(mypath,myfile)) and (myfile.endswith('.dat')):
num_cols = gd(mypath, myfile)
print(num_cols)
```
I have a single file called 'data.dat' in that folder and `python` returns the error:
```
----> 9 with open(myfile) as f:
....
IOError: [Errno 2] No existe el archivo o el directorio: u'data.dat'
```
Which translates to *No file or directory: u'data.dat'*.
Why is that *u* being added at the beginning of the file name and how do I get the code to correctly parse the file name?
|
2013/06/05
|
[
"https://Stackoverflow.com/questions/16946684",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1391441/"
] |
The `u` just indicates that it is a unicode string and is not relevant to the problem.
The file isn't found because you aren't adding the `mypath` in front of the filename - try `with open(join(mypath, myfile)) as f:`
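A tiny sketch of the fix (the `u''` prefix in the error is just Python 2's unicode-string repr, not part of the filename):

```python
from os.path import join

mypath = "/home/user/project"   # hypothetical directory, for illustration
myfile = "data.dat"

# open(myfile) looks in the current working directory; joining first
# builds the full path the directory listing actually found the file under.
full = join(mypath, myfile)
print(full)  # /home/user/project/data.dat
```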
|
Your problem is that `myfile` is just a filename, not the result of `join(mypath,myfile)`.
|
57,523,861
|
I'm attempting to install pymc on MacOS 10.14.5 Mojave. However, there seems to be a problem with the gfortran module. The error message is minimally helpful.
I have attempted all the possible ways to install pymc as suggested here: <https://pymc-devs.github.io/pymc/INSTALL.html>
I first came across a problem with not recognising f951 in my gfortran compiler, but I solved that by adding the path to f951 explicitly to my `PATH`.
Now I come across the following after a bunch of warning messages in `pymc.flib.f`:
```
ld: unknown option: -idsym
error: Command "/usr/local/bin/gfortran -Wall -g -m64 -Wall -g -undefined dynamic_lookup -bundle build/temp.macosx-10.7-x86_64-3.7/cephes/i0.o build/temp.macosx-10.7-x86_64-3.7/cephes/c2f.o build/temp.macosx-10.7-x86_64-3.7/cephes/chbevl.o build/temp.macosx-10.7-x86_64-3.7/build/src.macosx-10.7-x86_64-3.7/pymc/flibmodule.o build/temp.macosx-10.7-x86_64-3.7/build/src.macosx-10.7-x86_64-3.7/build/src.macosx-10.7-x86_64-3.7/pymc/fortranobject.o build/temp.macosx-10.7-x86_64-3.7/pymc/flib.o build/temp.macosx-10.7-x86_64-3.7/pymc/histogram.o build/temp.macosx-10.7-x86_64-3.7/pymc/flib_blas.o build/temp.macosx-10.7-x86_64-3.7/pymc/blas_wrap.o build/temp.macosx-10.7-x86_64-3.7/pymc/math.o build/temp.macosx-10.7-x86_64-3.7/pymc/gibbsit.o build/temp.macosx-10.7-x86_64-3.7/build/src.macosx-10.7-x86_64-3.7/pymc/flib-f2pywrappers.o -L/Users/cameron/anaconda3/lib -L/usr/local/lib/gcc/x86_64-apple-darwin18.5.0/8.3.0 -L/usr/local/lib/gcc/x86_64-apple-darwin18.5.0/8.3.0/../../.. -L/usr/local/lib/gcc/x86_64-apple-darwin18.5.0/8.3.0/../../.. -lmkl_rt -lpthread -lgfortran -o build/lib.macosx-10.7-x86_64-3.7/pymc/flib.cpython-37m-darwin.so" failed with exit status 1
```
No online searches reveal what might cause the exit status 1 with gfortran.
|
2019/08/16
|
[
"https://Stackoverflow.com/questions/57523861",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/11935431/"
] |
I'm not familiar with mongoose, so I will take it for granted that `"user_count": user_count++` works.
For the rest, there are two things that won't work:
* the `$` operator in `"users.$.id": req.user.id,` is known as the positional operator, and that's not what you want; it's used to update a specific element in an array. Further reading here: <https://docs.mongodb.com/manual/reference/operator/update/positional/>
* the `upsert` is about inserting a full document if the `update` does not match anything in the collection. In your case you just want to push an element into the array, right?
In this case I guess something like this might work:
```
const result = await Example.updateOne(
{
_id: id,
},
{
$set: {
"user_count": user_count++
},
$addToSet: {
"users": {
"id": req.user.id,
"action": true
}
}
}
);
```
Please note that `$push` might also do the trick instead of `$addToSet`. But `$addToSet` takes care of keeping stuff unique in your array.
|
```
db.collection.findOneAndUpdate({_id: id}, {$set: {"user_count": user_count++},$addToSet: {"users": {"id": req.user.id,"action": true}}}, {returnOriginal:false}, (err, doc) => {
if (err) {
console.log("Something wrong when updating data!");
}
console.log(doc);
});
```
|
25,113,767
|
I am programming in Python, which involves implementing a shell in Linux. I am trying to run standard Unix commands by using os.execvp(). I need to keep asking the user for commands, so I have used an infinite while loop. However, the infinite while loop doesn't work. I have tried searching online, but there isn't much available for Python. Any help would be appreciated. Thanks
This is the code I have written so far:
```
import os
import shlex
def word_list(line):
"""Break the line into shell words."""
lexer = shlex.shlex(line, posix=True)
lexer.whitespace_split = False
lexer.wordchars += '#$+-,./?@^='
args = list(lexer)
return args
def main():
while(True):
line = input('psh>')
split_line = word_list(line)
if len(split_line) == 1:
print(os.execvp(split_line[0],[" "]))
else:
print(os.execvp(split_line[0],split_line))
if __name__ == "__main__":
main()
```
So when I run this and put in the input "ls" I get the output "HelloWorld.py" (which is correct) and "Process finished with exit code 0". However I don't get the output "psh>" which is waiting for the next command. No exceptions are thrown when I run this code.
|
2014/08/04
|
[
"https://Stackoverflow.com/questions/25113767",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3903472/"
] |
Your code does not work because it uses [`os.execvp`](https://docs.python.org/3/library/os.html#os.execvp). `os.execvp` **replaces the current process image completely with the executed program**: your running process **becomes** the `ls`, so nothing returns to your loop.
To execute a **subprocess** use the aptly named [`subprocess`](https://docs.python.org/3/library/subprocess.html) module.
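A hedged sketch of that route - the parent stays alive, so the `while` loop keeps prompting (the function name is illustrative, not from the original code):

```python
import shlex
import subprocess

def run_line(line):
    """Run one shell-like command in a child process and wait for it."""
    args = shlex.split(line)
    completed = subprocess.run(args)  # parent blocks here, then continues
    return completed.returncode

rc = run_line("echo hello")  # prints "hello"; returns 0 on success
```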
---
If you want to do it by hand anyway, as an **ill-advised** programming exercise, then you need to:
```
# warning, never do this at home!
pid = os.fork()
if not pid:
    os.execvp(args[0], args)  # in child: args is the split command line
else:
    os.waitpid(pid, 0)        # in parent: wait for the child to finish
```
`os.fork` returns twice: it gives the pid of the child in the parent process and zero in the child process.
|
If you want it to run like a shell, you are looking for `os.fork()`. Call this before you call `os.execvp()` and it will create a child process. `os.fork()` returns the process id: if it is 0 you are in the child process and can call `os.execvp()`; otherwise continue with the parent code, which keeps the while loop running. The original process can either wait for the child to complete with `os.wait()`, or continue without waiting back to the start of the while loop. The pseudo code on page 2 of this link should help: <https://www.cs.auckland.ac.nz/courses/compsci340s2c/assignments/A1/A1.pdf>
|