5,333,509
What is the best way to reverse the significant bits of an integer in Python and then get the resulting integer out of it? For example I have the numbers 1, 2, 5, 15 and I want to reverse the bits like so:

```
original        reversed
1  - 0001 - 1000 - 8
2  - 0010 - 0100 - 4
5  - 0101 - 1010 - 10
15 - 1111 - 1111 - 15
```

Given that these numbers are 32 bit integers, how should I do this in Python? The part I am unsure about is how to move individual bits around in Python and whether there is anything funny about using a 32-bit field as an integer after doing so. PS This isn't homework, I am just trying to program the solution to a logic puzzle.
2011/03/17
[ "https://Stackoverflow.com/questions/5333509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/330013/" ]
``` reversed_ = sum(1<<(numbits-1-i) for i in range(numbits) if original>>i&1) ```
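As a quick sanity check, the one-liner above reproduces the table from the question when `numbits` is 4. Here it is wrapped in a helper function purely for illustration (the name `reverse_bits` is made up):

```python
# The one-liner from the answer, wrapped in a hypothetical helper so it
# can be checked against the question's examples with numbits = 4.
def reverse_bits(original, numbits):
    return sum(1 << (numbits - 1 - i) for i in range(numbits) if original >> i & 1)

for original, expected in [(1, 8), (2, 4), (5, 10), (15, 15)]:
    assert reverse_bits(original, 4) == expected
```

With `numbits=32` the same expression answers the asker's actual 32-bit case, e.g. `reverse_bits(15, 32)` gives `0xF0000000`.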
NumPy index arrays provide concise notation for applying bit-reversal permutations. When bit-reversing many vectors you want to store the mapping anyway; the implementation below builds it by reversing each index's binary string representation.

```
import numpy as np

def bitrev_map(nbits):
    """create bit reversal mapping

    >>> bitrev_map(3)
    array([0, 4, 2, 6, 1, 5, 3, 7], dtype=uint16)
    >>> import numpy as np
    >>> np.arange(8)[bitrev_map(3)]
    array([0, 4, 2, 6, 1, 5, 3, 7])
    >>> (np.arange(8)[bitrev_map(3)])[bitrev_map(3)]
    array([0, 1, 2, 3, 4, 5, 6, 7])
    """
    assert isinstance(nbits, int) and nbits > 0, 'bit size must be positive integer'
    dtype = np.uint32 if nbits > 16 else np.uint16
    brmap = np.empty(2**nbits, dtype=dtype)
    int_, ifmt, fmtstr = int, int.__format__, ("0%db" % nbits)
    for i in range(2**nbits):
        brmap[i] = int_(ifmt(i, fmtstr)[::-1], base=2)
    return brmap
```
5,333,509
What is the best way to reverse the significant bits of an integer in Python and then get the resulting integer out of it? For example I have the numbers 1, 2, 5, 15 and I want to reverse the bits like so:

```
original        reversed
1  - 0001 - 1000 - 8
2  - 0010 - 0100 - 4
5  - 0101 - 1010 - 10
15 - 1111 - 1111 - 15
```

Given that these numbers are 32 bit integers, how should I do this in Python? The part I am unsure about is how to move individual bits around in Python and whether there is anything funny about using a 32-bit field as an integer after doing so. PS This isn't homework, I am just trying to program the solution to a logic puzzle.
2011/03/17
[ "https://Stackoverflow.com/questions/5333509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/330013/" ]
``` reversed_ = sum(1<<(numbits-1-i) for i in range(numbits) if original>>i&1) ```
You can do something like this, where the argument `l` defines how many bits to use:

```
def reverse(x, l):
    r = x & 1
    for i in range(1, l):
        r = r << 1 | (x >> i) & 1
    print(bin(r))
    return r
```
5,333,509
What is the best way to reverse the significant bits of an integer in Python and then get the resulting integer out of it? For example I have the numbers 1, 2, 5, 15 and I want to reverse the bits like so:

```
original        reversed
1  - 0001 - 1000 - 8
2  - 0010 - 0100 - 4
5  - 0101 - 1010 - 10
15 - 1111 - 1111 - 15
```

Given that these numbers are 32 bit integers, how should I do this in Python? The part I am unsure about is how to move individual bits around in Python and whether there is anything funny about using a 32-bit field as an integer after doing so. PS This isn't homework, I am just trying to program the solution to a logic puzzle.
2011/03/17
[ "https://Stackoverflow.com/questions/5333509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/330013/" ]
If you truly want 32 bits rather than 4, then here is a function to do what you want:

```
def revbits(x):
    return int(bin(x)[2:].zfill(32)[::-1], 2)
```

Basically I'm converting to binary, stripping off the '0b' in front, filling with zeros up to 32 chars, reversing, and converting back to an integer. It might be faster to do the actual bit work as in gnibbler's answer.
NumPy index arrays provide concise notation for applying bit-reversal permutations. When bit-reversing many vectors you want to store the mapping anyway; the implementation below builds it by reversing each index's binary string representation.

```
import numpy as np

def bitrev_map(nbits):
    """create bit reversal mapping

    >>> bitrev_map(3)
    array([0, 4, 2, 6, 1, 5, 3, 7], dtype=uint16)
    >>> import numpy as np
    >>> np.arange(8)[bitrev_map(3)]
    array([0, 4, 2, 6, 1, 5, 3, 7])
    >>> (np.arange(8)[bitrev_map(3)])[bitrev_map(3)]
    array([0, 1, 2, 3, 4, 5, 6, 7])
    """
    assert isinstance(nbits, int) and nbits > 0, 'bit size must be positive integer'
    dtype = np.uint32 if nbits > 16 else np.uint16
    brmap = np.empty(2**nbits, dtype=dtype)
    int_, ifmt, fmtstr = int, int.__format__, ("0%db" % nbits)
    for i in range(2**nbits):
        brmap[i] = int_(ifmt(i, fmtstr)[::-1], base=2)
    return brmap
```
5,333,509
What is the best way to reverse the significant bits of an integer in Python and then get the resulting integer out of it? For example I have the numbers 1, 2, 5, 15 and I want to reverse the bits like so:

```
original        reversed
1  - 0001 - 1000 - 8
2  - 0010 - 0100 - 4
5  - 0101 - 1010 - 10
15 - 1111 - 1111 - 15
```

Given that these numbers are 32 bit integers, how should I do this in Python? The part I am unsure about is how to move individual bits around in Python and whether there is anything funny about using a 32-bit field as an integer after doing so. PS This isn't homework, I am just trying to program the solution to a logic puzzle.
2011/03/17
[ "https://Stackoverflow.com/questions/5333509", "https://Stackoverflow.com", "https://Stackoverflow.com/users/330013/" ]
[This](http://wiki.python.org/moin/BitwiseOperators) is the first link I found googling for "python bitwise" and it has a pretty good summary of how to do bit-twiddling in Python. If you don't care too much about speed, the most straightforward solution is to go through each bit-place of the original number and, if it is set, set the corresponding bit of your return value.

```
def reversed(x, num_bits):
    answer = 0
    for i in range(num_bits):                    # for each bit number
        if (x & (1 << i)):                       # if it matches that bit
            answer |= (1 << (num_bits - 1 - i))  # set the "opposite" bit in answer
    return answer
```

I'm sure you could do this with a comprehension, too. If you really needed speed, you'd probably do it 8 bits at a time with a 256-value precomputed lookup table.
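The 256-entry lookup table mentioned at the end might look like the following sketch. The names `REV8` and `revbits32` are made up here; the idea is simply to reverse a 32-bit value one precomputed byte at a time:

```python
# Precompute the bit-reversal of every possible byte once...
REV8 = [int('{:08b}'.format(b)[::-1], 2) for b in range(256)]

def revbits32(x):
    # ...then reverse a 32-bit integer by bit-reversing each of its four
    # bytes and swapping the bytes' positions.
    return (REV8[x & 0xFF] << 24 |
            REV8[(x >> 8) & 0xFF] << 16 |
            REV8[(x >> 16) & 0xFF] << 8 |
            REV8[(x >> 24) & 0xFF])

assert revbits32(1) == 0x80000000
assert revbits32(0xF) == 0xF0000000
```

Since bit reversal is an involution, `revbits32(revbits32(x)) == x` for any 32-bit `x`, which makes a handy self-test.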
You can do something like this, where the argument `l` defines how many bits to use:

```
def reverse(x, l):
    r = x & 1
    for i in range(1, l):
        r = r << 1 | (x >> i) & 1
    print(bin(r))
    return r
```
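For instance, with `l=4` the function above reproduces the mapping from the question. Here is a self-contained sketch of the same loop, with the debug `print` left out:

```python
def reverse(x, l):
    # Build the result bit by bit, least-significant bit of x first.
    r = x & 1
    for i in range(1, l):
        r = r << 1 | (x >> i) & 1
    return r

assert [reverse(n, 4) for n in (1, 2, 5, 15)] == [8, 4, 10, 15]
```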
28,055,565
I have an python `dict` whose keys and values are strings, integers and other dicts and tuples ([json does not support those](https://stackoverflow.com/q/7001606/850781)). I want to save it to a text file and then read it from the file. **Basically, I want a [`read`](http://www.lispworks.com/documentation/HyperSpec/Body/f_rd_rd.htm) counterpart to the built-in [`print`](http://www.lispworks.com/documentation/HyperSpec/Body/f_wr_pr.htm)** (like in Lisp). Constraints: 1. the file must be human readable (thus [pickle](https://docs.python.org/2/library/pickle.html) is out) 2. no need to detect circularities. Is there anything better than [json](https://docs.python.org/2/library/json.html)?
2015/01/20
[ "https://Stackoverflow.com/questions/28055565", "https://Stackoverflow.com", "https://Stackoverflow.com/users/850781/" ]
You could use `repr()` on the `dict`, then read it back in and parse it with `ast.literal_eval()`. It's as human readable as Python itself is. Example:

```
In [1]: import ast

In [2]: x = {}

In [3]: x['string key'] = 'string value'

In [4]: x[(42, 56)] = {'dict': 'value'}

In [5]: x[13] = ('tuple', 'value')

In [6]: repr(x)
Out[6]: "{(42, 56): {'dict': 'value'}, 'string key': 'string value', 13: ('tuple', 'value')}"

In [7]: with open('/tmp/test.py', 'w') as f:
   ...:     f.write(repr(x))

In [8]: with open('/tmp/test.py', 'r') as f:
   ...:     y = ast.literal_eval(f.read())

In [9]: y
Out[9]: {13: ('tuple', 'value'), 'string key': 'string value', (42, 56): {'dict': 'value'}}

In [10]: x == y
Out[10]: True
```

You may also consider using the `pprint` module for even friendlier formatted output.
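A quick sketch of the `pprint` variant: `pprint.pformat` still emits valid Python literal syntax, so its output remains readable back with `ast.literal_eval`:

```python
import ast
import pprint

x = {(42, 56): {'dict': 'value'}, 'string key': 'string value', 13: ('tuple', 'value')}

# pformat wraps and indents large structures, but the result is still
# literal syntax that literal_eval can parse back into an equal object.
text = pprint.pformat(x)
assert ast.literal_eval(text) == x
```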
Honestly, json is your answer [EDIT: so long as the keys are strings, didn't see the part about dicts as keys], and that's why it's taken over in the last 5 years. What legibility issues does json have? There are tons of json indenters, pretty-printer utilities and browser plug-ins [1][2] - use them and it certainly is human-readable. json(/simplejson) is also extremely performant (C implementations), and it scales, and can be processed serially, which cannot be said for the AST approach (why be eccentric and break scalability?). This also seems to be the consensus from 100% of people answering you here... everyone can't be wrong ;-) XML is dead, good riddance.

1. [How can I pretty-print JSON?](https://stackoverflow.com/questions/352098/how-can-i-pretty-print-json) and countless others
2. [Browser JSON Plugins](https://stackoverflow.com/questions/2547769/browser-json-plugins)
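For the record, no plug-ins are needed for legibility; the stdlib can pretty-print on its own. This sketch assumes, per the edit above, that all keys are strings:

```python
import json

data = {'name': 'example', 'values': [1, 2, 3], 'nested': {'ok': True}}

# indent and sort_keys make the on-disk file human-readable and diff-friendly.
text = json.dumps(data, indent=2, sort_keys=True)
assert json.loads(text) == data
assert '\n' in text  # the output really is spread over multiple lines
```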
73,067,801
I'm using databricks, and I have a repo in which I have a basic python module within which I define a class. I'm able to import and access the class and its methods from the databricks notebook. One of the methods within the class within the module looks like this (simplified)

```py
def read_raw_json(self):
    self.df = spark.read.format("json").load(f"{self.base_savepath}/{self.resource}/{self.resource}*.json")
```

When I execute this particular method within the databricks notebook it gives me a `NameError` that 'spark' is not defined. The databricks runtime instantiates with a spark session stored in a variable called "spark". I assumed any methods executed in that runtime would inherit from the parent scope. Anyone know why this isn't the case?

EDIT: I was able to get it to work by passing it the spark variable from within the notebook as an object to my class instantiation. But I don't want to call this an answer yet, because I'm not sure why I needed to.
2022/07/21
[ "https://Stackoverflow.com/questions/73067801", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9485834/" ]
The INFILE statement is for reading a file as raw TEXT. If you have a SAS dataset then you can just SET the dataset to read it into a data step. So the equivalent for your attempted method would be something like:

```
data _null_;
  set "C:\myfiles\sample.sas7bdat" end=eof;
  if eof then put "Observations read=====:" _n_;
run;
```
One cool thing about sas7bdat files is the amount of metadata stored with them. The row count of that file is already known by SAS as an attribute. You can use `proc contents` to read it; `Observations` is the number of rows in the table.

```
libname files "C:\myfiles";
proc contents data=files.sample;
run;
```

A more advanced way is to open the file directly using macro functions.

```
%let dsid = %sysfunc(open(files.sample));   /* Open the file */
%let nobs = %sysfunc(attrn(&dsid, nlobs));  /* Get the number of observations */
%let rc = %sysfunc(close(&dsid));           /* Close the file */
%put Total observations: &nobs;
```
57,009,331
I'm currently coding a text based game in python in which the narrative will start differently depending on answers to certain questions. The first question is simple, a name. I can't seem to get the input to display in the correct text option after the prompt. I tried using "if name is True" and "if name is str" but they both skip to the else option, instead of displaying the correct option after input.

```
while True:
    try:
        # This will query for first user input, Name.
        name = str(input("Please enter your name: "))
    except ValueError:
        print("Sorry, I didn't understand that.")
        # No valid input will restart loop.
        continue
    else:
        break

if name is str:
    print("Ah, so " + name + " is your name? Excellent.")
else:
    print("No name? That's fine, I suppose.")
```

So if I enter John as my name, I'd expect the output to be "Ah, so John is your name? Excellent." But instead I enter John and it outputs:

```
Please enter your name: John
No name? That's fine, I suppose.
```

Any help would be appreciated. Thanks.
2019/07/12
[ "https://Stackoverflow.com/questions/57009331", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11389547/" ]
Use `isinstance` instead of `is`. Also, you do not need `str(input())`, because `input` already returns a `str`.

```
while True:
    try:
        # This will query for first user input, Name.
        name = input("Please enter your name: ")
    except ValueError:
        print("Sorry, I didn't understand that.")
        # No valid input will restart loop.
        continue
    else:
        break

if isinstance(name, str):
    print("Ah, so " + name + " is your name? Excellent.")
else:
    print("No name? That's fine, I suppose.")
```

The result is:

```
Please enter your name: John
Ah, so John is your name? Excellent.
```

But you should use `str.isalpha()` if you want to check whether `name` is a real name consisting only of letters:

```
if name.isalpha():
    print("Ah, so " + name + " is your name? Excellent.")
else:
    print("No name? That's fine, I suppose.")
```
The exception handling around `input` is not required, as the `input` function always returns a string; if the user does not enter a value, an empty string is returned. Therefore your code can be simplified to:

```
name = input("Please enter your name: ")

# In python an empty string is considered `False`, allowing
# you to use an if statement like this.
if name:
    print("Ah, so " + name + " is your name? Excellent.")
else:
    print("No name? That's fine, I suppose.")
```

Further, regarding usage of `is`: `is` compares identity, as in whether two variables refer to the same object. Its most common use is to test whether a variable is `None`, e.g. `if name is None` (`None` is always the same object). As mentioned, you can use `type(name)` to obtain the type of the variable *name*, but this is discouraged in favour of the builtin check `isinstance`. `==` on the other hand compares equality, as in `if name == "Dave"`. `isinstance` does more than just check for a specific type; it also handles inherited types.
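A compact illustration of the `is` / `==` / `isinstance` distinctions described above:

```python
name = "Dave"
assert name == "Dave"         # equality: compares values
assert name is not str        # name is an *instance* of str, not the str type object
assert isinstance(name, str)  # the idiomatic type check

missing = None
assert missing is None        # identity: the common, correct use of `is`
```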
57,009,331
I'm currently coding a text based game in python in which the narrative will start differently depending on answers to certain questions. The first question is simple, a name. I can't seem to get the input to display in the correct text option after the prompt. I tried using "if name is True" and "if name is str" but they both skip to the else option, instead of displaying the correct option after input.

```
while True:
    try:
        # This will query for first user input, Name.
        name = str(input("Please enter your name: "))
    except ValueError:
        print("Sorry, I didn't understand that.")
        # No valid input will restart loop.
        continue
    else:
        break

if name is str:
    print("Ah, so " + name + " is your name? Excellent.")
else:
    print("No name? That's fine, I suppose.")
```

So if I enter John as my name, I'd expect the output to be "Ah, so John is your name? Excellent." But instead I enter John and it outputs:

```
Please enter your name: John
No name? That's fine, I suppose.
```

Any help would be appreciated. Thanks.
2019/07/12
[ "https://Stackoverflow.com/questions/57009331", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11389547/" ]
Use `isinstance` instead of `is`. Also, you do not need `str(input())`, because `input` already returns a `str`.

```
while True:
    try:
        # This will query for first user input, Name.
        name = input("Please enter your name: ")
    except ValueError:
        print("Sorry, I didn't understand that.")
        # No valid input will restart loop.
        continue
    else:
        break

if isinstance(name, str):
    print("Ah, so " + name + " is your name? Excellent.")
else:
    print("No name? That's fine, I suppose.")
```

The result is:

```
Please enter your name: John
Ah, so John is your name? Excellent.
```

But you should use `str.isalpha()` if you want to check whether `name` is a real name consisting only of letters:

```
if name.isalpha():
    print("Ah, so " + name + " is your name? Excellent.")
else:
    print("No name? That's fine, I suppose.")
```
The issue is you're checking if name is str, not if type(name) is str. ``` if type(name) is str: ```
55,327,900
I'm currently developing my first python program, a booking system using tkinter. I have a customer account creation screen that uses a number of Entry boxes. When the Entry box is clicked the following key bind is called to clear the entry box of its instruction (ie. "enter name")

```
def entry_click(event):
    if "enter" in event.widget.get():
        event.widget.delete(0, "end")
        event.widget.insert(0, "")
        event.widget.configure(fg="white")

#example Entry fields
new_name = Entry(root)
new_name.insert(0, "enter name")
new_name.bind('<FocusIn>', entry_click)
new_name.bind('<FocusOut>', entry_focusout)

new_email = Entry(root)
new_email.insert(0, "enter email")
new_email.bind('<FocusIn>', entry_click)
new_email.bind('<FocusOut>', entry_focusout)
```

In a similar vein I'm looking for a way to create a universal event where the appropriate text for that unique entry field is returned to its initial state (ie. "enter first name" or "enter email") if the box is empty when clicked away from.

```
def entry_focusout():
    if not event.widget.get():
        event.widget.insert(fg="grey")
        event.widget.insert(0, #[appropriate text here])
```

How would I be able to do this? Many thanks.
2019/03/24
[ "https://Stackoverflow.com/questions/55327900", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11240330/" ]
Just add an arg to your `entry_focusout` event and bind to the `Entry` widgets with a lambda function.

```
from tkinter import *

root = Tk()

def entry_click(event):
    if event.widget["foreground"] == "grey":
        event.widget.delete(0, "end")
        event.widget.insert(0, "")
        event.widget.configure(fg="black")

def entry_focusout(event, msg):
    if not event.widget.get():
        event.widget.configure(fg="grey")  #you had a typo here
        event.widget.insert(0, msg)

new_name = Entry(root, fg="grey")
new_name.insert(0, "enter name")
new_name.bind('<FocusIn>', entry_click)
new_name.bind('<FocusOut>', lambda e: entry_focusout(e, "enter name"))

new_email = Entry(root, fg="grey")
new_email.insert(0, "enter email")
new_email.bind('<FocusIn>', entry_click)
new_email.bind('<FocusOut>', lambda e: entry_focusout(e, "enter email"))

new_name.pack()
new_email.pack()
root.mainloop()
```
> **Question**: Return empty `tk.Entry` box to previous state when clicked away

The following is an `OOP` **universal** solution. Thanks to @Henry Yik for the `if event.widget["foreground"] == "grey":` part.

```
import tkinter as tk

class EntryInstruction(tk.Entry):
    def __init__(self, parent, instruction=None):
        super().__init__(parent)
        self.instruction = instruction
        self.focus_out(None)
        self.bind('<FocusIn>', self.focus_in)
        self.bind('<FocusOut>', self.focus_out)

    def focus_in(self, event):
        if self["foreground"] == "grey":
            self.delete(0, "end")
            self.configure(fg="black")

    def focus_out(self, event):
        if not self.get():
            self.configure(fg="grey")
            self.insert(0, self.instruction)
```

> **Usage**:

```
class App(tk.Tk):
    def __init__(self):
        super().__init__()
        entry = EntryInstruction(self, '<enter name>')
        entry.grid(row=0, column=0)
        entry = EntryInstruction(self, '<enter email>')
        entry.grid(row=1, column=0)

if __name__ == "__main__":
    App().mainloop()
```

***Tested with Python: 3.5***
6,003,932
```
import cgi

def fill():
    s = """\
<html><body>
<form method="get" action="./show">
<p>Type a word: <input type="text" name="word">
<input type="submit" value="Submit"</p>
</form></body></html>
"""
    return s

# Receive the Request object
def show(req):
    # The getfirst() method returns the value of the first field with the
    # name passed as the method argument
    word = req.form.getfirst('word', '')
    print "Creating a text file with the write() method."
    text_file = open("/var/www/cgi-bin/input.txt", "w")
    text_file.write(word)
    text_file.close()
    # Escape the user input to avoid script injection attacks
    #word = cgi.escape(word)
    test(0)
    '''Input triggers the application to start its process'''
    simplified_file = open("/var/www/cgi-bin/output.txt", "r").read()
    s = """\
<html><body>
<p>The submitted word was "%s"</p>
<p><a href="./fill">Submit another word!</a></p>
</body></html>
"""
    return s % simplified_file

def test(flag):
    print flag
    while flag!=1:
        x=1
    return
```

This mod\_python program's `fill` method sends the text to the `show` method, which writes it to the input.txt file used by my application. While my application is running I don't want the rest of the statements to execute, so I call a `test` function containing a while loop that spins until the flag is set to 1. If it is set to 1, it should break the while loop and continue executing the rest of the statements. I have set up my application to pass the `test` flag variable as 1. According to my logic it should break the loop, return to the `show` function and continue executing the rest, but that's not happening; the page just keeps loading. Please help me through this. Thank you.. :)
2011/05/14
[ "https://Stackoverflow.com/questions/6003932", "https://Stackoverflow.com", "https://Stackoverflow.com/users/640666/" ]
```
while flag!=1:
    x=1
```

This loop won't ever finish. When is `flag` ever going to change so that `flag != 1` is False? Remember, `flag` is a *local* variable so changing it anywhere else isn't going to have an effect -- especially since no other code is going to have the opportunity to run while that loop is still running. It's really not very clear what you're trying to achieve here. You shouldn't be trying to delay code with infinite loops. I'd re-think your architecture carefully.
It is not the most elegant way to do this, but if you need to change the value of `flag` outside your method, you should use it as a `global` variable:

```
def test():
    global flag  # use this everywhere you're using flag.
    print flag
    while flag!=1:
        x=1
    return
```

But to make a waiting method, have a look at [python Event() objects](http://docs.python.org/library/threading.html#event-objects); they have a wait() method that blocks until the Event's flag is set.
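A minimal sketch of the `Event`-based approach suggested above (Python 3 syntax here, unlike the Python 2 snippets in the thread): the waiting code blocks in `wait()` instead of spinning on a flag, and resumes as soon as another thread calls `set()`:

```python
import threading

done = threading.Event()
results = []

def worker():
    # stands in for the external application finishing its work
    results.append('work finished')
    done.set()  # wakes up anyone blocked in done.wait()

t = threading.Thread(target=worker)
t.start()
done.wait()  # blocks here instead of busy-looping on a flag variable
t.join()
assert results == ['work finished']
```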
59,282,950
We use python with pyspark api in order to run simple code on spark cluster.

```
from pyspark import SparkContext, SparkConf

conf = SparkConf().setAppName('appName').setMaster('spark://clusterip:7077')
sc = SparkContext(conf=conf)
rdd = sc.parallelize([1, 2, 3, 4])
rdd.map(lambda x: x**2).collect()
```

It works when we setup a spark cluster locally and with dockers. We would now like to start an emr cluster and test the same code. And seems that pyspark can't connect to the spark cluster on emr. We opened ports 8080 and 7077 from our machine to the spark master. We are getting past the firewall and just seems that nothing is listening on port 7077 and we get connection refused. We found [this](https://medium.com/@kulasangar/creating-a-spark-job-using-pyspark-and-executing-it-in-aws-emr-70dba5e98a75) explaining how to serve a job using the cli but we need to run it directly from pyspark api on the driver. What are we missing here? How can one start an emr cluster and actually run pyspark code locally on python using this cluster?

edit: running this code from the master itself works. As opposed to what was suggested, when connecting to the master using ssh, and running python from the terminal, the very same code (with proper adjustments for the master ip, given it's the same machine) works. No issues no problems. How does this make sense given the documentation that clearly states otherwise?
2019/12/11
[ "https://Stackoverflow.com/questions/59282950", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4406595/" ]
You try to run pyspark (which calls spark-submit) from a remote computer outside the spark cluster. This is technically possible, but it is not the intended way of deploying applications. In yarn mode, it will make your computer participate in the spark protocol as a client. Thus it would require opening several ports and installing exactly the same spark jars as on spark aws emr. From the [spark submit doc](https://spark.apache.org/docs/latest/submitting-applications.html#launching-applications-with-spark-submit):

```
A common deployment strategy is to submit your application from a gateway machine
that is physically co-located with your worker machines
(e.g. Master node in a standalone EC2 cluster)
```

A simple deploy strategy is:

* sync code to the master node via rsync, scp or git

```
cd ~/projects/spark-jobs  # on local machine
EMR_MASTER_IP='255.16.17.13'
TARGET_DIR=spark_jobs
rsync -avze "ssh -i ~/dataScienceKey.pem" --rsync-path="mkdir -p ${TARGET_DIR} && rsync" --delete ./ hadoop@${EMR_MASTER_IP}:${TARGET_DIR}
```

* ssh to the master node

```
ssh -i ~/dataScienceKey.pem hadoop@${EMR_HOST}
```

* run `spark-submit` on the master node

```
cd spark_jobs
spark-submit --master yarn --deploy-mode cluster my-job.py
```

```
# my-job.py
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("my-job-py").getOrCreate()
sc = spark.sparkContext
rdd = sc.parallelize([1, 2, 3, 4])
res = rdd.map(lambda x: x**2).collect()
print(res)
```

There is a way to submit the job directly to spark emr without syncing. Spark EMR runs [Apache Livy](http://livy.apache.org/) on port `8998` by default. It is a rest webservice which allows submitting jobs via a rest api. You can pass the same `spark-submit` parameters with a `curl` script from your machine.
See the [doc](https://aws.amazon.com/de/blogs/big-data/orchestrate-apache-spark-applications-using-aws-step-functions-and-apache-livy/). For interactive development we have also configured locally running `jupyter notebooks` which automatically submit cell runs to Livy. This is done via the [spark-magic project](https://github.com/jupyter-incubator/sparkmagic).
According to this [Amazon Doc](https://aws.amazon.com/premiumsupport/knowledge-center/emr-submit-spark-job-remote-cluster/?nc1=h_ls), you can't do that:

> *Common errors*
>
> **Standalone mode**
>
> Amazon EMR doesn't support standalone mode for Spark. It's not possible to submit a Spark application to a remote Amazon EMR cluster with a command like this:
>
> `SparkConf conf = new SparkConf().setMaster("spark://master_url:7077").setAppName("WordCount");`
>
> Instead, set up your local machine as explained earlier in this article. Then, submit the application using the spark-submit command.

You can follow the above linked resource to configure your local machine in order to submit spark jobs to the EMR cluster. Or, more simply, use the ssh key you specified when you created your cluster to connect to the master node and submit spark jobs from there:

```
ssh -i ~/path/ssh_key hadoop@<master_ip_address>
```
1,247,133
Working in python I want to extract a dataset with the following structure: Each item has a unique ID and the unique ID of its parent. Each parent can have one or more children, each of which can have one or more children of its own, to n levels, i.e. the data has an upturned tree-like structure. While it has the potential to go on for infinity, in reality a depth of 10 levels is unusual, as is having more than 10 siblings at each level. For each item in the dataset I want to show all items for which this item is their parent... and so on until it reaches the bottom of the dataset. Doing the first two levels is easy, but I'm unsure how to make it efficiently recurse down through the levels. Any pointers very much appreciated.
2009/08/07
[ "https://Stackoverflow.com/questions/1247133", "https://Stackoverflow.com", "https://Stackoverflow.com/users/408134/" ]
You should probably use a `defaultdict` for this:

```
from collections import defaultdict

itemdict = defaultdict(list)
for id, parent_id in itemlist:
    itemdict[parent_id].append(id)
```

Then you can recursively print it (with indentation) like:

```
def printitem(id, depth=0):
    print ' '*depth, id
    for child in itemdict[id]:
        printitem(child, depth+1)
```
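The two snippets above, joined into a runnable sketch with a made-up `(id, parent_id)` dataset (`None` marks the root). This variant collects descendants into a list instead of printing, but walks the dict the same way:

```python
from collections import defaultdict

# hypothetical dataset: (id, parent_id) pairs, None marking the root
itemlist = [(1, None), (2, 1), (3, 1), (4, 2), (5, 4)]

itemdict = defaultdict(list)
for id_, parent_id in itemlist:
    itemdict[parent_id].append(id_)

def descendants(id_):
    out = []
    for child in itemdict[id_]:
        out.append(child)
        out.extend(descendants(child))
    return out

assert descendants(1) == [2, 4, 5, 3]  # depth-first order
```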
Are you saying that each item only maintains a reference to its parent? If so, then how about:

```
def getChildren(item):
    children = []
    for possibleChild in allItems:
        if possibleChild.parent == item:
            children.append(possibleChild)               # the direct child itself
            children.extend(getChildren(possibleChild))  # plus its descendants
    return children
```

This returns a list that contains all items who are in some way descended from item.
1,247,133
Working in python I want to extract a dataset with the following structure: Each item has a unique ID and the unique ID of its parent. Each parent can have one or more children, each of which can have one or more children of its own, to n levels, i.e. the data has an upturned tree-like structure. While it has the potential to go on for infinity, in reality a depth of 10 levels is unusual, as is having more than 10 siblings at each level. For each item in the dataset I want to show all items for which this item is their parent... and so on until it reaches the bottom of the dataset. Doing the first two levels is easy, but I'm unsure how to make it efficiently recurse down through the levels. Any pointers very much appreciated.
2009/08/07
[ "https://Stackoverflow.com/questions/1247133", "https://Stackoverflow.com", "https://Stackoverflow.com/users/408134/" ]
Are you saying that each item only maintains a reference to its parent? If so, then how about:

```
def getChildren(item):
    children = []
    for possibleChild in allItems:
        if possibleChild.parent == item:
            children.append(possibleChild)               # the direct child itself
            children.extend(getChildren(possibleChild))  # plus its descendants
    return children
```

This returns a list that contains all items who are in some way descended from item.
How about something like this,

```
#!/usr/bin/python

tree = {
    0: (None, [1, 2, 3]),
    1: (0, [4]),
    2: (0, []),
    3: (0, [5, 6]),
    4: (1, [7]),
    5: (3, []),
    6: (3, []),
    7: (4, []),
}

def find_children(tree, id):
    print "node:", id, tree[id]
    for child in tree[id][1]:
        find_children(tree, child)

if __name__ == "__main__":
    import sys
    find_children(tree, int(sys.argv[1]))
```

```
$ ./tree.py 3
node: 3 (0, [5, 6])
node: 5 (3, [])
node: 6 (3, [])
```

It's also worth noting that python has a pretty low default recursion limit, 1000 I think. In the event that your tree actually gets pretty deep you'll hit this very quickly. You can crank this up with,

```
sys.setrecursionlimit(100000)
```

and check it with,

```
sys.getrecursionlimit()
```
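If the tree might actually exceed the recursion limit, an explicit stack sidesteps `sys.setrecursionlimit` entirely. A sketch using the same tree layout as above (Python 3 here; `find_children_iter` is a made-up name):

```python
tree = {
    0: (None, [1, 2, 3]),
    1: (0, [4]),
    2: (0, []),
    3: (0, [5, 6]),
    4: (1, [7]),
    5: (3, []),
    6: (3, []),
    7: (4, []),
}

def find_children_iter(tree, node):
    visited = []
    stack = [node]
    while stack:
        current = stack.pop()
        visited.append(current)
        # push children reversed so they come off the stack in listed order
        stack.extend(reversed(tree[current][1]))
    return visited

assert find_children_iter(tree, 3) == [3, 5, 6]
assert find_children_iter(tree, 0) == [0, 1, 4, 7, 2, 3, 5, 6]
```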
1,247,133
Working in python I want to extract a dataset with the following structure: Each item has a unique ID and the unique ID of its parent. Each parent can have one or more children, each of which can have one or more children of its own, to n levels, i.e. the data has an upturned tree-like structure. While it has the potential to go on for infinity, in reality a depth of 10 levels is unusual, as is having more than 10 siblings at each level. For each item in the dataset I want to show all items for which this item is their parent... and so on until it reaches the bottom of the dataset. Doing the first two levels is easy, but I'm unsure how to make it efficiently recurse down through the levels. Any pointers very much appreciated.
2009/08/07
[ "https://Stackoverflow.com/questions/1247133", "https://Stackoverflow.com", "https://Stackoverflow.com/users/408134/" ]
You should probably use a `defaultdict` for this:

```
from collections import defaultdict

itemdict = defaultdict(list)
for id, parent_id in itemlist:
    itemdict[parent_id].append(id)
```

Then you can recursively print it (with indentation) like:

```
def printitem(id, depth=0):
    print ' '*depth, id
    for child in itemdict[id]:
        printitem(child, depth+1)
```
If you want to keep the structure of your dataset, this will produce a list of the format [id, [children of id], id2, [children of id2]]:

```
def children(id):
    return [id] + [children(x.id) for x in filter(lambda x: x.parent == id, items)]
```
1,247,133
Working in python I want to extract a dataset with the following structure: Each item has a unique ID and the unique ID of its parent. Each parent can have one or more children, each of which can have one or more children of its own, to n levels, i.e. the data has an upturned tree-like structure. While it has the potential to go on for infinity, in reality a depth of 10 levels is unusual, as is having more than 10 siblings at each level. For each item in the dataset I want to show all items for which this item is their parent... and so on until it reaches the bottom of the dataset. Doing the first two levels is easy, but I'm unsure how to make it efficiently recurse down through the levels. Any pointers very much appreciated.
2009/08/07
[ "https://Stackoverflow.com/questions/1247133", "https://Stackoverflow.com", "https://Stackoverflow.com/users/408134/" ]
If you want to keep the structure of your dataset, this will produce a list of the format [id, [children of id], id2, [children of id2]] ``` def children(id): return [id]+[children(x.id) for x in filter(lambda x:x.parent == id, items)] ```
How about something like this, ``` #!/usr/bin/python tree = { 0:(None, [1,2,3]), 1:(0, [4]), 2:(0, []), 3:(0, [5,6]), 4:(1, [7]), 5:(3, []), 6:(3, []), 7:(4, []), } def find_children( tree, id ): print "node:", id, tree[id] for child in tree[id][1]: find_children( tree, child ) if __name__=="__main__": import sys find_children( tree, int(sys.argv[1]) ) $ ./tree.py 3 node: 3 (0, [5, 6]) node: 5 (3, []) node: 6 (3, []) ``` It's also worth noting that python has a pretty low default recursion limit, 1000 I think. In the event that your tree actually gets pretty deep you'll hit this very quickly. You can crank this up with, ``` sys.setrecursionlimit(100000) ``` and check it with, ``` sys.getrecursionlimit() ```
1,247,133
Working in Python I want to extract a dataset with the following structure: each item has a unique ID and the unique ID of its parent. Each parent can have one or more children, each of which can have one or more children of its own, to n levels, i.e. the data has an upturned tree-like structure. While it has the potential to go on for infinity, in reality a depth of 10 levels is unusual, as is having more than 10 siblings at each level. For each item in the dataset I want to show all items for which this item is their parent... and so on until it reaches the bottom of the dataset. Doing the first two levels is easy, but I'm unsure how to make it efficiently recurse down through the levels. Any pointers very much appreciated.
2009/08/07
[ "https://Stackoverflow.com/questions/1247133", "https://Stackoverflow.com", "https://Stackoverflow.com/users/408134/" ]
You should probably use a defaultdict for this: ``` from collections import defaultdict itemdict = defaultdict(list) for id, parent_id in itemlist: itemdict[parent_id].append(id) ``` then you can recursively print it (with indentation) like ``` def printitem(id, depth=0): print ' '*depth, id for child in itemdict[id]: printitem(child, depth+1) ```
How about something like this, ``` #!/usr/bin/python tree = { 0:(None, [1,2,3]), 1:(0, [4]), 2:(0, []), 3:(0, [5,6]), 4:(1, [7]), 5:(3, []), 6:(3, []), 7:(4, []), } def find_children( tree, id ): print "node:", id, tree[id] for child in tree[id][1]: find_children( tree, child ) if __name__=="__main__": import sys find_children( tree, int(sys.argv[1]) ) $ ./tree.py 3 node: 3 (0, [5, 6]) node: 5 (3, []) node: 6 (3, []) ``` It's also worth noting that python has a pretty low default recursion limit, 1000 I think. In the event that your tree actually gets pretty deep you'll hit this very quickly. You can crank this up with, ``` sys.setrecursionlimit(100000) ``` and check it with, ``` sys.getrecursionlimit() ```
7,394,301
I have a class "karte" and I want to know if there is a way of dynamic name creation for my new objects. Normal object creation would be > > karta=karte() > > > but I am curious about something like this > > karta[i]=karte() > > > or something like that, where i would be the index of a for loop. And at the end I would call the objects like this ``` karta1.boja karta2.boja karta3.boja ``` How can I achieve that? I'm new to Python, thanks.
2011/09/12
[ "https://Stackoverflow.com/questions/7394301", "https://Stackoverflow.com", "https://Stackoverflow.com/users/666160/" ]
You can create a list of objects like this: ``` karta = [] for i in range(10): karta.append(karte()) ``` Or using a list comprehension: ``` karta = [karte() for i in range(10)] ``` Now you can access the objects like this: `karta[i]`. To accomplish your last example, you have to modify the `globals()` dictionary. I do not endorse this at all, but here is how to do it: ``` g = globals() for i in range(10): g["karte" + str(i)] = karte() ``` This is not very pythonic though, you should just use a list.
Unless you have a real need to keep the objects out of a list and have names like karta1, karta2, etc., I would do as you suggest and use a list with a loop to initialize: ``` karta = [] for i in some_range: karta.append(karte()) ```
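As both answers point out, the Pythonic substitute for dynamically named variables is an indexed container. A small sketch (the `karte` class and its `boja` attribute are stand-ins for the ones in the question):

```python
class karte:
    def __init__(self, boja="plava"):
        self.boja = boja  # attribute named after the one in the question

# A list gives karta[0], karta[1], ... instead of karta1, karta2, ...
karta = [karte() for _ in range(3)]

# A dict can even keep the string names, without touching globals():
karte_mapa = {f"karta{i}": karte() for i in range(1, 4)}

print(karta[0].boja)              # attribute access by index
print(karte_mapa["karta2"].boja)  # attribute access by name
```

The dict variant gives you the `karta1`, `karta2`, ... spellings from the question while keeping the objects in one iterable place.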
69,238,333
I'm trying to fit a curve to a differential equation. For the sake of simplicity, I'm just doing the logistic equation here. I wrote the code below but I get an error shown below it. I'm not quite sure what I'm doing wrong. ``` import numpy as np import pandas as pd import scipy.optimize as optim from scipy.integrate import odeint df_yeast = pd.DataFrame({'cd': [9.6, 18.3, 29., 47.2, 71.1, 119.1, 174.6, 257.3, 350.7, 441., 513.3, 559.7, 594.8, 629.4, 640.8, 651.1, 655.9, 659.6], 'td': np.arange(18)}) N0 = 1 parsic = [5, 2] def logistic_de(t, N, r, K): return r*N*(1 - N/K) def logistic_solution(t, r, K): return odeint(logistic_de, N0, t, (r, K)) params, _ = optim.curve_fit(logistic_solution, df_yeast['td'], df_yeast['cd'], p0=parsic); ``` ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) ValueError: object too deep for desired array --------------------------------------------------------------------------- error Traceback (most recent call last) <ipython-input-94-2a5a467cfa43> in <module> ----> 1 params, _ = optim.curve_fit(logistic_solution, df_yeast['td'], df_yeast['cd'], p0=parsic); ~/SageMath/local/lib/python3.9/site-packages/scipy/optimize/minpack.py in curve_fit(f, xdata, ydata, p0, sigma, absolute_sigma, check_finite, bounds, method, jac, **kwargs) 782 # Remove full_output from kwargs, otherwise we're passing it in twice. 
783 return_full = kwargs.pop('full_output', False) --> 784 res = leastsq(func, p0, Dfun=jac, full_output=1, **kwargs) 785 popt, pcov, infodict, errmsg, ier = res 786 ysize = len(infodict['fvec']) ~/SageMath/local/lib/python3.9/site-packages/scipy/optimize/minpack.py in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag) 420 if maxfev == 0: 421 maxfev = 200*(n + 1) --> 422 retval = _minpack._lmdif(func, x0, args, full_output, ftol, xtol, 423 gtol, maxfev, epsfcn, factor, diag) 424 else: error: Result from function call is not a proper array of floats. ```
2021/09/18
[ "https://Stackoverflow.com/questions/69238333", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10796158/" ]
@hpaulj has pointed out the problem with the shape of the return value from `logistic_solution` and shown that fixing that eliminates the error that you reported. There is, however, another problem in the code. The problem does not generate an error, but it does result in an incorrect solution to your test problem (the logistic differential equation). By default, `odeint` expects the `t` argument of the function that computes the differential equations to be the *second* argument. Either change the order of the first two arguments of `logistic_de`, or add the argument `tfirst=True` to your call of `odeint`. The second option is a bit nicer, because it will allow you to use `logistic_de` with `scipy.integrate.solve_ivp` if you decide to try that function instead of `odeint`.
A sample run of `logistic_solution` produces a (18,1) result: ``` In [268]: logistic_solution(df_yeast['td'], *parsic) Out[268]: array([[ 1.00000000e+00], [ 2.66666671e+00], [ 4.33333337e+00], [ 1.00000004e+00], [-1.23333333e+01], [-4.06666666e+01], [-8.90000000e+01], [-1.62333333e+02], [-2.65666667e+02], [-4.04000000e+02], [-5.82333333e+02], [-8.05666667e+02], [-1.07900000e+03], [-1.40733333e+03], [-1.79566667e+03], [-2.24900000e+03], [-2.77233333e+03], [-3.37066667e+03]]) In [269]: _.shape Out[269]: (18, 1) ``` but the `y` values is ``` In [281]: df_yeast['cd'].values.shape Out[281]: (18,) ``` Define an alternative function that returns a 1d array: ``` In [282]: def foo(t,r,K): ...: return logistic_solution(t,r,K).ravel() ``` This works: ``` In [283]: params, _ = optim.curve_fit(foo, df_yeast['td'], df_yeast['cd'], p0=parsic) In [284]: params Out[284]: array([16.65599815, 15.52779946]) ``` test the `params`: ``` In [287]: logistic_solution(df_yeast['td'], *params) Out[287]: array([[ 1. ], [ 8.97044688], [ 31.45157847], [ 66.2980814 ], [111.36464226], [164.50594767], [223.5766842 ], [286.43153847], [350.92519706], [414.91234658], [476.24767362], [532.78586477], [582.38160664], [622.88958582], [652.1644889 ], [668.0610025 ], [668.43381319], [651.13760758]]) In [288]: df_yeast['cd'].values Out[288]: array([ 9.6, 18.3, 29. , 47.2, 71.1, 119.1, 174.6, 257.3, 350.7, 441. , 513.3, 559.7, 594.8, 629.4, 640.8, 651.1, 655.9, 659.6]) ``` [![enter image description here](https://i.stack.imgur.com/VIztW.png)](https://i.stack.imgur.com/VIztW.png) `too deep` in this context means a 2d array, when it should be 1d, in order to compare with `ydata` ``` ydata : array_like The dependent data, a length M array - nominally ``f(xdata, ...)``. ```
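Putting the two answers together, a sketch of the full fix: `tfirst=True` so `odeint` passes `t` as the first argument, and `.ravel()` so the solver output matches the 1-D `ydata`. The initial guesses below are illustrative, not the ones from the question:

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

N0 = 1.0

def logistic_de(t, N, r, K):
    # t comes first here, enabled by tfirst=True in the odeint call below
    return r * N * (1 - N / K)

def logistic_solution(t, r, K):
    # odeint returns shape (len(t), 1); ravel() flattens it to match ydata
    return odeint(logistic_de, N0, t, args=(r, K), tfirst=True).ravel()

td = np.arange(18, dtype=float)
cd = np.array([9.6, 18.3, 29., 47.2, 71.1, 119.1, 174.6, 257.3, 350.7,
               441., 513.3, 559.7, 594.8, 629.4, 640.8, 651.1, 655.9, 659.6])

params, _ = curve_fit(logistic_solution, td, cd, p0=[0.5, 700.0])
print(params)  # fitted (r, K); K should land near the ~660 plateau in the data
```

With the ODE now integrated in the right direction, the fitted carrying capacity `K` tracks the plateau of the yeast data instead of the spurious values the broken version produced.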
35,930,924
I am new to Python (I am using Python 2.7) and PyCharm, but I need to use the MySQLdb module to complete my task. I spent time searching for guides or tips and finally got here, but could not find MySQLdb to install. [MySQL-python](http://i.stack.imgur.com/EhTaq.png) But there is an error: [Error](http://i.stack.imgur.com/lsXZi.png)
2016/03/11
[ "https://Stackoverflow.com/questions/35930924", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5886173/" ]
I have a suggestion: if you have installed the MySQL database, follow this: open PyCharm and click File->Settings->Project->Project Interpreter, then select your Python interpreter and click the install button (the little green plus sign), input "MySQL-Python" and click the "Install Package" button; you will install MySQL-Python successfully. If you have not installed the MySQL database, you need to install it first.
From Windows Command Prompt / Linux Shell install using wheel ``` pip install wheel ``` Download 32 or 64 Bit version from <http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysql-python> **64-Bit** ``` pip install MySQL_python-1.2.5-cp27-none-win_amd64.whl ``` **32-Bit** ``` pip install MySQL_python-1.2.5-cp27-none-win32.whl ``` **Using Anaconda2:** Open a Command Prompt ``` $ conda install mysql-python ``` Press y to proceed with installation
35,930,924
I am new to Python (I am using Python 2.7) and PyCharm, but I need to use the MySQLdb module to complete my task. I spent time searching for guides or tips and finally got here, but could not find MySQLdb to install. [MySQL-python](http://i.stack.imgur.com/EhTaq.png) But there is an error: [Error](http://i.stack.imgur.com/lsXZi.png)
2016/03/11
[ "https://Stackoverflow.com/questions/35930924", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5886173/" ]
If you are unable to install MySQL CLient in Pycharm, try opening the Terminal (Alt+F12) and write the command ``` pip install --only-binary :all: mysqlclient ``` This might help!
From Windows Command Prompt / Linux Shell install using wheel ``` pip install wheel ``` Download 32 or 64 Bit version from <http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysql-python> **64-Bit** ``` pip install MySQL_python-1.2.5-cp27-none-win_amd64.whl ``` **32-Bit** ``` pip install MySQL_python-1.2.5-cp27-none-win32.whl ``` **Using Anaconda2:** Open a Command Prompt ``` $ conda install mysql-python ``` Press y to proceed with installation
35,930,924
I am new to Python (I am using Python 2.7) and PyCharm, but I need to use the MySQLdb module to complete my task. I spent time searching for guides or tips and finally got here, but could not find MySQLdb to install. [MySQL-python](http://i.stack.imgur.com/EhTaq.png) But there is an error: [Error](http://i.stack.imgur.com/lsXZi.png)
2016/03/11
[ "https://Stackoverflow.com/questions/35930924", "https://Stackoverflow.com", "https://Stackoverflow.com/users/5886173/" ]
I have a suggestion: if you have installed the MySQL database, follow this: open PyCharm and click File->Settings->Project->Project Interpreter, then select your Python interpreter and click the install button (the little green plus sign), input "MySQL-Python" and click the "Install Package" button; you will install MySQL-Python successfully. If you have not installed the MySQL database, you need to install it first.
If you are unable to install MySQL CLient in Pycharm, try opening the Terminal (Alt+F12) and write the command ``` pip install --only-binary :all: mysqlclient ``` This might help!
12,609,728
I need to change the reference of a function in a Mach-O binary to a custom function defined in my own dylib. The process I am now following is: 1. Replacing references to the old function with the new one, e.g. `_fopen` to `_mopen`, using sed. 2. Opening the Mach-O binary in [MachOView](http://sourceforge.net/projects/machoview/) to find the address of the entities I want to change; I then manually change the information in the binary using a hex editor. Is there a way I can automate this process, i.e. write a program to read the symbols and dynamic loading info and then change them in the executable? I was looking at the Mach-O header files at `/usr/include/mach-o` but am not entirely sure how to use them to get this information. Do there exist any libraries, C or Python, which help do the same?
2012/09/26
[ "https://Stackoverflow.com/questions/12609728", "https://Stackoverflow.com", "https://Stackoverflow.com/users/390984/" ]
Just a guess and atm I am not able to test it, but try following code: ``` <Button.Template> <ControlTemplate> <Border Background="{StaticResource BlueGradient}" CornerRadius="5"> <DockPanel> <Image x:Name="imgIcon" DockPanel.Dock="Left" Height="32" Margin="4"/> <TextBlock DockPanel.Dock="Right" Text ="Test" VerticalAlignment="Center" HorizontalAlignment="Left" Foreground="White" /> </DockPanel> </Border> </ControlTemplate> </Button.Template> ``` EDIT: Ok next try: Instead of adding your ShinyButton to the Window (remove this.AddChild(shiny)), try adding your ShinyButton to the Grid by adding this code: ``` NameOfGrid.Children.Add(shiny); ``` The reason of this error is that your Window just can have one child and I bet in your case there is already a Grid as this child.
Just tried your code and it works fine, with a couple of caveats. In order to get it to work I needed to drop the resouceDictionary link, as I don't have that file. Is there any chance that some content is being defined, rather than just a style/template etc? Also I note that your code has an x:Class="ShinyButton", is there any content defined in the code as well as in the xaml?
22,960,956
The python package 'isbntools' (<https://github.com/xlcnd/isbntools>) allows to retrieve bibliography information about books from online resources. In particular the script `isbn_meta [number]` retrieves information about the book with given isbn-number `[number]`. Among other it uses data from google using googleapis as `https://www.googleapis.com/books/v1/volumes?q=isbn+[number]&fields=`. It is obvious that the url can be adjusted to search e.g. for a general `[keyword]` `https://www.googleapis.com/books/v1/volumes?q=[keyword]`. But how can I reuse the code in 'isbntools' to create a script, say `title_meta` which searches and retrieves bibliography data based on `[keyword]`. My problem is to some extent that it is not obvious for me how to deal consistently with the python package.
2014/04/09
[ "https://Stackoverflow.com/questions/22960956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/429540/" ]
You set me a good challenge with this, but here you go. A method that returns the image with its bottom half coloured with a given colour: ``` - (UIImage *)image:(UIImage *)image withBottomHalfOverlayColor:(UIColor *)color { CGRect rect = CGRectMake(0.f, 0.f, image.size.width, image.size.height); if (UIGraphicsBeginImageContextWithOptions) { CGFloat imageScale = 1.f; if ([self respondsToSelector:@selector(scale)]) imageScale = image.scale; UIGraphicsBeginImageContextWithOptions(image.size, NO, imageScale); } else { UIGraphicsBeginImageContext(image.size); } [image drawInRect:rect]; CGContextRef context = UIGraphicsGetCurrentContext(); CGContextSetBlendMode(context, kCGBlendModeSourceIn); CGContextSetFillColorWithColor(context, color.CGColor); CGRect rectToFill = CGRectMake(0.f, image.size.height*0.5f, image.size.width, image.size.height*0.5f); CGContextFillRect(context, rectToFill); UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); return newImage; } ``` Example of use: ``` UIImage *image = [UIImage imageNamed:@"q.png"]; image = [self image:image withBottomHalfOverlayColor:[UIColor cyanColor]]; self.imageView.image = image; ``` My results: ![enter image description here](https://i.stack.imgur.com/kG1hb.png)![enter image description here](https://i.stack.imgur.com/hyCpe.png)
You can use some tricks: * Use 2 different images and change the whole background. * Use one background color (light blue) and 2 images, one with the bottom half transparent
22,960,956
The python package 'isbntools' (<https://github.com/xlcnd/isbntools>) allows to retrieve bibliography information about books from online resources. In particular the script `isbn_meta [number]` retrieves information about the book with given isbn-number `[number]`. Among other it uses data from google using googleapis as `https://www.googleapis.com/books/v1/volumes?q=isbn+[number]&fields=`. It is obvious that the url can be adjusted to search e.g. for a general `[keyword]` `https://www.googleapis.com/books/v1/volumes?q=[keyword]`. But how can I reuse the code in 'isbntools' to create a script, say `title_meta` which searches and retrieves bibliography data based on `[keyword]`. My problem is to some extent that it is not obvious for me how to deal consistently with the python package.
2014/04/09
[ "https://Stackoverflow.com/questions/22960956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/429540/" ]
**In Swift 4.** Some Changes ``` func withBottomHalfOverlayColor(myImage: UIImage, color: UIColor) -> UIImage { let rect = CGRect(x: 0, y: 0, width: myImage.size.width, height: myImage.size.height) UIGraphicsBeginImageContextWithOptions(myImage.size, false, myImage.scale) myImage.draw(in: rect) let context = UIGraphicsGetCurrentContext()! context.setBlendMode(CGBlendMode.sourceIn) context.setFillColor(color.cgColor) let rectToFill = CGRect(x: 0, y: myImage.size.height*0.5, width: myImage.size.width, height: myImage.size.height*0.5) context.fill(rectToFill) let newImage = UIGraphicsGetImageFromCurrentImageContext() UIGraphicsEndImageContext() return newImage! } ```
You can use some tricks: * Use 2 different images and change the whole background. * Use one background color (light blue) and 2 images, one with the bottom half transparent
22,960,956
The python package 'isbntools' (<https://github.com/xlcnd/isbntools>) allows to retrieve bibliography information about books from online resources. In particular the script `isbn_meta [number]` retrieves information about the book with given isbn-number `[number]`. Among other it uses data from google using googleapis as `https://www.googleapis.com/books/v1/volumes?q=isbn+[number]&fields=`. It is obvious that the url can be adjusted to search e.g. for a general `[keyword]` `https://www.googleapis.com/books/v1/volumes?q=[keyword]`. But how can I reuse the code in 'isbntools' to create a script, say `title_meta` which searches and retrieves bibliography data based on `[keyword]`. My problem is to some extent that it is not obvious for me how to deal consistently with the python package.
2014/04/09
[ "https://Stackoverflow.com/questions/22960956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/429540/" ]
You set me a good challenge with this, but here you go. A method that returns the image with its bottom half coloured with a given colour: ``` - (UIImage *)image:(UIImage *)image withBottomHalfOverlayColor:(UIColor *)color { CGRect rect = CGRectMake(0.f, 0.f, image.size.width, image.size.height); if (UIGraphicsBeginImageContextWithOptions) { CGFloat imageScale = 1.f; if ([self respondsToSelector:@selector(scale)]) imageScale = image.scale; UIGraphicsBeginImageContextWithOptions(image.size, NO, imageScale); } else { UIGraphicsBeginImageContext(image.size); } [image drawInRect:rect]; CGContextRef context = UIGraphicsGetCurrentContext(); CGContextSetBlendMode(context, kCGBlendModeSourceIn); CGContextSetFillColorWithColor(context, color.CGColor); CGRect rectToFill = CGRectMake(0.f, image.size.height*0.5f, image.size.width, image.size.height*0.5f); CGContextFillRect(context, rectToFill); UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); return newImage; } ``` Example of use: ``` UIImage *image = [UIImage imageNamed:@"q.png"]; image = [self image:image withBottomHalfOverlayColor:[UIColor cyanColor]]; self.imageView.image = image; ``` My results: ![enter image description here](https://i.stack.imgur.com/kG1hb.png)![enter image description here](https://i.stack.imgur.com/hyCpe.png)
**In Swift 4.** Some Changes ``` func withBottomHalfOverlayColor(myImage: UIImage, color: UIColor) -> UIImage { let rect = CGRect(x: 0, y: 0, width: myImage.size.width, height: myImage.size.height) UIGraphicsBeginImageContextWithOptions(myImage.size, false, myImage.scale) myImage.draw(in: rect) let context = UIGraphicsGetCurrentContext()! context.setBlendMode(CGBlendMode.sourceIn) context.setFillColor(color.cgColor) let rectToFill = CGRect(x: 0, y: myImage.size.height*0.5, width: myImage.size.width, height: myImage.size.height*0.5) context.fill(rectToFill) let newImage = UIGraphicsGetImageFromCurrentImageContext() UIGraphicsEndImageContext() return newImage! } ```
22,960,956
The python package 'isbntools' (<https://github.com/xlcnd/isbntools>) allows to retrieve bibliography information about books from online resources. In particular the script `isbn_meta [number]` retrieves information about the book with given isbn-number `[number]`. Among other it uses data from google using googleapis as `https://www.googleapis.com/books/v1/volumes?q=isbn+[number]&fields=`. It is obvious that the url can be adjusted to search e.g. for a general `[keyword]` `https://www.googleapis.com/books/v1/volumes?q=[keyword]`. But how can I reuse the code in 'isbntools' to create a script, say `title_meta` which searches and retrieves bibliography data based on `[keyword]`. My problem is to some extent that it is not obvious for me how to deal consistently with the python package.
2014/04/09
[ "https://Stackoverflow.com/questions/22960956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/429540/" ]
You set me a good challenge with this, but here you go. A method that returns the image with its bottom half coloured with a given colour: ``` - (UIImage *)image:(UIImage *)image withBottomHalfOverlayColor:(UIColor *)color { CGRect rect = CGRectMake(0.f, 0.f, image.size.width, image.size.height); if (UIGraphicsBeginImageContextWithOptions) { CGFloat imageScale = 1.f; if ([self respondsToSelector:@selector(scale)]) imageScale = image.scale; UIGraphicsBeginImageContextWithOptions(image.size, NO, imageScale); } else { UIGraphicsBeginImageContext(image.size); } [image drawInRect:rect]; CGContextRef context = UIGraphicsGetCurrentContext(); CGContextSetBlendMode(context, kCGBlendModeSourceIn); CGContextSetFillColorWithColor(context, color.CGColor); CGRect rectToFill = CGRectMake(0.f, image.size.height*0.5f, image.size.width, image.size.height*0.5f); CGContextFillRect(context, rectToFill); UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); return newImage; } ``` Example of use: ``` UIImage *image = [UIImage imageNamed:@"q.png"]; image = [self image:image withBottomHalfOverlayColor:[UIColor cyanColor]]; self.imageView.image = image; ``` My results: ![enter image description here](https://i.stack.imgur.com/kG1hb.png)![enter image description here](https://i.stack.imgur.com/hyCpe.png)
**Swift 5 extension** ``` extension UIImage { func withBottomHalfOverlayColor(color: UIColor) -> UIImage { let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height) UIGraphicsBeginImageContextWithOptions(size, false, scale) draw(in: rect) let context = UIGraphicsGetCurrentContext()! context.setBlendMode(CGBlendMode.sourceIn) context.setFillColor(color.cgColor) let rectToFill = CGRect(x: 0, y: size.height*0.5, width: size.width, height: size.height*0.5) context.fill(rectToFill) let newImage = UIGraphicsGetImageFromCurrentImageContext() UIGraphicsEndImageContext() return newImage! } } ```
22,960,956
The python package 'isbntools' (<https://github.com/xlcnd/isbntools>) allows to retrieve bibliography information about books from online resources. In particular the script `isbn_meta [number]` retrieves information about the book with given isbn-number `[number]`. Among other it uses data from google using googleapis as `https://www.googleapis.com/books/v1/volumes?q=isbn+[number]&fields=`. It is obvious that the url can be adjusted to search e.g. for a general `[keyword]` `https://www.googleapis.com/books/v1/volumes?q=[keyword]`. But how can I reuse the code in 'isbntools' to create a script, say `title_meta` which searches and retrieves bibliography data based on `[keyword]`. My problem is to some extent that it is not obvious for me how to deal consistently with the python package.
2014/04/09
[ "https://Stackoverflow.com/questions/22960956", "https://Stackoverflow.com", "https://Stackoverflow.com/users/429540/" ]
**In Swift 4.** Some Changes ``` func withBottomHalfOverlayColor(myImage: UIImage, color: UIColor) -> UIImage { let rect = CGRect(x: 0, y: 0, width: myImage.size.width, height: myImage.size.height) UIGraphicsBeginImageContextWithOptions(myImage.size, false, myImage.scale) myImage.draw(in: rect) let context = UIGraphicsGetCurrentContext()! context.setBlendMode(CGBlendMode.sourceIn) context.setFillColor(color.cgColor) let rectToFill = CGRect(x: 0, y: myImage.size.height*0.5, width: myImage.size.width, height: myImage.size.height*0.5) context.fill(rectToFill) let newImage = UIGraphicsGetImageFromCurrentImageContext() UIGraphicsEndImageContext() return newImage! } ```
**Swift 5 extension** ``` extension UIImage { func withBottomHalfOverlayColor(color: UIColor) -> UIImage { let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height) UIGraphicsBeginImageContextWithOptions(size, false, scale) draw(in: rect) let context = UIGraphicsGetCurrentContext()! context.setBlendMode(CGBlendMode.sourceIn) context.setFillColor(color.cgColor) let rectToFill = CGRect(x: 0, y: size.height*0.5, width: size.width, height: size.height*0.5) context.fill(rectToFill) let newImage = UIGraphicsGetImageFromCurrentImageContext() UIGraphicsEndImageContext() return newImage! } } ```
5,184,483
So, I have this code: ``` url = 'http://google.com' linkregex = re.compile('<a\s*href=[\'|"](.*?)[\'"].*?>') m = urllib.request.urlopen(url) msg = m.read() links = linkregex.findall(msg) ``` But then python returns this error: ``` links = linkregex.findall(msg) TypeError: can't use a string pattern on a bytes-like object ``` What did I do wrong?
2011/03/03
[ "https://Stackoverflow.com/questions/5184483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380714/" ]
> > `TypeError: can't use a string pattern` > `on a bytes-like object` > > > what did i do wrong?? > > > You used a string pattern on a bytes object. Use a bytes pattern instead: ``` linkregex = re.compile(b'<a\s*href=[\'|"](.*?)[\'"].*?>') ^ Add the b there, it makes it into a bytes object ``` (ps: ``` >>> from disclaimer include dont_use_regexp_on_html "Use BeautifulSoup or lxml instead." ``` )
That worked for me in Python 3; hope this helps: ``` import urllib.request import re urls = ["https://google.com","https://nytimes.com","http://CNN.com"] i = 0 regex = '<title>(.+?)</title>' pattern = re.compile(regex) while i < len(urls) : htmlfile = urllib.request.urlopen(urls[i]) htmltext = htmlfile.read() titles = re.search(pattern, str(htmltext)) print(titles) i+=1 ``` And also this, in which I added **b** before the regex to make it a bytes pattern: ``` import urllib.request import re urls = ["https://google.com","https://nytimes.com","http://CNN.com"] i = 0 regex = b'<title>(.+?)</title>' pattern = re.compile(regex) while i < len(urls) : htmlfile = urllib.request.urlopen(urls[i]) htmltext = htmlfile.read() titles = re.search(pattern, htmltext) print(titles) i+=1 ```
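The two workable fixes side by side, demonstrated on a small in-memory snippet instead of a live fetch, and with the pattern simplified to double-quoted hrefs only (as the accepted answer notes, a real HTML parser is the better tool for anything beyond toy input):

```python
import re

html = b'<a href="https://example.com/a">A</a> and <a href="/b">B</a>'

# Fix 1: a bytes pattern (note the b prefix) for bytes data
links_bytes = re.findall(rb'<a\s*href="(.*?)"', html)

# Fix 2: decode the bytes to str, then use an ordinary str pattern
links_str = re.findall(r'<a\s*href="(.*?)"', html.decode('utf-8'))

print(links_bytes)  # [b'https://example.com/a', b'/b']
print(links_str)    # ['https://example.com/a', '/b']
```

Either way, the rule is that pattern and subject must be the same type: bytes with bytes, str with str.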
5,184,483
So, I have this code: ``` url = 'http://google.com' linkregex = re.compile('<a\s*href=[\'|"](.*?)[\'"].*?>') m = urllib.request.urlopen(url) msg = m.read() links = linkregex.findall(msg) ``` But then python returns this error: ``` links = linkregex.findall(msg) TypeError: can't use a string pattern on a bytes-like object ``` What did I do wrong?
2011/03/03
[ "https://Stackoverflow.com/questions/5184483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380714/" ]
Well, my version of Python doesn't have a urllib with a request attribute but if I use "urllib.urlopen(url)" I don't get back a string, I get an object. This is the type error.
That worked for me in Python 3; hope this helps: ``` import urllib.request import re urls = ["https://google.com","https://nytimes.com","http://CNN.com"] i = 0 regex = '<title>(.+?)</title>' pattern = re.compile(regex) while i < len(urls) : htmlfile = urllib.request.urlopen(urls[i]) htmltext = htmlfile.read() titles = re.search(pattern, str(htmltext)) print(titles) i+=1 ``` And also this, in which I added **b** before the regex to make it a bytes pattern: ``` import urllib.request import re urls = ["https://google.com","https://nytimes.com","http://CNN.com"] i = 0 regex = b'<title>(.+?)</title>' pattern = re.compile(regex) while i < len(urls) : htmlfile = urllib.request.urlopen(urls[i]) htmltext = htmlfile.read() titles = re.search(pattern, htmltext) print(titles) i+=1 ```
5,184,483
So, I have this code: ``` url = 'http://google.com' linkregex = re.compile('<a\s*href=[\'|"](.*?)[\'"].*?>') m = urllib.request.urlopen(url) msg = m.read() links = linkregex.findall(msg) ``` But then python returns this error: ``` links = linkregex.findall(msg) TypeError: can't use a string pattern on a bytes-like object ``` What did I do wrong?
2011/03/03
[ "https://Stackoverflow.com/questions/5184483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380714/" ]
If you are running Python 2.6 then there isn't any "request" in "urllib". So the third line becomes: ``` m = urllib.urlopen(url) ``` And in version 3 you should use this: ``` links = linkregex.findall(str(msg)) ``` Because 'msg' is a bytes object and not a string as findall() expects. Or you could decode using the correct encoding. For instance, if "latin1" is the encoding then: ``` links = linkregex.findall(msg.decode("latin1")) ```
The regular expression pattern and string have to be of the same type. If you're matching a regular string, you need a string pattern. If you're matching a byte string, you need a bytes pattern. In this case *m.read()* returns a byte string, so you need a bytes pattern. In Python 3, regular strings are unicode strings, and you need the *b* modifier to specify a byte string literal: ``` linkregex = re.compile(b'<a\s*href=[\'|"](.*?)[\'"].*?>') ```
5,184,483
So, I have this code: ``` url = 'http://google.com' linkregex = re.compile('<a\s*href=[\'|"](.*?)[\'"].*?>') m = urllib.request.urlopen(url) msg = m.read() links = linkregex.findall(msg) ``` But then python returns this error: ``` links = linkregex.findall(msg) TypeError: can't use a string pattern on a bytes-like object ``` What did I do wrong?
2011/03/03
[ "https://Stackoverflow.com/questions/5184483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380714/" ]
If you are running Python 2.6 then there isn't any "request" in "urllib". So the third line becomes: ``` m = urllib.urlopen(url) ``` And in version 3 you should use this: ``` links = linkregex.findall(str(msg)) ``` Because 'msg' is a bytes object and not a string as findall() expects. Or you could decode using the correct encoding. For instance, if "latin1" is the encoding then: ``` links = linkregex.findall(msg.decode("latin1")) ```
The url you have for Google didn't work for me, so I substituted `http://www.google.com/ig?hl=en` for it, which works for me. Try this: ``` import re import urllib.request url="http://www.google.com/ig?hl=en" linkregex = re.compile('<a\s*href=[\'|"](.*?)[\'"].*?>') m = urllib.request.urlopen(url) msg = m.read() links = linkregex.findall(str(msg)) print(links) ``` Hope this helps.
5,184,483
So, I have this code: ``` url = 'http://google.com' linkregex = re.compile('<a\s*href=[\'|"](.*?)[\'"].*?>') m = urllib.request.urlopen(url) msg = m.read() links = linkregex.findall(msg) ``` But then python returns this error: ``` links = linkregex.findall(msg) TypeError: can't use a string pattern on a bytes-like object ``` What did I do wrong?
2011/03/03
[ "https://Stackoverflow.com/questions/5184483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380714/" ]
> > `TypeError: can't use a string pattern` > `on a bytes-like object` > > > what did i do wrong?? > > > You used a string pattern on a bytes object. Use a bytes pattern instead: ``` linkregex = re.compile(b'<a\s*href=[\'|"](.*?)[\'"].*?>') ^ Add the b there, it makes it into a bytes object ``` (ps: ``` >>> from disclaimer include dont_use_regexp_on_html "Use BeautifulSoup or lxml instead." ``` )
The regular expression pattern and string have to be of the same type. If you're matching a regular string, you need a string pattern. If you're matching a byte string, you need a bytes pattern. In this case *m.read()* returns a byte string, so you need a bytes pattern. In Python 3, regular strings are unicode strings, and you need the *b* modifier to specify a byte string literal: ``` linkregex = re.compile(b'<a\s*href=[\'|"](.*?)[\'"].*?>') ```
5,184,483
So, I have this code: ``` url = 'http://google.com' linkregex = re.compile('<a\s*href=[\'|"](.*?)[\'"].*?>') m = urllib.request.urlopen(url) msg = m.read() links = linkregex.findall(msg) ``` But then python returns this error: ``` links = linkregex.findall(msg) TypeError: can't use a string pattern on a bytes-like object ``` What did I do wrong?
2011/03/03
[ "https://Stackoverflow.com/questions/5184483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380714/" ]
> > `TypeError: can't use a string pattern` > `on a bytes-like object` > > > what did i do wrong?? > > > You used a string pattern on a bytes object. Use a bytes pattern instead: ``` linkregex = re.compile(b'<a\s*href=[\'|"](.*?)[\'"].*?>') ^ Add the b there, it makes it into a bytes object ``` (ps: ``` >>> from disclaimer include dont_use_regexp_on_html "Use BeautifulSoup or lxml instead." ``` )
The url you have for Google didn't work for me, so I substituted `http://www.google.com/ig?hl=en` for it, which works for me. Try this: ``` import re import urllib.request url="http://www.google.com/ig?hl=en" linkregex = re.compile('<a\s*href=[\'|"](.*?)[\'"].*?>') m = urllib.request.urlopen(url) msg = m.read() links = linkregex.findall(str(msg)) print(links) ``` Hope this helps.
5,184,483
So, I have this code: ``` url = 'http://google.com' linkregex = re.compile('<a\s*href=[\'|"](.*?)[\'"].*?>') m = urllib.request.urlopen(url) msg = m.read() links = linkregex.findall(msg) ``` But then python returns this error: ``` links = linkregex.findall(msg) TypeError: can't use a string pattern on a bytes-like object ``` What did I do wrong?
2011/03/03
[ "https://Stackoverflow.com/questions/5184483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380714/" ]
If you are running Python 2.6 then there isn't any "request" in "urllib". So the third line becomes: ``` m = urllib.urlopen(url) ``` And in version 3 you should use this: ``` links = linkregex.findall(str(msg)) ``` Because 'msg' is a bytes object and not a string as findall() expects. Or you could decode using the correct encoding. For instance, if "latin1" is the encoding then: ``` links = linkregex.findall(msg.decode("latin1")) ```
Well, my version of Python doesn't have a urllib with a request attribute but if I use "urllib.urlopen(url)" I don't get back a string, I get an object. This is the type error.
5,184,483
So, I have this code: ``` url = 'http://google.com' linkregex = re.compile('<a\s*href=[\'|"](.*?)[\'"].*?>') m = urllib.request.urlopen(url) msg = m.read() links = linkregex.findall(msg) ``` But then python returns this error: ``` links = linkregex.findall(msg) TypeError: can't use a string pattern on a bytes-like object ``` What did I do wrong?
2011/03/03
[ "https://Stackoverflow.com/questions/5184483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380714/" ]
The url you have for Google didn't work for me, so I substituted `http://www.google.com/ig?hl=en` for it, which works for me. Try this: ``` import re import urllib.request url="http://www.google.com/ig?hl=en" linkregex = re.compile('<a\s*href=[\'|"](.*?)[\'"].*?>') m = urllib.request.urlopen(url) msg = m.read() links = linkregex.findall(str(msg)) print(links) ``` Hope this helps.
That worked for me in Python 3. Hope this helps: ``` import urllib.request import re urls = ["https://google.com","https://nytimes.com","http://CNN.com"] i = 0 regex = '<title>(.+?)</title>' pattern = re.compile(regex) while i < len(urls) : htmlfile = urllib.request.urlopen(urls[i]) htmltext = htmlfile.read() titles = re.search(pattern, str(htmltext)) print(titles) i+=1 ``` And also this, in which I added **b** before the regex to make it a bytes pattern. ``` import urllib.request import re urls = ["https://google.com","https://nytimes.com","http://CNN.com"] i = 0 regex = b'<title>(.+?)</title>' pattern = re.compile(regex) while i < len(urls) : htmlfile = urllib.request.urlopen(urls[i]) htmltext = htmlfile.read() titles = re.search(pattern, htmltext) print(titles) i+=1 ```
5,184,483
So, I have this code: ``` url = 'http://google.com' linkregex = re.compile('<a\s*href=[\'|"](.*?)[\'"].*?>') m = urllib.request.urlopen(url) msg = m.read() links = linkregex.findall(msg) ``` But then python returns this error: ``` links = linkregex.findall(msg) TypeError: can't use a string pattern on a bytes-like object ``` What did I do wrong?
2011/03/03
[ "https://Stackoverflow.com/questions/5184483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380714/" ]
> > `TypeError: can't use a string pattern` > `on a bytes-like object` > > > what did i do wrong?? > > > You used a string pattern on a bytes object. Use a bytes pattern instead: ``` linkregex = re.compile(b'<a\s*href=[\'|"](.*?)[\'"].*?>') ^ Add the b there, it makes it into a bytes object ``` (ps: ``` >>> from disclaimer include dont_use_regexp_on_html "Use BeautifulSoup or lxml instead." ``` )
If you are running Python 2.6 then there isn't any "request" in "urllib". So the third line becomes: ``` m = urllib.urlopen(url) ``` And in version 3 you should use this: ``` links = linkregex.findall(str(msg)) ``` Because 'msg' is a bytes object and not a string as findall() expects. Or you could decode using the correct encoding. For instance, if "latin1" is the encoding then: ``` links = linkregex.findall(msg.decode("latin1")) ```
5,184,483
So, I have this code: ``` url = 'http://google.com' linkregex = re.compile('<a\s*href=[\'|"](.*?)[\'"].*?>') m = urllib.request.urlopen(url) msg = m.read() links = linkregex.findall(msg) ``` But then python returns this error: ``` links = linkregex.findall(msg) TypeError: can't use a string pattern on a bytes-like object ``` What did I do wrong?
2011/03/03
[ "https://Stackoverflow.com/questions/5184483", "https://Stackoverflow.com", "https://Stackoverflow.com/users/380714/" ]
> > `TypeError: can't use a string pattern` > `on a bytes-like object` > > > what did i do wrong?? > > > You used a string pattern on a bytes object. Use a bytes pattern instead: ``` linkregex = re.compile(b'<a\s*href=[\'|"](.*?)[\'"].*?>') ^ Add the b there, it makes it into a bytes object ``` (ps: ``` >>> from disclaimer include dont_use_regexp_on_html "Use BeautifulSoup or lxml instead." ``` )
Well, my version of Python doesn't have a urllib with a request attribute but if I use "urllib.urlopen(url)" I don't get back a string, I get an object. This is the type error.
55,937,156
I am trying to read glove.6B.300d.txt file into a Pandas dataframe. (The file can be downloaded from here: <https://github.com/stanfordnlp/GloVe>) Here are the exceptions I am getting: ``` glove = pd.read_csv(filename, sep = ' ') ParserError: Error tokenizing data. C error: EOF inside string starting at line 8 glove = pd.read_csv(filename, sep = ' ', engine = 'python') ParserError: field larger than field limit (131072) ```
2019/05/01
[ "https://Stackoverflow.com/questions/55937156", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8270077/" ]
You can do it in two steps. 1. Get all record which satisfy criteria like `dimension === 2` `let resultArr = jsObjects.filter(data => { return data.dimension === 2 })` 2. Get random object from result. `var randomElement = resultArr[Math.floor(Math.random() * resultArr.length)];` ```js var arr = [{ dimension: 2, x: -973.097, y: -133.411, z: 38.2531 }, { dimension: 3, x: -116.746, y: -48.414, z: 17.226 }, { dimension: 2, x: -946.746, y: -128.411, z: 37.786 }, { dimension: 2, x: -814.093, y: -106.724, z: 37.589 }] //Filter out with specific criteria let resultArr = arr.filter(data => { return data.dimension === 2 }) //Get random element var randomElement = resultArr[Math.floor(Math.random() * resultArr.length)]; console.log(randomElement) ```
You could use `Math.random()` to pick a random index in the range `0` to the array's `length`. ``` let result = jsObjects.filter(data => { return data.dimension === 2 }) let randomObj = result[Math.floor(Math.random() * result.length)] ```
26,193,193
How can I change the priority of the path in sys.path in python 2.7? I know that I can use `PYTHONPATH` environment variable, but it is what I will get: ``` $ PYTHONPATH=/tmp python Python 2.7.6 (default, Mar 22 2014, 22:59:56) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> for i in sys.path: ... print i ... /usr/local/lib/python2.7/dist-packages/pycuda-2014.1-py2.7-linux-x86_64.egg /usr/local/lib/python2.7/dist-packages/pytest-2.6.2-py2.7.egg /usr/local/lib/python2.7/dist-packages/pytools-2014.3-py2.7.egg /usr/local/lib/python2.7/dist-packages/py-1.4.24-py2.7.egg /usr/lib/python2.7/dist-packages /tmp /usr/lib/python2.7 /usr/lib/python2.7/plat-x86_64-linux-gnu /usr/lib/python2.7/lib-tk /usr/lib/python2.7/lib-old /usr/lib/python2.7/lib-dynload /usr/local/lib/python2.7/dist-packages /usr/lib/python2.7/dist-packages/PILcompat /usr/lib/python2.7/dist-packages/gtk-2.0 /usr/lib/python2.7/dist-packages/ubuntu-sso-client >>> ``` `/tmp` is added between `/usr/lib/python2.7/dist-packages` and `/usr/lib/python2.7`. My goal is to make python to load packages from `/usr/local/lib/python2.7/dist-packages` first. Here is what I want: ``` $ python Python 2.7.6 (default, Mar 22 2014, 22:59:56) [GCC 4.8.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import numpy as np >>> np.version <module 'numpy.version' from '/usr/local/lib/python2.7/dist-packages/numpy/version.pyc'> >>> ``` If I install `python-numpy` by `apt-get install python-numpy`. Python will try to load from `/usr/lib/python2.7` and not the one I compiled.
2014/10/04
[ "https://Stackoverflow.com/questions/26193193", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1470911/" ]
As you may know, [`sys.path` is initialized from](https://docs.python.org/2/tutorial/modules.html#the-module-search-path): * the current directory * your `PYTHONPATH` * an installation-dependent default However unfortunately that is only part of the story: `setuptools` creates [`easy-install.pth`](https://pythonhosted.org/setuptools/easy_install.html) files, which also modify `sys.path` and worst of all they *prepend* packages and therefore totally mess up the order of directories. In particular (at least on my system), there is `/usr/local/lib/python2.7/dist-packages/easy-install.pth` with the following contents: ``` import sys; sys.__plen = len(sys.path) /usr/lib/python2.7/dist-packages import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new) ``` This causes `/usr/lib/python2.7/dist-packages` to be *prepended even before your `PYTHONPATH`*! What you could do is simply change the 2nd line in this file to ``` /usr/local/lib/python2.7/dist-packages ``` and you will get your desired priority. However beware this file might be overwritten or changed again by a future `setuptools` invocation!
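The prepending trick is easier to see in isolation. This sketch (with made-up paths) replays the three lines of that `easy-install.pth` against a plain list standing in for `sys.path`:

```python
# Stand-in for sys.path as it looks after PYTHONPATH processing:
path = ["/current-dir", "/tmp", "/usr/lib/python2.7"]

# Line 1 of the .pth file: remember the current length
plen = len(path)
# Line 2: the directory listed in the .pth file gets appended
path.append("/usr/lib/python2.7/dist-packages")
# Line 3: move everything appended since line 1 to position __egginsert (0 here)
new = path[plen:]
del path[plen:]
egginsert = 0  # getattr(sys, '__egginsert', 0) on the first run
path[egginsert:egginsert] = new
print(path)  # dist-packages is now FIRST, ahead of the PYTHONPATH entry
```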
We encountered an almost identical situation and wanted to expand upon @kynan's response, which is spot-on. In the case where you have such an `easy-install.pth` that you want to overcome, but which you cannot modify (say you are a user with no root/admin access), you can do the following: * Set up an [alternate python installation scheme](https://docs.python.org/3/install/#alternate-installation) + e.g. we use a PYTHON HOME install (setting PYTHONUSERBASE) * Create a user/home site-packages + You can do this by installing a package into the user env: `pip install <package> --user` * Create a pth to set `sys.__egginsert` to work around the system/distribution easy-install.pth + Create a `$PYTHONUSERBASE/lib/python2.7/site-packages/fix_easy_install.pth` + Containing: `import sys; sys.__egginsert = len(sys.path);` This will set `sys.__egginsert` to point to the end of your `sys.path`, including your usersite paths. The nefarious system/dist `easy-install.pth` will then insert its items at the end of the system path.
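The effect of that one-line pth can be simulated the same way, again with a plain list and made-up paths standing in for the real `sys.path`:

```python
# sys.path with the user-site entries already present:
path = ["/pythonuserbase/site-packages", "/usr/lib/python2.7"]

# What fix_easy_install.pth sets before the system easy-install.pth runs:
egginsert = len(path)

# The system easy-install.pth now inserts at __egginsert, i.e. at the END:
path[egginsert:egginsert] = ["/usr/lib/python2.7/dist-packages"]
print(path)  # the user-site entry keeps priority
```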
58,466,616
I am executing the following sqlite command: ``` c.execute("SELECT surname,forename,count(*) from census_data group by surname, forename") ``` so that c.fetchall() is as follows: ``` (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) ``` Is it possible to construct a dict of the following form using a dict comprehension: ``` {'Griffin': {'John': 7, 'James': 23, 'Mary':30}, 'Griffith': {'John':4,'Catherine':5}} ``` this is as far as I got: ``` counts = {s:(f,c) for s,f,c in c.fetchall()} ``` which overwrites values. Im using python 3.
2019/10/19
[ "https://Stackoverflow.com/questions/58466616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/607846/" ]
You can use a [collections.defaultdict](https://docs.python.org/2/library/collections.html#collections.defaultdict) to create the inner dicts automatically when needed: ``` from collections import defaultdict data = (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) out = defaultdict(dict) for (name, first, value) in data: out[name][first] = value # {'Griffin': {'John': 7, 'James': 23, 'Mary': 30}, 'Griffith': {'John': 4, 'Catherine': 5}} ```
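One small follow-up worth knowing: a `defaultdict` keeps creating keys on every lookup, so it can be worth freezing the result into a plain `dict` once it is built (minimal sketch):

```python
from collections import defaultdict

out = defaultdict(dict)
out["Griffin"]["John"] = 7   # the inner dict is created automatically

plain = dict(out)            # plain dict: lookups no longer create keys
print(plain)                 # {'Griffin': {'John': 7}}
```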
Yes, with something like this. ```py my_query = (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) dict_query = {} for key1, key2, value in my_query: if key1 not in dict_query: dict_query[key1] = {} dict_query[key1][key2] = value ``` Edit 1: a more elegant version. ```py from collections import defaultdict dict_query = defaultdict(dict) for key1, key2, value in my_query: dict_query[key1][key2] = value ```
58,466,616
I am executing the following sqlite command: ``` c.execute("SELECT surname,forename,count(*) from census_data group by surname, forename") ``` so that c.fetchall() is as follows: ``` (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) ``` Is it possible to construct a dict of the following form using a dict comprehension: ``` {'Griffin': {'John': 7, 'James': 23, 'Mary':30}, 'Griffith': {'John':4,'Catherine':5}} ``` this is as far as I got: ``` counts = {s:(f,c) for s,f,c in c.fetchall()} ``` which overwrites values. Im using python 3.
2019/10/19
[ "https://Stackoverflow.com/questions/58466616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/607846/" ]
Yes, with something like this. ```py my_query = (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) dict_query = {} for key1, key2, value in my_query: if key1 not in dict_query: dict_query[key1] = {} dict_query[key1][key2] = value ``` Edit 1: a more elegant version. ```py from collections import defaultdict dict_query = defaultdict(dict) for key1, key2, value in my_query: dict_query[key1][key2] = value ```
Alternatively to `defaultdict` you can use the dictionary method `setdefault()`: ``` dct = {} for k1, k2, v2 in c.fetchall(): v1 = dct.setdefault(k1, {}) v1[k2] = v2 ``` or ``` for k1, k2, v2 in c.fetchall(): dct.setdefault(k1, {})[k2] = v2 ``` Result: ``` {'Griffin': {'John': 7, 'James': 23, 'Mary': 30}, 'Griffith': {'John': 4, 'Catherine': 5}} ```
58,466,616
I am executing the following sqlite command: ``` c.execute("SELECT surname,forename,count(*) from census_data group by surname, forename") ``` so that c.fetchall() is as follows: ``` (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) ``` Is it possible to construct a dict of the following form using a dict comprehension: ``` {'Griffin': {'John': 7, 'James': 23, 'Mary':30}, 'Griffith': {'John':4,'Catherine':5}} ``` this is as far as I got: ``` counts = {s:(f,c) for s,f,c in c.fetchall()} ``` which overwrites values. Im using python 3.
2019/10/19
[ "https://Stackoverflow.com/questions/58466616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/607846/" ]
Yes, with something like this. ```py my_query = (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) dict_query = {} for key1, key2, value in my_query: if key1 not in dict_query: dict_query[key1] = {} dict_query[key1][key2] = value ``` Edit 1: a more elegant version. ```py from collections import defaultdict dict_query = defaultdict(dict) for key1, key2, value in my_query: dict_query[key1][key2] = value ```
You can solve it using `itertools.groupby` and `operator.itemgetter`. ```py from operator import itemgetter from itertools import groupby result = { name: dict(v for _, v in values) for name, values in groupby(((x[0], x[1:]) for x in c.fetchall()), itemgetter(0))} print(result) # {'Griffin': {'John': 7, 'James': 23, 'Mary': 30}, 'Griffith': {'John': 4, 'Catherine': 5}} ``` The outer generator splits each row into the name and its remaining values, which `groupby(...)` then collapses by name. The inner loop simply builds a dict from those values.
58,466,616
I am executing the following sqlite command: ``` c.execute("SELECT surname,forename,count(*) from census_data group by surname, forename") ``` so that c.fetchall() is as follows: ``` (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) ``` Is it possible to construct a dict of the following form using a dict comprehension: ``` {'Griffin': {'John': 7, 'James': 23, 'Mary':30}, 'Griffith': {'John':4,'Catherine':5}} ``` this is as far as I got: ``` counts = {s:(f,c) for s,f,c in c.fetchall()} ``` which overwrites values. Im using python 3.
2019/10/19
[ "https://Stackoverflow.com/questions/58466616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/607846/" ]
You can use a [collections.defaultdict](https://docs.python.org/2/library/collections.html#collections.defaultdict) to create the inner dicts automatically when needed: ``` from collections import defaultdict data = (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) out = defaultdict(dict) for (name, first, value) in data: out[name][first] = value # {'Griffin': {'John': 7, 'James': 23, 'Mary': 30}, 'Griffith': {'John': 4, 'Catherine': 5}} ```
Here is a *dict comprehension* version, with a bit of `itertools.groupby` magic: ``` from itertools import groupby counts = {k: dict(_[1:] for _ in g) for k, g in groupby(c.fetchall(), key=lambda t: t[0])} print(counts) ``` The output: ``` {'Griffin': {'John': 7, 'James': 23, 'Mary': 30}, 'Griffith': {'John': 4, 'Catherine': 5}} ```
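One caveat worth adding: `groupby` only merges *adjacent* equal keys, so a comprehension like this relies on the rows arriving already grouped by surname (which the `GROUP BY` query here effectively produces). If the order were not guaranteed, sorting first would be needed; a small sketch with made-up rows:

```python
from itertools import groupby

rows = [("Griffith", "John", 4), ("Griffin", "John", 7), ("Griffith", "Catherine", 5)]

# Sort first so equal surnames are adjacent, then group:
counts = {k: {f: c for _, f, c in g}
          for k, g in groupby(sorted(rows), key=lambda t: t[0])}
print(counts)  # {'Griffin': {'John': 7}, 'Griffith': {'Catherine': 5, 'John': 4}}
```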
58,466,616
I am executing the following sqlite command: ``` c.execute("SELECT surname,forename,count(*) from census_data group by surname, forename") ``` so that c.fetchall() is as follows: ``` (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) ``` Is it possible to construct a dict of the following form using a dict comprehension: ``` {'Griffin': {'John': 7, 'James': 23, 'Mary':30}, 'Griffith': {'John':4,'Catherine':5}} ``` this is as far as I got: ``` counts = {s:(f,c) for s,f,c in c.fetchall()} ``` which overwrites values. Im using python 3.
2019/10/19
[ "https://Stackoverflow.com/questions/58466616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/607846/" ]
You can use a [collections.defaultdict](https://docs.python.org/2/library/collections.html#collections.defaultdict) to create the inner dicts automatically when needed: ``` from collections import defaultdict data = (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) out = defaultdict(dict) for (name, first, value) in data: out[name][first] = value # {'Griffin': {'John': 7, 'James': 23, 'Mary': 30}, 'Griffith': {'John': 4, 'Catherine': 5}} ```
Alternatively to `defaultdict` you can use the dictionary method `setdefault()`: ``` dct = {} for k1, k2, v2 in c.fetchall(): v1 = dct.setdefault(k1, {}) v1[k2] = v2 ``` or ``` for k1, k2, v2 in c.fetchall(): dct.setdefault(k1, {})[k2] = v2 ``` Result: ``` {'Griffin': {'John': 7, 'James': 23, 'Mary': 30}, 'Griffith': {'John': 4, 'Catherine': 5}} ```
58,466,616
I am executing the following sqlite command: ``` c.execute("SELECT surname,forename,count(*) from census_data group by surname, forename") ``` so that c.fetchall() is as follows: ``` (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) ``` Is it possible to construct a dict of the following form using a dict comprehension: ``` {'Griffin': {'John': 7, 'James': 23, 'Mary':30}, 'Griffith': {'John':4,'Catherine':5}} ``` this is as far as I got: ``` counts = {s:(f,c) for s,f,c in c.fetchall()} ``` which overwrites values. Im using python 3.
2019/10/19
[ "https://Stackoverflow.com/questions/58466616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/607846/" ]
You can use a [collections.defaultdict](https://docs.python.org/2/library/collections.html#collections.defaultdict) to create the inner dicts automatically when needed: ``` from collections import defaultdict data = (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) out = defaultdict(dict) for (name, first, value) in data: out[name][first] = value # {'Griffin': {'John': 7, 'James': 23, 'Mary': 30}, 'Griffith': {'John': 4, 'Catherine': 5}} ```
You can solve it using `itertools.groupby` and `operator.itemgetter`. ```py from operator import itemgetter from itertools import groupby result = { name: dict(v for _, v in values) for name, values in groupby(((x[0], x[1:]) for x in c.fetchall()), itemgetter(0))} print(result) # {'Griffin': {'John': 7, 'James': 23, 'Mary': 30}, 'Griffith': {'John': 4, 'Catherine': 5}} ``` The outer generator splits each row into the name and its remaining values, which `groupby(...)` then collapses by name. The inner loop simply builds a dict from those values.
58,466,616
I am executing the following sqlite command: ``` c.execute("SELECT surname,forename,count(*) from census_data group by surname, forename") ``` so that c.fetchall() is as follows: ``` (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) ``` Is it possible to construct a dict of the following form using a dict comprehension: ``` {'Griffin': {'John': 7, 'James': 23, 'Mary':30}, 'Griffith': {'John':4,'Catherine':5}} ``` this is as far as I got: ``` counts = {s:(f,c) for s,f,c in c.fetchall()} ``` which overwrites values. Im using python 3.
2019/10/19
[ "https://Stackoverflow.com/questions/58466616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/607846/" ]
Here is a *dict comprehension* version, with a bit of `itertools.groupby` magic: ``` from itertools import groupby counts = {k: dict(_[1:] for _ in g) for k, g in groupby(c.fetchall(), key=lambda t: t[0])} print(counts) ``` The output: ``` {'Griffin': {'John': 7, 'James': 23, 'Mary': 30}, 'Griffith': {'John': 4, 'Catherine': 5}} ```
Alternatively to `defaultdict` you can use the dictionary method `setdefault()`: ``` dct = {} for k1, k2, v2 in c.fetchall(): v1 = dct.setdefault(k1, {}) v1[k2] = v2 ``` or ``` for k1, k2, v2 in c.fetchall(): dct.setdefault(k1, {})[k2] = v2 ``` Result: ``` {'Griffin': {'John': 7, 'James': 23, 'Mary': 30}, 'Griffith': {'John': 4, 'Catherine': 5}} ```
58,466,616
I am executing the following sqlite command: ``` c.execute("SELECT surname,forename,count(*) from census_data group by surname, forename") ``` so that c.fetchall() is as follows: ``` (('Griffin','John', 7), ('Griffin','James', 23), ('Griffin','Mary',30), ('Griffith', 'John', 4), ('Griffith','Catherine', 5) ) ``` Is it possible to construct a dict of the following form using a dict comprehension: ``` {'Griffin': {'John': 7, 'James': 23, 'Mary':30}, 'Griffith': {'John':4,'Catherine':5}} ``` this is as far as I got: ``` counts = {s:(f,c) for s,f,c in c.fetchall()} ``` which overwrites values. Im using python 3.
2019/10/19
[ "https://Stackoverflow.com/questions/58466616", "https://Stackoverflow.com", "https://Stackoverflow.com/users/607846/" ]
Here is a *dict comprehension* version, with a bit of `itertools.groupby` magic: ``` from itertools import groupby counts = {k: dict(_[1:] for _ in g) for k, g in groupby(c.fetchall(), key=lambda t: t[0])} print(counts) ``` The output: ``` {'Griffin': {'John': 7, 'James': 23, 'Mary': 30}, 'Griffith': {'John': 4, 'Catherine': 5}} ```
You can solve it using `itertools.groupby` and `operator.itemgetter`. ```py from operator import itemgetter from itertools import groupby result = { name: dict(v for _, v in values) for name, values in groupby(((x[0], x[1:]) for x in c.fetchall()), itemgetter(0))} print(result) # {'Griffin': {'John': 7, 'James': 23, 'Mary': 30}, 'Griffith': {'John': 4, 'Catherine': 5}} ``` The outer generator splits each row into the name and its remaining values, which `groupby(...)` then collapses by name. The inner loop simply builds a dict from those values.
66,912,406
Been following a tutorial on udemy for python, and atm im suppose to get a django app deployed. Since I already had a vps, I didnt go with the solution on the tutorial using google cloud, so tried to configure the app on my vps, which is also running plesk. Followed the tutorial at <https://www.plesk.com/blog/tag/django-plesk/> to the letter the best I could, but keep getting the 403 error. ``` httpdocs -djangoProject ---djangoProject ------asgi.py ------__init__.py ------settings.py ------urls.py ------wsgi.py ---manage.py -passenger_wsgi.py -python-app-venv -tmp ``` passenger\_wsgi.py: ``` import sys, os ApplicationDirectory = 'djangoProject' ApplicationName = 'djangoProject' VirtualEnvDirectory = 'python-app-venv' VirtualEnv = os.path.join(os.getcwd(), VirtualEnvDirectory, 'bin', 'python') if sys.executable != VirtualEnv: os.execl(VirtualEnv, VirtualEnv, *sys.argv) sys.path.insert(0, os.path.join(os.getcwd(), ApplicationDirectory)) sys.path.insert(0, os.path.join(os.getcwd(), ApplicationDirectory, ApplicationName)) sys.path.insert(0, os.path.join(os.getcwd(), VirtualEnvDirectory, 'bin')) os.chdir(os.path.join(os.getcwd(), ApplicationDirectory)) os.environ.setdefault('DJANGO_SETTINGS_MODULE', ApplicationName + '.settings') from django.core.wsgi import get_wsgi_application application = get_wsgi_application() ``` passenger is enabled in > > "Tools & Settngs > Apache Web Server" > > > in "Websites & Domains > Domain > Hosting & DNS > Apache & nginx settings" I've got: "Additional directives for HTTP" and "Additional directives for HTTPS" both with: ``` PassengerEnabled On PassengerAppType wsgi PassengerStartupFile passenger_wsgi.py ``` and nginx proxy mode marked "Reverse Proxy Server (nginx)" is also running No idea what else I can give to aid in getting a solution, so if you're willing to assist and need more info please let me know. 
Very thankfull in advance EDIT: on a previous attempt, deploying a real app on a subdomain, was getting: > > [Thu Apr 01 22:52:37.928495 2021] [autoindex:error] [pid 23614:tid > 140423896925952] [client xx:xx:xx:xx:0] AH01276: Cannot serve > directory /var/www/vhosts/baya.pt/leve/leve/: No matching > DirectoryIndex > (index.html,index.cgi,index.pl,index.php,index.xhtml,index.htm,index.shtml) > found, and server-generated directory index forbidden by Options > directive > > > This time I'm getting no errors logged EDIT2: @Chris: Not sure what you mean, find no errors on the log folders (ssh), but on Plesk I get this several times: > > 2021-04-01 23:40:48 Error 94.61.142.214 403 GET / > HTTP/1.0 <https://baya.pt/> Mozilla/5.0 (X11; Linux x86\_64) > AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 > Safari/537.36 2.52 K Apache SSL/TLS access 2021-04-01 > 23:40:48 Error 94.61.142.214 AH01276: Cannot serve directory > /var/www/vhosts/baya.pt/httpdocs/djangoProject/: No matching > DirectoryIndex > (index.html,index.cgi,index.pl,index.php,index.xhtml,index.htm,index.shtml) > found, and server-generated directory index forbidden by Options > directive, referer: <https://baya.pt/> Apache error > > > EDIT 3: removing apache directives and adding to nginx directives: ``` passenger_enabled on; passenger_app_type wsgi; passenger_startup_file passenger_wsgi.py; ``` Now gives me a Passenger error page, log as follows: ``` [ N 2021-04-01 23:50:59.1819 908/T9 age/Cor/CoreMain.cpp:671 ]: Signal received. Gracefully shutting down... (send signal 2 more time(s) to force shutdown) [ N 2021-04-01 23:50:59.1819 908/T1 age/Cor/CoreMain.cpp:1246 ]: Received command to shutdown gracefully. Waiting until all clients have disconnected... 
[ N 2021-04-01 23:50:59.1820 908/Tb Ser/Server.h:902 ]: [ApiServer] Freed 0 spare client objects [ N 2021-04-01 23:50:59.1820 908/Tb Ser/Server.h:558 ]: [ApiServer] Shutdown finished [ N 2021-04-01 23:50:59.1820 908/T9 Ser/Server.h:902 ]: [ServerThr.1] Freed 0 spare client objects [ N 2021-04-01 23:50:59.1820 908/T9 Ser/Server.h:558 ]: [ServerThr.1] Shutdown finished [ N 2021-04-01 23:50:59.2765 30199/T1 age/Wat/WatchdogMain.cpp:1373 ]: Starting Passenger watchdog... [ N 2021-04-01 23:50:59.2871 908/T1 age/Cor/CoreMain.cpp:1325 ]: Passenger core shutdown finished [ N 2021-04-01 23:50:59.3329 30209/T1 age/Cor/CoreMain.cpp:1340 ]: Starting Passenger core... [ N 2021-04-01 23:50:59.3330 30209/T1 age/Cor/CoreMain.cpp:256 ]: Passenger core running in multi-application mode. [ N 2021-04-01 23:50:59.3472 30209/T1 age/Cor/CoreMain.cpp:1015 ]: Passenger core online, PID 30209 [ N 2021-04-01 23:51:01.4339 30209/T7 age/Cor/SecurityUpdateChecker.h:519 ]: Security update check: no update found (next check in 24 hours) App 31762 output: Error: Directory '/var/www/vhosts/baya.pt' is inaccessible because of a filesystem permission error. [ E 2021-04-01 23:51:02.9127 30209/Tc age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /var/www/vhosts/baya.pt/httpdocs: Directory '/var/www/vhosts/baya.pt' is inaccessible because of a filesystem permission error. ```
2021/04/01
[ "https://Stackoverflow.com/questions/66912406", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6291059/" ]
The '.no-content' class has display: none, which instantly removes the element, so the animation does not take place. I just removed that and it worked just fine. I also added some CSS to fix the position of the div (I assumed you want it like this). ```js const App = () => { const [opened, setOpened] = React.useState(false) const overlay = document.getElementById('overlay'); const handleClick = ()=>{ setOpened(!opened) if(overlay){ if(!opened) overlay.style.display = "block" else{ setTimeout(() => { overlay.style.display = 'none' }, 1000) // animation time } } } return ( <div> <button className='btn' onClick={handleClick }>{opened ? 'Close' : 'Open'}</button> <div style={{margin: 50 }}>Some page content that will be covered</div> <div id="overlay" className={opened ? 'content': 'no-content'}> <p> Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. </p> <button>I do nothing</button> <p>Tellus molestie nunc non blandit massa enim nec dui. Sapien et ligula ullamcorper malesuada proin libero nunc consequat interdum. Sit amet consectetur adipiscing elit pellentesque habitant morbi tristique. Nisi quis eleifend quam adipiscing vitae proin sagittis nisl rhoncus. Sit amet commodo nulla facilisi nullam vehicula. Vestibulum lorem sed risus ultricies tristique. Sit amet nisl suscipit adipiscing bibendum est ultricies integer quis. Tempus egestas sed sed risus pretium quam vulputate dignissim. Feugiat vivamus at augue eget arcu dictum. Consequat interdum varius sit amet mattis vulputate enim nulla. Sit amet risus nullam eget. Id neque aliquam vestibulum morbi blandit cursus risus at ultrices. 
Massa tempor nec feugiat nisl pretium fusce id velit. Vestibulum morbi blandit cursus risus. Id diam vel quam elementum pulvinar etiam non. Faucibus a pellentesque sit amet porttitor eget dolor morbi. Dictumst vestibulum rhoncus est pellentesque elit ullamcorper dignissim cras tincidunt. Lorem sed risus ultricies tristique. Viverra orci sagittis eu volutpat odio facilisis mauris. </p> </div> </div> ) } ReactDOM.render( <App />, document.getElementById('app') ); ``` ```css body { width: 500px; height: 500px; } .btn { position: fixed; top: 0; z-index: 9999 } @keyframes fadeOut { 0% { opacity: 1 } 100% { opacity: 0 } } @keyframes fadeIn { 0% { opacity: 0 } 100% { opacity: 1 } } .no-content { opacity: 0; animation: fadeOut 500ms linear; position: absolute; top:0; right:0; width: 100%; height: 100%; background-color: black; color: white; } .content { opacity: 1; animation: fadeIn 1s linear; position: absolute; top:0; right:0; width: 100%; height: 100%; background-color: black; color: white; } ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.8.0/umd/react.production.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.8.0/umd/react-dom.production.min.js"></script> <div id="app"></div> ```
You don't need animation for that; a transition is much cleaner. ```js const App = () => { const [opened, setOpened] = React.useState(false) return ( <div> <button className='btn' onClick={() => setOpened(!opened)}>{opened ? 'Close' : 'Open'}</button> <div style={{ margin: 50 }}>Some page content that will be covered</div> <div className={'content ' + (opened ? 'show-content' : '')}> <p> Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. </p> <button>I do nothing</button> <p> Tellus molestie nunc non blandit massa enim nec dui. Sapien et ligula ullamcorper malesuada proin libero nunc consequat interdum. Sit amet consectetur adipiscing elit pellentesque habitant morbi tristique. Nisi quis eleifend quam adipiscing vitae proin sagittis nisl rhoncus. Sit amet commodo nulla facilisi nullam vehicula. Vestibulum lorem sed risus ultricies tristique. Sit amet nisl suscipit adipiscing bibendum est ultricies integer quis. Tempus egestas sed sed risus pretium quam vulputate dignissim. Feugiat vivamus at augue eget arcu dictum. Consequat interdum varius sit amet mattis vulputate enim nulla. Sit amet risus nullam eget. Id neque aliquam vestibulum morbi blandit cursus risus at ultrices. Massa tempor nec feugiat nisl pretium fusce id velit. Vestibulum morbi blandit cursus risus. Id diam vel quam elementum pulvinar etiam non. Faucibus a pellentesque sit amet porttitor eget dolor morbi. Dictumst vestibulum rhoncus est pellentesque elit ullamcorper dignissim cras tincidunt. Lorem sed risus ultricies tristique. Viverra orci sagittis eu volutpat odio facilisis mauris. </p> </div> </div> ) } ReactDOM.render(<App />, document.getElementById('app')); ``` ```css body { width: 500px; height: 500px; } .btn { position: fixed; top: 0; z-index: 999; } .content { transition: 0.3s; opacity: 0; position: absolute; top: 0; right: 0; width: 100%; height: 100%; background-color: black; color: white; } .show-content { opacity: 1; } ``` ```html <script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.8.0/umd/react.production.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.8.0/umd/react-dom.production.min.js"></script> <div id="app"></div> ```
63,510,765
I am following Datacamp's tutorial on using convolutional autoencoders for classification [here](https://www.datacamp.com/community/tutorials/autoencoder-classifier-python). I understand from the tutorial that we only need the autoencoder's head (i.e. the encoder part) stacked onto a fully-connected layer to do the classification. After stacking, the resulting network (convolutional autoencoder) is trained twice. The first time with the encoder's layers frozen (trainable set to False): ``` for layer in full_model.layers[0:19]: layer.trainable = False ``` And then the layers are set back to trainable and the network is re-trained: ``` for layer in full_model.layers[0:19]: layer.trainable = True ``` I cannot understand why we are doing this twice. Anyone with experience working with conv-nets or autoencoders?
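To see why the two phases help, here is a framework-free toy sketch (a hypothetical `Layer` class, not the tutorial's actual Keras model): phase 1 trains only the new classifier head while the pretrained encoder is frozen, so the randomly initialised head cannot wreck the encoder's learned weights; phase 2 then fine-tunes everything together once the head is sensible.

```python
class Layer:
    """Toy stand-in for a Keras layer: one weight plus a trainable flag."""
    def __init__(self, weight, trainable=True):
        self.weight = weight
        self.trainable = trainable

def train_step(layers, step=0.1):
    # A real optimizer would use gradients; here every trainable layer
    # just gets nudged so we can see which weights move.
    for layer in layers:
        if layer.trainable:
            layer.weight -= step

encoder = [Layer(1.0), Layer(2.0)]   # pretrained encoder (to be frozen first)
head = [Layer(0.0)]                  # new, randomly initialised classifier head
full_model = encoder + head

# Phase 1: freeze the encoder, train only the head.
for layer in encoder:
    layer.trainable = False
train_step(full_model)

# Encoder weights are untouched; only the head moved.
assert [l.weight for l in encoder] == [1.0, 2.0]

# Phase 2: unfreeze everything and fine-tune the whole network.
for layer in encoder:
    layer.trainable = True
train_step(full_model)
```

In the real tutorial the mechanism is the same, only with gradient descent instead of the fixed nudge used here.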
2020/08/20
[ "https://Stackoverflow.com/questions/63510765", "https://Stackoverflow.com", "https://Stackoverflow.com/users/-1/" ]
You are trying to render an object, and when you convert JavaScript objects to a string they come out as "`[object Object]`". I think you are misunderstanding what the "name" prop of the input field should do in this context; you don't actually need it. A better way of writing your handleChange function would be: ```js function handleChange(e) { setUserReview(e.target.value); } ``` This code fragment will set the state of `userReview` to the string that is currently in the textarea, instead of an object `{ userReview: string }` like before. Note that this is not using the `name` prop. There is also another error: `type="text-area"` is not a valid value for the `type` attribute of the `<input/>` tag. You should do it like this: ``` <textarea placeholder="Leave a review..." onChange={handleChange} > {userReview} </textarea> ```
You're using invalid HTML: ``` <input type="text-area" /> ``` The [Textarea](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/textarea) element should be: ``` <textarea /> ``` --- Okay, I found the issue with the userReview state. You should use: ``` <textarea onChange...> {userReview.userReview} /* name of the userReview state */ </textarea> ```
2,893,193
I'd like to tell you what I've tried and then I'd really welcome any comments you can provide on how I can get PortAudio and PyAudio setup correctly! I've tried installing the stable and svn releases of PortAudio from [their website](http://www.portaudio.com/download.html) for my Core 2 Duo MacBook Pro running Snow Leopard. The stable release has a sizeof error that [can be fixed(?)](http://pujansrt.blogspot.com/2010/03/compiling-portaudio-on-snow-leopard-mac.html), but the daily svn release installs fine with `./configure && make && make install` (so this is what I'm using). The tests are compiled properly and I can get the binaries to produce output/can read microphone input. Ok, so then PyAudio has troubles. Installing from [source](http://people.csail.mit.edu/hubert/pyaudio/#sources) I get errors about not finding the libraries: ``` mwoods 13 pyaudio-0.2.3$ python setup.py build running build running build_py running build_ext building '_portaudio' extension gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch i386 -arch ppc -arch x86_64 -pipe -DMACOSX=1 -I/System/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c _portaudiomodule.c -o build/temp.macosx-10.6-universal-2.6/_portaudiomodule.o -fno-strict-aliasing _portaudiomodule.c:35:25: error: pa_mac_core.h: No such file or directory _portaudiomodule.c:679: error: expected specifier-qualifier-list before ‘PaMacCoreStreamInfo’ _portaudiomodule.c: In function ‘_pyAudio_MacOSX_hostApiSpecificStreamInfo_cleanup’: _portaudiomodule.c:690: error: ‘_pyAudio_Mac_HASSI’ has no member named ‘paMacCoreStreamInfo’ _portaudiomodule.c:691: error: ‘_pyAudio_Mac_HASSI’ has no member named ‘paMacCoreStreamInfo’ _portaudiomodule.c:692: error: ‘_pyAudio_Mac_HASSI’ has no member named ‘paMacCoreStreamInfo’ _portaudiomodule.c:695: error: ‘_pyAudio_Mac_HASSI’ has no member named ‘channelMap’ _portaudiomodule.c:696: error: ‘_pyAudio_Mac_HASSI’ has 
no member named ‘channelMap’ _portaudiomodule.c:699: error: ‘_pyAudio_Mac_HASSI’ has no member named ‘flags’ ... another 100 lines of this ... _portaudiomodule.c:2471: error: ‘paMacCoreMinimizeCPUButPlayNice’ undeclared (first use in this function) _portaudiomodule.c:2473: error: ‘paMacCoreMinimizeCPU’ undeclared (first use in this function) lipo: can't open input file: /var/folders/Qc/Qcl516fqHAWupTUV9BE9rU+++TI/-Tmp-//cc7BqpBc.out (No such file or directory) error: command 'gcc-4.2' failed with exit status 1 ``` I can install PyAudio from [their .dmg installer](http://people.csail.mit.edu/hubert/pyaudio/#binaries), but it targets python2.5. If I copy all of the related contents of /Library/Python/2.5/site-packages/ to /Library/Python/2.6/site-packages/ (this includes PyAudio-0.2.3-py2.5.egg-info, \_portaudio.so, pyaudio.py, pyaudio.pyc, and pyaudio.pyo) then my python2.6 can recognize it. ``` In [1]: import pyaudio Please build and install the PortAudio Python bindings first. ------------------------------------------------------------ Traceback (most recent call last): File "<ipython console>", line 1, in <module> File "/Library/Python/2.6/site-packages/pyaudio.py", line 103, in <module> sys.exit(-1) SystemExit: -1 Type %exit or %quit to exit IPython (%Exit or %Quit do so unconditionally). In [2]: ``` So this happens because `_portaudio` can't be imported. 
If I try to import that directly: ``` In [2]: import _portaudio ------------------------------------------------------------ Traceback (most recent call last): File "<ipython console>", line 1, in <module> ImportError: /Library/Python/2.6/site-packages/_portaudio.so: no appropriate 64-bit architecture (see "man python" for running in 32-bit mode) ``` Ok, so if I `export VERSIONER_PYTHON_PREFER_32_BIT=yes` and then run python again (well, ipython I suppose), we can see it works but with consequences: ``` In [1]: import pyaudio In [2]: pyaudio Out[2]: <module 'pyaudio' from '/Library/Python/2.6/site-packages/pyaudio.pyc'> In [3]: import pylab ------------------------------------------------------------ Traceback (most recent call last): File "<ipython console>", line 1, in <module> File "/Library/Python/2.6/site-packages/matplotlib-1.0.svn_r8037-py2.6-macosx-10.6-universal.egg/pylab.py", line 1, in <module> from matplotlib.pylab import * File "/Library/Python/2.6/site-packages/matplotlib-1.0.svn_r8037-py2.6-macosx-10.6-universal.egg/matplotlib/__init__.py", line 129, in <module> from rcsetup import defaultParams, validate_backend, validate_toolbar File "/Library/Python/2.6/site-packages/matplotlib-1.0.svn_r8037-py2.6-macosx-10.6-universal.egg/matplotlib/rcsetup.py", line 19, in <module> from matplotlib.colors import is_color_like File "/Library/Python/2.6/site-packages/matplotlib-1.0.svn_r8037-py2.6-macosx-10.6-universal.egg/matplotlib/colors.py", line 52, in <module> import numpy as np File "/Library/Python/2.6/site-packages/numpy-1.4.0.dev7542_20091216-py2.6-macosx-10.6-universal.egg/numpy/__init__.py", line 130, in <module> import add_newdocs File "/Library/Python/2.6/site-packages/numpy-1.4.0.dev7542_20091216-py2.6-macosx-10.6-universal.egg/numpy/add_newdocs.py", line 9, in <module> from lib import add_newdoc File "/Library/Python/2.6/site-packages/numpy-1.4.0.dev7542_20091216-py2.6-macosx-10.6-universal.egg/numpy/lib/__init__.py", line 4, in <module> from 
type_check import * File "/Library/Python/2.6/site-packages/numpy-1.4.0.dev7542_20091216-py2.6-macosx-10.6-universal.egg/numpy/lib/type_check.py", line 8, in <module> import numpy.core.numeric as _nx File "/Library/Python/2.6/site-packages/numpy-1.4.0.dev7542_20091216-py2.6-macosx-10.6-universal.egg/numpy/core/__init__.py", line 5, in <module> import multiarray ImportError: dlopen(/Library/Python/2.6/site-packages/numpy-1.4.0.dev7542_20091216-py2.6-macosx-10.6-universal.egg/numpy/core/multiarray.so, 2): no suitable image found. Did find: /Library/Python/2.6/site-packages/numpy-1.4.0.dev7542_20091216-py2.6-macosx-10.6-universal.egg/numpy/core/multiarray.so: mach-o, but wrong architecture ``` We can assume pylab was working before! I spent a while getting this far, but can someone help with this install or lend advice from a successful Snow Leopard install? Sorry for the long post, but I'm notorious for only giving partial information and I'm trying to fix that!
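As an aside, the "wrong architecture" / "no appropriate 64-bit architecture" errors above come down to which bitness the running interpreter actually has (the extensions involved only contain some of the slices in the fat binary). A quick way to check on any CPython:

```python
import platform
import struct

# The size of a C pointer in the running interpreter tells you its bitness,
# independent of which architectures the python binary itself contains.
bits = struct.calcsize("P") * 8
print("Interpreter is running %d-bit on %s" % (bits, platform.machine()))
```

With `VERSIONER_PYTHON_PREFER_32_BIT=yes` this reports 32, which is exactly why the x86_64-only numpy build stops importing.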
2010/05/23
[ "https://Stackoverflow.com/questions/2893193", "https://Stackoverflow.com", "https://Stackoverflow.com/users/51532/" ]
Thanks to PyAudio's author's speedy response to my inquiries, I now have a nicely installed copy. His directions are posted below for anyone who has similar issues. > > Hi Michael, > > > Try this: > > > 1) Make sure your directory layout is > like: > > > ./foo/pyaudio/portaudio-v19/ ./foo/pyaudio/ > > > 2) Build portaudio-v19 from sources, > as you have done > > > 3) cd ./foo/pyaudio/ 4) python > setup.py build --static-link > > > (See the comments at the top of > setup.py for more info on > --static-link) > > > If all goes well, inside > ./foo/pyaudio/build/lib.macosx-10.6-.../, > you'll find the built (fat) objects > comprising i386, ppc, and x86\_64 > binaries. You can also do a "python > setup.py install" if you like. > > > Best, Hubert > > >
I'm on Mac 10.5.8 Intel Core 2 Duo and hitting the same issue. The directory layout you need is ``` ./foo/pyaudio/portaudio-v19/ ./foo/pyaudio ``` The reason is that setup.py has the following: ``` portaudio_path = os.environ.get("PORTAUDIO_PATH", "./portaudio-v19") ``` Alternatively, you should be able to set the PORTAUDIO_PATH env variable and get it to work.
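The lookup quoted from setup.py is a plain environment-variable fallback; a minimal sketch of the same pattern (mirroring, not reproducing, pyaudio's setup.py) shows exactly what setting PORTAUDIO_PATH changes:

```python
import os

def portaudio_path(env=None):
    # Same pattern as in pyaudio's setup.py: honour PORTAUDIO_PATH if set,
    # otherwise fall back to a sibling ./portaudio-v19 checkout.
    env = os.environ if env is None else env
    return env.get("PORTAUDIO_PATH", "./portaudio-v19")

print(portaudio_path(env={}))
print(portaudio_path(env={"PORTAUDIO_PATH": "/opt/portaudio"}))
```

So either of the two layouts works: the default relative path, or an explicit PORTAUDIO_PATH pointing at wherever you built portaudio.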
70,872,795
I am new to C++ and I know much more Python than C++, and I have to port some code from C++ to Python. In the code to change I found this statement: ``` p->arity = std::stoi(x, nullptr, 10); ``` I think for the sake of simplicity we can use ``` p->arity = x; /* or something with pointers; I'm really a noob at C++ but I think this is not important */ ``` In Python, does this mean something like ``` p[arity] = x ``` or similar? Or what? I'm a little lost with all these (new to me) concepts like pointers and memory stuff, and now the -> operator. Thanks in advance
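For reference: `p->arity` is member access through a pointer, and `std::stoi(x, nullptr, 10)` parses the string `x` as a base-10 integer. In Python, objects are always handled by reference, so the closest equivalent is attribute access plus `int()` - not `p[arity]`, which would be indexing. A sketch (the `Node` class is hypothetical, standing in for whatever `p` points to):

```python
class Node:
    """Hypothetical stand-in for the struct/class that p points to in C++."""
    def __init__(self):
        self.arity = 0

p = Node()
x = "42"

# C++:    p->arity = std::stoi(x, nullptr, 10);
# Python: '->' becomes '.', and std::stoi(x, nullptr, 10) becomes int(x, 10).
# int() raises ValueError on bad input, much like stoi throws invalid_argument.
p.arity = int(x, 10)
```

So `p->arity = x;` without conversion would translate to plain `p.arity = x`, but keeping the `int(x, 10)` preserves the original string-to-integer behaviour.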
2022/01/27
[ "https://Stackoverflow.com/questions/70872795", "https://Stackoverflow.com", "https://Stackoverflow.com/users/11579387/" ]
The API keys are only available in the deployed function, not in the React app. You can call a function from your React app which then calls the MailChimp API. This keeps your API key out of the client-side code, which keeps it secure. As the documentation says, you set the API keys with the CLI in the terminal ``` firebase functions:config:set someservice.key="THE API KEY" someservice.id="THE CLIENT ID" ``` and then you can use them in a Firebase function by calling ``` const apiKey = functions.config().someservice.key ```
Firebase can use Google Cloud Platform services; you can integrate [GCP Secret Manager](https://cloud.google.com/secret-manager) into your functions. Google Secret Manager is a fully-managed, secure, and convenient storage system for such secrets. Developers have historically leveraged environment variables or the filesystem for managing secrets in Cloud Functions. This was largely because integrating with Secret Manager required developers to write custom code… until now. With this service you can store your raw keys (they will be encrypted) and retrieve them from your function code; this carries the benefit that you can restrict access via [Cloud IAM](https://cloud.google.com/iam) and service accounts. It also helps you define which members of your project, or which service accounts, can access the API keys (secrets). The permissions over the secrets can be configured so that even developers cannot see production environment keys, by assigning [access permissions](https://cloud.google.com/secret-manager/docs/access-control) to secrets while still allowing the function to get the secret (because the service account associated with your function can read the secret). Other benefits include: * Zero code changes. Cloud functions that already consume secrets via environment variables or files bundled with the source upload simply require an additional flag during deployment. The Cloud Functions service resolves and injects the secrets at runtime and the plaintext values are only visible inside the process. * Easy environment separation. It's easy to use the same codebase across multiple environments, e.g., dev, staging, and prod, because the secrets are decoupled from the code and are resolved at runtime. * Supports the 12-factor app pattern. Because secrets can be injected into environment variables at runtime, the native integration supports the 12-factor pattern while providing stronger security guarantees. * Centralized secret storage, access, and auditing. Leveraging Secret Manager as the centralized secrets management solution enables easy management of access controls, auditing, and access logs. In this document you can find a [code example](https://cloud.google.com/secret-manager/docs/quickstart#secretmanager-quickstart-nodejs) in JS showing how to use GCP Secret Manager.
45,224,882
As I understand, the current java.net.URL handshake (for a GSS/Kerberos authentication mode) always entails a 401 as a first leg operation, which is kind of inefficient if we know the client and server are going to use GSS/Kerberos, right? Does anyone know if preemptive authentication (where you can present the token upfront like in the python one <https://github.com/requests/requests-kerberos#preemptive-authentication>) is available in the java world? A quick google points towards <https://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html> but the preemptive example seems to be for the Basic scheme alone. Thanks!
2017/07/20
[ "https://Stackoverflow.com/questions/45224882", "https://Stackoverflow.com", "https://Stackoverflow.com/users/874076/" ]
I have faced the same issue and came to the same conclusion as you: preemptive SPNEGO authentication is supported neither in Oracle JRE HttpUrlConnection nor in Apache HTTP Components. I haven't checked other HTTP clients, but I'm almost sure it is the same there. I started working on an alternative SPNEGO client which can be used with any HTTP client - it's called [Kerb4J](https://github.com/bedrin/kerb4j). You can use it like this: ``` SpnegoClient spnegoClient = SpnegoClient.loginWithKeyTab("clientPrincipal", "C:/kerberos/clientPrincipal.keytab"); URL url = new URL("http://kerberized.service/helloworld"); URLConnection urlConnection = url.openConnection(); HttpURLConnection huc = (HttpURLConnection) urlConnection; SpnegoContext context = spnegoClient.createContext(url); huc.setRequestProperty("Authorization", context.createTokenAsAuthroizationHeader()); // Optional mutual authentication step String challenge = huc.getHeaderField("WWW-Authenticate").substring("Negotiate ".length()); byte[] decode = Base64.getDecoder().decode(challenge); context.processMutualAuthorization(decode, 0, decode.length); ```
After much investigation, it looks like preemptive Kerberos authentication is not available in the default HotSpot Java implementation. Apache HttpComponents is also not able to help with this. However, the default implementation does have the ability to send only the headers first (waiting for a 100 Continue before sending the body) when the payload is potentially large, as noted in the [Expect Header and 100-Continue response](https://www.rfc-editor.org/rfc/rfc7231#section-5.1.1) section. To enable this, we need to use [the fixed-length streaming mode](https://docs.oracle.com/javase/8/docs/api/java/net/HttpURLConnection.html#setFixedLengthStreamingMode-int-) ([or other similar means](https://stackoverflow.com/a/41710065/874076)). But as noted in the [javadoc](https://docs.oracle.com/javase/8/docs/api/java/net/HttpURLConnection.html#setFixedLengthStreamingMode-int-), authentication and redirection cannot be handled automatically - we are again back to the original problem.
45,224,882
As I understand, the current java.net.URL handshake (for a GSS/Kerberos authentication mode) always entails a 401 as a first leg operation, which is kind of inefficient if we know the client and server are going to use GSS/Kerberos, right? Does anyone know if preemptive authentication (where you can present the token upfront like in the python one <https://github.com/requests/requests-kerberos#preemptive-authentication>) is available in the java world? A quick google points towards <https://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html> but the preemptive example seems to be for the Basic scheme alone. Thanks!
2017/07/20
[ "https://Stackoverflow.com/questions/45224882", "https://Stackoverflow.com", "https://Stackoverflow.com/users/874076/" ]
After much investigation, it looks like preemptive Kerberos authentication is not available in the default HotSpot Java implementation. Apache HttpComponents is also not able to help with this. However, the default implementation does have the ability to send only the headers first (waiting for a 100 Continue before sending the body) when the payload is potentially large, as noted in the [Expect Header and 100-Continue response](https://www.rfc-editor.org/rfc/rfc7231#section-5.1.1) section. To enable this, we need to use [the fixed-length streaming mode](https://docs.oracle.com/javase/8/docs/api/java/net/HttpURLConnection.html#setFixedLengthStreamingMode-int-) ([or other similar means](https://stackoverflow.com/a/41710065/874076)). But as noted in the [javadoc](https://docs.oracle.com/javase/8/docs/api/java/net/HttpURLConnection.html#setFixedLengthStreamingMode-int-), authentication and redirection cannot be handled automatically - we are again back to the original problem.
The following example shows how you can do preemptive spnego login that uses a custom entry in the login.conf. This completely bypasses the AuthScheme stuff and does all the work of generating the "authorization" header. ``` import org.apache.commons.io.IOUtils; import org.apache.http.client.methods.CloseableHttpResponse; import org.apache.http.client.methods.HttpGet; import org.apache.http.impl.client.CloseableHttpClient; import org.apache.http.impl.client.HttpClients; import org.ietf.jgss.GSSContext; import org.ietf.jgss.GSSCredential; import org.ietf.jgss.GSSException; import org.ietf.jgss.GSSManager; import org.ietf.jgss.GSSName; import org.ietf.jgss.Oid; import javax.security.auth.Subject; import javax.security.auth.login.LoginContext; import java.io.File; import java.net.InetAddress; import java.security.PrivilegedExceptionAction; import java.util.Base64; public class AsyncHttpSpnego { public static final String SPNEGO_OID = "1.3.6.1.5.5.2"; private static final String KERBEROS_OID = "1.2.840.113554.1.2.2"; public static void main(String[] args) throws Exception { InetAddress inetAddress = InetAddress.getLocalHost(); String host = inetAddress.getHostName().toUpperCase(); System.setProperty("java.security.krb5.conf", new File(host + "-krb5.ini").getCanonicalPath()); System.setProperty("javax.security.auth.useSubjectCredsOnly", "false"); System.setProperty("java.security.auth.login.config", new File(host + "-login.conf").getCanonicalPath()); LoginContext lc = new LoginContext("anotherentry"); lc.login(); byte[] token = new byte[0]; token = getAuthToken(host, lc, token); String authorizationHeader = "Negotiate" + " " + Base64.getEncoder().encodeToString(token); System.out.println("Next Authorization header: " + authorizationHeader); CloseableHttpClient closeableHttpClient = HttpClients.createMinimal(); HttpGet httpget = new HttpGet("http://" + host + ":81/nick.txt"); httpget.setHeader("Authorization", authorizationHeader); CloseableHttpResponse 
closeableHttpResponse = closeableHttpClient.execute(httpget); try { System.out.println(IOUtils.toString(closeableHttpResponse.getEntity().getContent())); } finally { closeableHttpResponse.close(); } } private static byte[] getAuthToken(String host, LoginContext lc, byte[] inToken) throws GSSException, java.security.PrivilegedActionException { Oid negotiationOid = new Oid(SPNEGO_OID); GSSManager manager = GSSManager.getInstance(); final PrivilegedExceptionAction<GSSCredential> action = () -> manager.createCredential(null, GSSCredential.INDEFINITE_LIFETIME, negotiationOid, GSSCredential.INITIATE_AND_ACCEPT); boolean tryKerberos = false; GSSContext gssContext = null; try { GSSName serverName = manager.createName("HTTP@" + host, GSSName.NT_HOSTBASED_SERVICE); gssContext = manager.createContext(serverName.canonicalize(negotiationOid), negotiationOid, Subject.doAs(lc.getSubject(), action), GSSContext.DEFAULT_LIFETIME); gssContext.requestMutualAuth(true); gssContext.requestCredDeleg(true); } catch (GSSException ex) { if (ex.getMajor() == GSSException.BAD_MECH) { System.out.println("GSSException BAD_MECH, retry with Kerberos MECH"); tryKerberos = true; } else { throw ex; } } if (tryKerberos) { Oid kerbOid = new Oid(KERBEROS_OID); GSSName serverName = manager.createName("HTTP@" + host, GSSName.NT_HOSTBASED_SERVICE); gssContext = manager.createContext(serverName.canonicalize(kerbOid), kerbOid, Subject.doAs(lc.getSubject(), action), GSSContext.DEFAULT_LIFETIME); gssContext.requestMutualAuth(true); gssContext.requestCredDeleg(true); } return gssContext.initSecContext(inToken, 0, inToken.length); } } ```
45,224,882
As I understand, the current java.net.URL handshake (for a GSS/Kerberos authentication mode) always entails a 401 as a first leg operation, which is kind of inefficient if we know the client and server are going to use GSS/Kerberos, right? Does anyone know if preemptive authentication (where you can present the token upfront like in the python one <https://github.com/requests/requests-kerberos#preemptive-authentication>) is available in the java world? A quick google points towards <https://hc.apache.org/httpcomponents-client-ga/tutorial/html/authentication.html> but the preemptive example seems to be for the Basic scheme alone. Thanks!
2017/07/20
[ "https://Stackoverflow.com/questions/45224882", "https://Stackoverflow.com", "https://Stackoverflow.com/users/874076/" ]
I have faced the same issue and came to the same conclusion as you: preemptive SPNEGO authentication is supported neither in Oracle JRE HttpUrlConnection nor in Apache HTTP Components. I haven't checked other HTTP clients, but I'm almost sure it is the same there. I started working on an alternative SPNEGO client which can be used with any HTTP client - it's called [Kerb4J](https://github.com/bedrin/kerb4j). You can use it like this: ``` SpnegoClient spnegoClient = SpnegoClient.loginWithKeyTab("clientPrincipal", "C:/kerberos/clientPrincipal.keytab"); URL url = new URL("http://kerberized.service/helloworld"); URLConnection urlConnection = url.openConnection(); HttpURLConnection huc = (HttpURLConnection) urlConnection; SpnegoContext context = spnegoClient.createContext(url); huc.setRequestProperty("Authorization", context.createTokenAsAuthroizationHeader()); // Optional mutual authentication step String challenge = huc.getHeaderField("WWW-Authenticate").substring("Negotiate ".length()); byte[] decode = Base64.getDecoder().decode(challenge); context.processMutualAuthorization(decode, 0, decode.length); ```
The following example shows how you can do preemptive spnego login that uses a custom entry in the login.conf. This completely bypasses the AuthScheme stuff and does all the work of generating the "authorization" header. ``` import org.apache.commons.io.IOUtils; import org.apache.http.client.methods.CloseableHttpResponse; import org.apache.http.client.methods.HttpGet; import org.apache.http.impl.client.CloseableHttpClient; import org.apache.http.impl.client.HttpClients; import org.ietf.jgss.GSSContext; import org.ietf.jgss.GSSCredential; import org.ietf.jgss.GSSException; import org.ietf.jgss.GSSManager; import org.ietf.jgss.GSSName; import org.ietf.jgss.Oid; import javax.security.auth.Subject; import javax.security.auth.login.LoginContext; import java.io.File; import java.net.InetAddress; import java.security.PrivilegedExceptionAction; import java.util.Base64; public class AsyncHttpSpnego { public static final String SPNEGO_OID = "1.3.6.1.5.5.2"; private static final String KERBEROS_OID = "1.2.840.113554.1.2.2"; public static void main(String[] args) throws Exception { InetAddress inetAddress = InetAddress.getLocalHost(); String host = inetAddress.getHostName().toUpperCase(); System.setProperty("java.security.krb5.conf", new File(host + "-krb5.ini").getCanonicalPath()); System.setProperty("javax.security.auth.useSubjectCredsOnly", "false"); System.setProperty("java.security.auth.login.config", new File(host + "-login.conf").getCanonicalPath()); LoginContext lc = new LoginContext("anotherentry"); lc.login(); byte[] token = new byte[0]; token = getAuthToken(host, lc, token); String authorizationHeader = "Negotiate" + " " + Base64.getEncoder().encodeToString(token); System.out.println("Next Authorization header: " + authorizationHeader); CloseableHttpClient closeableHttpClient = HttpClients.createMinimal(); HttpGet httpget = new HttpGet("http://" + host + ":81/nick.txt"); httpget.setHeader("Authorization", authorizationHeader); CloseableHttpResponse 
closeableHttpResponse = closeableHttpClient.execute(httpget); try { System.out.println(IOUtils.toString(closeableHttpResponse.getEntity().getContent())); } finally { closeableHttpResponse.close(); } } private static byte[] getAuthToken(String host, LoginContext lc, byte[] inToken) throws GSSException, java.security.PrivilegedActionException { Oid negotiationOid = new Oid(SPNEGO_OID); GSSManager manager = GSSManager.getInstance(); final PrivilegedExceptionAction<GSSCredential> action = () -> manager.createCredential(null, GSSCredential.INDEFINITE_LIFETIME, negotiationOid, GSSCredential.INITIATE_AND_ACCEPT); boolean tryKerberos = false; GSSContext gssContext = null; try { GSSName serverName = manager.createName("HTTP@" + host, GSSName.NT_HOSTBASED_SERVICE); gssContext = manager.createContext(serverName.canonicalize(negotiationOid), negotiationOid, Subject.doAs(lc.getSubject(), action), GSSContext.DEFAULT_LIFETIME); gssContext.requestMutualAuth(true); gssContext.requestCredDeleg(true); } catch (GSSException ex) { if (ex.getMajor() == GSSException.BAD_MECH) { System.out.println("GSSException BAD_MECH, retry with Kerberos MECH"); tryKerberos = true; } else { throw ex; } } if (tryKerberos) { Oid kerbOid = new Oid(KERBEROS_OID); GSSName serverName = manager.createName("HTTP@" + host, GSSName.NT_HOSTBASED_SERVICE); gssContext = manager.createContext(serverName.canonicalize(kerbOid), kerbOid, Subject.doAs(lc.getSubject(), action), GSSContext.DEFAULT_LIFETIME); gssContext.requestMutualAuth(true); gssContext.requestCredDeleg(true); } return gssContext.initSecContext(inToken, 0, inToken.length); } } ```
52,216,312
Data: ``` {"Survived":{"0":0,"1":1,"2":1,"3":1,"4":0,"5":0,"6":0,"7":0,"8":1,"9":1,"10":1,"11":1,"12":0,"13":0,"14":0,"15":1,"16":0,"17":1,"18":0,"19":1,"20":0,"21":1,"22":1,"23":1,"24":0,"25":1,"26":0,"27":0,"28":1,"29":0,"30":0,"31":1,"32":1,"33":0,"34":0,"35":0,"36":1,"37":0,"38":0,"39":1,"40":0,"41":0,"42":0,"43":1,"44":1,"45":0,"46":0,"47":1,"48":0,"49":0},"Pclass":{"0":3,"1":1,"2":3,"3":1,"4":3,"5":3,"6":1,"7":3,"8":3,"9":2,"10":3,"11":1,"12":3,"13":3,"14":3,"15":2,"16":3,"17":2,"18":3,"19":3,"20":2,"21":2,"22":3,"23":1,"24":3,"25":3,"26":3,"27":1,"28":3,"29":3,"30":1,"31":1,"32":3,"33":2,"34":1,"35":1,"36":3,"37":3,"38":3,"39":3,"40":3,"41":2,"42":3,"43":2,"44":3,"45":3,"46":3,"47":3,"48":3,"49":3},"Sex":{"0":"male","1":"female","2":"female","3":"female","4":"male","5":"male","6":"male","7":"male","8":"female","9":"female","10":"female","11":"female","12":"male","13":"male","14":"female","15":"female","16":"male","17":"male","18":"female","19":"female","20":"male","21":"male","22":"female","23":"male","24":"female","25":"female","26":"male","27":"male","28":"female","29":"male","30":"male","31":"female","32":"female","33":"male","34":"male","35":"male","36":"male","37":"male","38":"female","39":"female","40":"female","41":"female","42":"male","43":"female","44":"female","45":"male","46":"male","47":"female","48":"male","49":"female"},"Age":{"0":22.0,"1":38.0,"2":26.0,"3":35.0,"4":35.0,"5":28.0,"6":54.0,"7":2.0,"8":27.0,"9":14.0,"10":4.0,"11":58.0,"12":20.0,"13":39.0,"14":14.0,"15":55.0,"16":2.0,"17":28.0,"18":31.0,"19":28.0,"20":35.0,"21":34.0,"22":15.0,"23":28.0,"24":8.0,"25":38.0,"26":28.0,"27":19.0,"28":28.0,"29":28.0,"30":40.0,"31":28.0,"32":28.0,"33":66.0,"34":28.0,"35":42.0,"36":28.0,"37":21.0,"38":18.0,"39":14.0,"40":40.0,"41":27.0,"42":28.0,"43":3.0,"44":19.0,"45":28.0,"46":28.0,"47":28.0,"48":28.0,"49":18.0},"SibSp":{"0":1,"1":1,"2":0,"3":1,"4":0,"5":0,"6":0,"7":3,"8":0,"9":1,"10":1,"11":0,"12":0,"13":1,"14":0,"15":0,"16":4,"17":0,"18":1,"19":0,"20":
0,"21":0,"22":0,"23":0,"24":3,"25":1,"26":0,"27":3,"28":0,"29":0,"30":0,"31":1,"32":0,"33":0,"34":1,"35":1,"36":0,"37":0,"38":2,"39":1,"40":1,"41":1,"42":0,"43":1,"44":0,"45":0,"46":1,"47":0,"48":2,"49":1},"Parch":{"0":0,"1":0,"2":0,"3":0,"4":0,"5":0,"6":0,"7":1,"8":2,"9":0,"10":1,"11":0,"12":0,"13":5,"14":0,"15":0,"16":1,"17":0,"18":0,"19":0,"20":0,"21":0,"22":0,"23":0,"24":1,"25":5,"26":0,"27":2,"28":0,"29":0,"30":0,"31":0,"32":0,"33":0,"34":0,"35":0,"36":0,"37":0,"38":0,"39":0,"40":0,"41":0,"42":0,"43":2,"44":0,"45":0,"46":0,"47":0,"48":0,"49":0},"Fare":{"0":7.25,"1":71.2833,"2":7.925,"3":53.1,"4":8.05,"5":8.4583,"6":51.8625,"7":21.075,"8":11.1333,"9":30.0708,"10":16.7,"11":26.55,"12":8.05,"13":31.275,"14":7.8542,"15":16.0,"16":29.125,"17":13.0,"18":18.0,"19":7.225,"20":26.0,"21":13.0,"22":8.0292,"23":35.5,"24":21.075,"25":31.3875,"26":7.225,"27":263.0,"28":7.8792,"29":7.8958,"30":27.7208,"31":146.5208,"32":7.75,"33":10.5,"34":82.1708,"35":52.0,"36":7.2292,"37":8.05,"38":18.0,"39":11.2417,"40":9.475,"41":21.0,"42":7.8958,"43":41.5792,"44":7.8792,"45":8.05,"46":15.5,"47":7.75,"48":21.6792,"49":17.8},"Embarked":{"0":"S","1":"C","2":"S","3":"S","4":"S","5":"Q","6":"S","7":"S","8":"S","9":"C","10":"S","11":"S","12":"S","13":"S","14":"S","15":"S","16":"Q","17":"S","18":"S","19":"C","20":"S","21":"S","22":"Q","23":"S","24":"S","25":"S","26":"C","27":"S","28":"Q","29":"S","30":"C","31":"C","32":"Q","33":"S","34":"C","35":"S","36":"C","37":"S","38":"S","39":"C","40":"S","41":"S","42":"C","43":"C","44":"Q","45":"S","46":"Q","47":"Q","48":"C","49":"S"},"Sex_Code":{"0":1,"1":0,"2":0,"3":0,"4":1,"5":1,"6":1,"7":1,"8":0,"9":0,"10":0,"11":0,"12":1,"13":1,"14":0,"15":0,"16":1,"17":1,"18":0,"19":0,"20":1,"21":1,"22":0,"23":1,"24":0,"25":0,"26":1,"27":1,"28":0,"29":1,"30":1,"31":0,"32":0,"33":1,"34":1,"35":1,"36":1,"37":1,"38":0,"39":0,"40":0,"41":0,"42":1,"43":0,"44":0,"45":1,"46":1,"47":0,"48":1,"49":0},"Embarked_Code":{"0":2,"1":0,"2":2,"3":2,"4":2,"5":1,"6":2,"7":2,"8":2,"9":
0,"10":2,"11":2,"12":2,"13":2,"14":2,"15":2,"16":1,"17":2,"18":2,"19":0,"20":2,"21":2,"22":1,"23":2,"24":2,"25":2,"26":0,"27":2,"28":1,"29":2,"30":0,"31":0,"32":1,"33":2,"34":0,"35":2,"36":0,"37":2,"38":2,"39":0,"40":2,"41":2,"42":0,"43":0,"44":1,"45":2,"46":1,"47":1,"48":0,"49":2}} ``` I'm playing around with the Titanic data-set from Kaggle. I'm trying to find out what percentage within each Pclass of men and women survived. Groupby example: ``` train_df.groupby(['Pclass','Sex','Survived']).apply(lambda x: len(x)).unstack(2).plot(kind='bar') ``` This shows me within each class how many men and women survived and how many did not, but it would visually be better to see what percentage of men and women survived within each class. Desired Result: ``` train_df.groupby(['Pclass','Sex','Survived']).apply(lambda x: len(x)).unstack(2)[1]/(train_df.groupby(['Pclass','Sex','Survived']).apply(lambda x: len(x)).unstack(2)[1]+train_df.groupby(['Pclass','Sex','Survived']).apply(lambda x: len(x)).unstack(2)[0]) ``` This looks like it gets the desired result, but I'm wondering if there is a much more pythonic way of doing this? like a normalize=True option would be slick. End goal: A bar chart of the ratio survived for each sex within each Pclass
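As an aside on the end goal: because `Survived` is 0/1, a per-group mean is exactly the survival ratio (in pandas that would be `train_df.groupby(['Pclass','Sex'])['Survived'].mean()`). The same mean-of-0/1 trick can be sketched in plain Python — the rows below are made up for illustration, standing in for the real data:

```python
from collections import defaultdict

# Toy rows mirroring the question's columns: (Pclass, Sex, Survived).
rows = [
    (3, "male", 0), (1, "female", 1), (3, "female", 1),
    (1, "female", 1), (3, "male", 0), (3, "male", 1),
]

# group -> [survived_sum, count]; the mean of a 0/1 column is the ratio survived.
totals = defaultdict(lambda: [0, 0])
for pclass, sex, survived in rows:
    acc = totals[(pclass, sex)]
    acc[0] += survived
    acc[1] += 1

ratios = {group: s / n for group, (s, n) in totals.items()}
print(ratios)  # e.g. 1 of the 3 third-class males above survived
```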
2018/09/07
[ "https://Stackoverflow.com/questions/52216312", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7170271/" ]
Try this ``` @Override public void onValidationFailed(View failedView, Rule<?> failedRule) { String message = failedRule.getFailureMessage(); if (failedView instanceof EditText) { failedView.requestFocus(); if (!TextUtils.isEmpty(message)) { ((EditText) failedView).setError(message); } } else { Toast.makeText(this, "Record Not Saved", Toast.LENGTH_SHORT).show(); } } ```
> > I found the problem: > > > ``` android:theme="@style/TextLabel" ``` > > I had to create a theme first and then a style and use it like: > > > ``` <style name="TextLabel" parent="BellicTheme"> ``` > > Thanks everyone > > >
44,550,192
Suppose I have the following dict... ``` sample = { 'a' : 100, 'b' : 3, 'e' : 42, 'c' : 250, 'f' : 42, 'd' : 42, } ``` I want to sort this dict with the highest order sort being by value and the lower order sort being by key. The key-value pairs of the result would be this ... ``` ( ('b', 3), ('d', 42), ('e', 42), ('f', 42), ('a', 100), ('c', 250) ) ``` I already know how to do this by writing several lines of python code. However, I'm looking for a python one-liner that will perform this sort, possibly using a comprehension or one or more of python's functional programming constructs. Is such a one-liner even possible in python?
2017/06/14
[ "https://Stackoverflow.com/questions/44550192", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1800838/" ]
You can define a lambda that uses both the value and key. ``` sorted(sample.items(), key=lambda x: (x[1], x[0])) ```
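Run against the question's dict, the tuple-key lambda produces exactly the desired ordering — ties on the value 42 are broken alphabetically by key:

```python
sample = {'a': 100, 'b': 3, 'e': 42, 'c': 250, 'f': 42, 'd': 42}

# Sort by value first, then by key to break ties among equal values.
result = tuple(sorted(sample.items(), key=lambda x: (x[1], x[0])))
print(result)
# (('b', 3), ('d', 42), ('e', 42), ('f', 42), ('a', 100), ('c', 250))
```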
You can use the operator module: ``` import operator sample = { 'a' : 100, 'b' : 3, 'e' : 42, 'c' : 250, 'f' : 42, 'd' : 42, } sorted_by_value = tuple(sorted(sample.items(), key=operator.itemgetter(1))) sorted_by_key = tuple(sorted(sample.items(), key=operator.itemgetter(0))) ``` sorted\_by\_value (the sort is stable, so on Python 3.7+ ties keep the dict's insertion order): ``` (('b', 3), ('e', 42), ('f', 42), ('d', 42), ('a', 100), ('c', 250)) ``` sorted\_by\_key: ``` (('a', 100), ('b', 3), ('c', 250), ('d', 42), ('e', 42), ('f', 42)) ```
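Note that `itemgetter` also accepts multiple indices, which collapses the value-then-key sort into a single call:

```python
import operator

sample = {'a': 100, 'b': 3, 'e': 42, 'c': 250, 'f': 42, 'd': 42}

# itemgetter(1, 0) returns a (value, key) tuple per item, so one sorted()
# call orders by value and breaks ties by key.
result = tuple(sorted(sample.items(), key=operator.itemgetter(1, 0)))
print(result)
# (('b', 3), ('d', 42), ('e', 42), ('f', 42), ('a', 100), ('c', 250))
```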
37,000,231
I have a Pandas DataFrame as follow: ```python In [28]: df = pd.DataFrame({'A':['CA', 'FO', 'CAP', 'CP'], 'B':['Name1', 'Name2', 'Name3', 'Name4'], 'C':['One', 'Two', 'Other', 'Some']}) In [29]: df Out[29]: A B C 0 CA Name1 One 1 FO Name2 Two 2 CAP Name3 Other 3 CP Name4 Some ``` I am trying to count all records in column A with values of `'CA'` and `'CP'`, to do this I am executing the next: ```python In [30]: len(df.groupby('A').filter(lambda x: x['A'] == 'CA')) Out[30]: 1 ``` There is a way to get both information in a single sentence?, because if I try to do something like this: ```python In [32]: len(df.groupby('A').filter(lambda x: x['A'] == 'CA' or ....: x['A'] == 'CP')) ``` I am getting this error: ```python ValueError Traceback (most recent call last) <ipython-input-32-111c3fde30f2> in <module>() ----> 1 len(df.groupby('A').filter(lambda x: x['A'] == 'CA') or 2 x['A'] == 'CP') c:\python27\lib\site-packages\pandas\core\generic.pyc in __nonzero__(self) 885 raise ValueError("The truth value of a {0} is ambiguous. " 886 "Use a.empty, a.bool(), a.item(), a.any() or a.all()." --> 887 .format(self.__class__.__name__)) 888 889 __bool__ = __nonzero__ ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). ```
2016/05/03
[ "https://Stackoverflow.com/questions/37000231", "https://Stackoverflow.com", "https://Stackoverflow.com/users/3908401/" ]
You explicitly set the input's ID to "TextBox" but you're trying to retrieve it as `<%= TextBox.ClientID %>`; use "TextBox" directly as the ID (and better, give the ID a more meaningful name). ClientID is used for ASP.NET controls whose IDs are autogenerated by the ASP.NET runtime.
Turns out I was referencing the wrong control. Referencing the correct control in the script solved my problem. The rest of the code is totally usable. Referencing a control with `#` in the call to the ID is required for jQuery functions but not plain Javascript. `<%= TextBox.ClientID %>` will work even for HTML inputs where the runat attribute is set to server. This will provide the correct ID when using master pages.
10,414,210
Python 2.6 introduced a `next` function. Why was this necessary? One could always type `obj.next()` instead of `next(obj)`. Is the latter more `pythonic`?
2012/05/02
[ "https://Stackoverflow.com/questions/10414210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/552671/" ]
> > [`next(iterator[, default])`](http://docs.python.org/2/library/functions.html#next) > > > Retrieve the next item from the *iterator* by calling its `next()` method (`__next__()` in Python 3). If *default* is given, it is returned if the iterator is exhausted, otherwise `StopIteration` is raised. > > > You get the `default` option.
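A quick illustration of the `default` behavior:

```python
it = iter([10, 20])

print(next(it))          # 10 — advances the iterator
print(next(it))          # 20
print(next(it, "done"))  # "done" — exhausted: returns the default instead of raising StopIteration
```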
Apart from the obvious additional functionality, it also looks better when used together with generator expressions. Compare ``` (x for x in lst if x > 2).next() ``` to ``` next(x for x in lst if x > 2) ``` The latter is a lot more consistent with the rest of Python's style, IMHO.
10,414,210
Python 2.6 introduced a `next` function. Why was this necessary? One could always type `obj.next()` instead of `next(obj)`. Is the latter more `pythonic`?
2012/05/02
[ "https://Stackoverflow.com/questions/10414210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/552671/" ]
[PEP 3114](http://www.python.org/dev/peps/pep-3114/) describes this change. An excerpt about the motivation: > > This PEP proposes that the `next` method be renamed to `__next__`, > consistent with all the other protocols in Python in which a method is > implicitly called as part of a language-level protocol, and that a > built-in function named `next` be introduced to invoke `__next__` method, > consistent with the manner in which other protocols are explicitly > invoked. > > > Be sure to read this PEP for more interesting details. As for why *you* want to use the `next` built-in: one good reason is that the `next` method disappears in Python 3, so for portability it's better to start using the `next` built-in as soon as possible.
> > [`next(iterator[, default])`](http://docs.python.org/2/library/functions.html#next) > > > Retrieve the next item from the *iterator* by calling its `next()` method (`__next__()` in Python 3). If *default* is given, it is returned if the iterator is exhausted, otherwise `StopIteration` is raised. > > > You get the `default` option.
10,414,210
Python 2.6 introduced a `next` function. Why was this necessary? One could always type `obj.next()` instead of `next(obj)`. Is the latter more `pythonic`?
2012/05/02
[ "https://Stackoverflow.com/questions/10414210", "https://Stackoverflow.com", "https://Stackoverflow.com/users/552671/" ]
[PEP 3114](http://www.python.org/dev/peps/pep-3114/) describes this change. An excerpt about the motivation: > > This PEP proposes that the `next` method be renamed to `__next__`, > consistent with all the other protocols in Python in which a method is > implicitly called as part of a language-level protocol, and that a > built-in function named `next` be introduced to invoke `__next__` method, > consistent with the manner in which other protocols are explicitly > invoked. > > > Be sure to read this PEP for more interesting details. As for why *you* want to use the `next` built-in: one good reason is that the `next` method disappears in Python 3, so for portability it's better to start using the `next` built-in as soon as possible.
Apart from the obvious additional functionality, it also looks better when used together with generator expressions. Compare ``` (x for x in lst if x > 2).next() ``` to ``` next(x for x in lst if x > 2) ``` The latter is a lot more consistent with the rest of Python's style, IMHO.
28,242,398
I am trying to create an algorithm in Python 2.7.9 which can be viewed below: ![enter image description here](https://i.stack.imgur.com/4qsGy.gif) This equates to: `10/3 (-510 + sqrt(15) * sqrt(-44879 + 1000 * y))` When I try to solve it in python with the following code: ``` from __future__ import division import math y = 66 x = "%0.2f" % (10/3 (-510 + sqrt(15) * sqrt(-44879 + 1000 * y))) print x ``` I receive the following error: `TypeError: 'int' object is not callable` Why is this? I have another algorithm below which works just fine: ``` x = "%0.2f" % (-5/4*(-463 + math.sqrt(1216881 - 16000 *y))) ```
2015/01/30
[ "https://Stackoverflow.com/questions/28242398", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4241308/" ]
You are missing the multiplication operator below: ``` x = "%0.2f" % (10/3 (-510 + sqrt(15) * sqrt(-44879 + 1000 * y))) ^ Need to add '*' ```
Where is the multiplication operator? ``` x = "%0.2f" % (10/3 * (-510 + sqrt(15) * sqrt(-44879 + 1000 * y))) ``` Tip: whenever you get `TypeError: 'int' object is not callable`, it means you have something like an integer followed immediately by an opening parenthesis. Watch out for that and debugging will be a piece of cake.
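For completeness, the corrected expression with the `*` in place runs cleanly (shown here under Python 3, where the `__future__` division import is unnecessary):

```python
import math

y = 66
# With the multiplication operator inserted, 10/3 is no longer "called".
x = "%0.2f" % (10 / 3 * (-510 + math.sqrt(15) * math.sqrt(-44879 + 1000 * y)))
print(x)
```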
28,064,563
I am using distutils (setup.py) to create rpm-packages from my python projects. Now, one of my projects which had a very specific task (say png-creation) is moved to a more general project (image-toolkit). 1. Is there a way to tell the user that the old package (png-creation) is obsolete when he/she installs the new package (image-toolkit). 2. Is there a way to make a new version of the old package (png-creation) which tells the user that he/she should use the new package (image-toolkit) instead? These are two different scenarios from which the first one would be my favorite. In both scenarios I assume that a user has installed my package (png-creation) with his package manager. In the first (my favorite) scenario the following would happen: * The user runs an update with his package manager. * The package manager recognizes that png-creation is obsolete and that image-toolkit has to be installed instead. So the package manger removes png-creation and installs image-toolkit. If this scenario is not possible, the second one would be: * I tell my users that they have to install image-toolkit. * The user runs install image-toolkit with his package manager. * The package manager recognizes that png-creation is not needed anymore and removes it.
2015/01/21
[ "https://Stackoverflow.com/questions/28064563", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4034527/" ]
ARI uses a subscription based model for events. Quoting from the documentation on the [wiki](https://wiki.asterisk.org/wiki/display/AST/Introduction+to+ARI+and+Channels): > > Resources in Asterisk do not, by default, send events about themselves to a connected ARI application. In order to get events about resources, one of three things must occur: > > > 1. The resource must be a channel that entered into a Stasis dialplan application. A subscription is implicitly created in this case. The > subscription is implicitly destroyed when the channel leaves the > Stasis dialplan application. > 2. While a channel is in a Stasis dialplan application, the channel may interact with other resources - such as a bridge. While channels > interact with the resource, a subscription is made to that resource. > When no more channels in a Stasis dialplan application are interacting > with the resource, the implicit subscription is destroyed. > 3. At any time, an ARI application may make a subscription to a resource in Asterisk through application operations. While that > resource exists, the ARI application owns the subscription. > > > So, the reason you get events about a channel over your ARI WebSocket is because it went into the Stasis dialplan application. That isn't, however, the only way to get events. If you're interested in events from other event sources, you can subscribe to those resources using the [applications](https://wiki.asterisk.org/wiki/display/AST/Asterisk+13+Applications+REST+API#Asterisk13ApplicationsRESTAPI-subscribe) resource. For example, if I wanted to receive all events that were in relation to PJSIP endpoint "Alice", I would subscribe using the following: ``` POST https://localhost:8080/ari/applications/my_app/subscription?eventSource=endpoint:PJSIP%2FAlice ``` Note that subscriptions to endpoints implicitly subscribe you to all channels that are created for that endpoint. 
If you want to subscribe to all endpoints of a particular technology, you can also subscribe to the resource itself: ``` POST https://localhost:8080/ari/applications/my_app/subscription?eventSource=endpoint:PJSIP ```
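The `%2F` in the example URL is just the URL encoding of the `/` in `PJSIP/Alice`. A small sketch of building that query string with the standard library — the host, port, app name, and endpoint are placeholders taken from the example above:

```python
from urllib.parse import quote

app = "my_app"
event_source = "endpoint:PJSIP/Alice"  # placeholder endpoint from the example

# safe=":" keeps the colon separator but percent-encodes the slash as %2F.
url = ("https://localhost:8080/ari/applications/%s/subscription?eventSource=%s"
       % (app, quote(event_source, safe=":")))
print(url)
```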
For more clarity regarding what Matt Jordan has already provided, here's an example of doing what he suggests with [ari-py](https://github.com/asterisk/ari-py): ``` import ari import logging logging.basicConfig(level=logging.ERROR) client = ari.connect('http://localhost:8088', 'username', 'password') postRequest=client.applications.subscribe(applicationName=["NameOfAppThatWillReapThisEvent-ThisAppShouldBeRunning"], eventSource="endpoint:PJSIP/alice") print postRequest ```
28,064,563
I am using distutils (setup.py) to create rpm-packages from my python projects. Now, one of my projects which had a very specific task (say png-creation) is moved to a more general project (image-toolkit). 1. Is there a way to tell the user that the old package (png-creation) is obsolete when he/she installs the new package (image-toolkit). 2. Is there a way to make a new version of the old package (png-creation) which tells the user that he/she should use the new package (image-toolkit) instead? These are two different scenarios from which the first one would be my favorite. In both scenarios I assume that a user has installed my package (png-creation) with his package manager. In the first (my favorite) scenario the following would happen: * The user runs an update with his package manager. * The package manager recognizes that png-creation is obsolete and that image-toolkit has to be installed instead. So the package manger removes png-creation and installs image-toolkit. If this scenario is not possible, the second one would be: * I tell my users that they have to install image-toolkit. * The user runs install image-toolkit with his package manager. * The package manager recognizes that png-creation is not needed anymore and removes it.
2015/01/21
[ "https://Stackoverflow.com/questions/28064563", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4034527/" ]
ARI uses a subscription based model for events. Quoting from the documentation on the [wiki](https://wiki.asterisk.org/wiki/display/AST/Introduction+to+ARI+and+Channels): > > Resources in Asterisk do not, by default, send events about themselves to a connected ARI application. In order to get events about resources, one of three things must occur: > > > 1. The resource must be a channel that entered into a Stasis dialplan application. A subscription is implicitly created in this case. The > subscription is implicitly destroyed when the channel leaves the > Stasis dialplan application. > 2. While a channel is in a Stasis dialplan application, the channel may interact with other resources - such as a bridge. While channels > interact with the resource, a subscription is made to that resource. > When no more channels in a Stasis dialplan application are interacting > with the resource, the implicit subscription is destroyed. > 3. At any time, an ARI application may make a subscription to a resource in Asterisk through application operations. While that > resource exists, the ARI application owns the subscription. > > > So, the reason you get events about a channel over your ARI WebSocket is because it went into the Stasis dialplan application. That isn't, however, the only way to get events. If you're interested in events from other event sources, you can subscribe to those resources using the [applications](https://wiki.asterisk.org/wiki/display/AST/Asterisk+13+Applications+REST+API#Asterisk13ApplicationsRESTAPI-subscribe) resource. For example, if I wanted to receive all events that were in relation to PJSIP endpoint "Alice", I would subscribe using the following: ``` POST https://localhost:8080/ari/applications/my_app/subscription?eventSource=endpoint:PJSIP%2FAlice ``` Note that subscriptions to endpoints implicitly subscribe you to all channels that are created for that endpoint. 
If you want to subscribe to all endpoints of a particular technology, you can also subscribe to the resource itself: ``` POST https://localhost:8080/ari/applications/my_app/subscription?eventSource=endpoint:PJSIP ```
ws://(host):8088/ari/events?app=dialer&subscribeAll=true Adding `subscribeAll=true` does what you want =)
28,064,563
I am using distutils (setup.py) to create rpm-packages from my python projects. Now, one of my projects which had a very specific task (say png-creation) is moved to a more general project (image-toolkit). 1. Is there a way to tell the user that the old package (png-creation) is obsolete when he/she installs the new package (image-toolkit). 2. Is there a way to make a new version of the old package (png-creation) which tells the user that he/she should use the new package (image-toolkit) instead? These are two different scenarios from which the first one would be my favorite. In both scenarios I assume that a user has installed my package (png-creation) with his package manager. In the first (my favorite) scenario the following would happen: * The user runs an update with his package manager. * The package manager recognizes that png-creation is obsolete and that image-toolkit has to be installed instead. So the package manger removes png-creation and installs image-toolkit. If this scenario is not possible, the second one would be: * I tell my users that they have to install image-toolkit. * The user runs install image-toolkit with his package manager. * The package manager recognizes that png-creation is not needed anymore and removes it.
2015/01/21
[ "https://Stackoverflow.com/questions/28064563", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4034527/" ]
ARI uses a subscription based model for events. Quoting from the documentation on the [wiki](https://wiki.asterisk.org/wiki/display/AST/Introduction+to+ARI+and+Channels): > > Resources in Asterisk do not, by default, send events about themselves to a connected ARI application. In order to get events about resources, one of three things must occur: > > > 1. The resource must be a channel that entered into a Stasis dialplan application. A subscription is implicitly created in this case. The > subscription is implicitly destroyed when the channel leaves the > Stasis dialplan application. > 2. While a channel is in a Stasis dialplan application, the channel may interact with other resources - such as a bridge. While channels > interact with the resource, a subscription is made to that resource. > When no more channels in a Stasis dialplan application are interacting > with the resource, the implicit subscription is destroyed. > 3. At any time, an ARI application may make a subscription to a resource in Asterisk through application operations. While that > resource exists, the ARI application owns the subscription. > > > So, the reason you get events about a channel over your ARI WebSocket is because it went into the Stasis dialplan application. That isn't, however, the only way to get events. If you're interested in events from other event sources, you can subscribe to those resources using the [applications](https://wiki.asterisk.org/wiki/display/AST/Asterisk+13+Applications+REST+API#Asterisk13ApplicationsRESTAPI-subscribe) resource. For example, if I wanted to receive all events that were in relation to PJSIP endpoint "Alice", I would subscribe using the following: ``` POST https://localhost:8080/ari/applications/my_app/subscription?eventSource=endpoint:PJSIP%2FAlice ``` Note that subscriptions to endpoints implicitly subscribe you to all channels that are created for that endpoint. 
If you want to subscribe to all endpoints of a particular technology, you can also subscribe to the resource itself: ``` POST https://localhost:8080/ari/applications/my_app/subscription?eventSource=endpoint:PJSIP ```
May be help someone: Subscribe to all events on channels, bridge and endpoints ``` POST http://localhost:8088/ari/applications/appName/subscription?api_key=user:password&eventSource=channel:,bridge:,endpoint: ``` Unsubscribe ``` DELETE http://localhost:8088/ari/applications/appName/subscription?api_key=user:password&eventSource=channel:__AST_CHANNEL_ALL_TOPIC,bridge:__AST_BRIDGE_ALL_TOPIC,endpoint:__AST_ENDPOINT_ALL_TOPIC ```
28,064,563
I am using distutils (setup.py) to create rpm-packages from my python projects. Now, one of my projects which had a very specific task (say png-creation) is moved to a more general project (image-toolkit). 1. Is there a way to tell the user that the old package (png-creation) is obsolete when he/she installs the new package (image-toolkit). 2. Is there a way to make a new version of the old package (png-creation) which tells the user that he/she should use the new package (image-toolkit) instead? These are two different scenarios from which the first one would be my favorite. In both scenarios I assume that a user has installed my package (png-creation) with his package manager. In the first (my favorite) scenario the following would happen: * The user runs an update with his package manager. * The package manager recognizes that png-creation is obsolete and that image-toolkit has to be installed instead. So the package manger removes png-creation and installs image-toolkit. If this scenario is not possible, the second one would be: * I tell my users that they have to install image-toolkit. * The user runs install image-toolkit with his package manager. * The package manager recognizes that png-creation is not needed anymore and removes it.
2015/01/21
[ "https://Stackoverflow.com/questions/28064563", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4034527/" ]
ws://(host):8088/ari/events?app=dialer&subscribeAll=true Adding `subscribeAll=true` does what you want =)
For more clarity regarding what Matt Jordan has already provided, here's an example of doing what he suggests with [ari-py](https://github.com/asterisk/ari-py): ``` import ari import logging logging.basicConfig(level=logging.ERROR) client = ari.connect('http://localhost:8088', 'username', 'password') postRequest=client.applications.subscribe(applicationName=["NameOfAppThatWillReapThisEvent-ThisAppShouldBeRunning"], eventSource="endpoint:PJSIP/alice") print postRequest ```
28,064,563
I am using distutils (setup.py) to create rpm-packages from my python projects. Now, one of my projects which had a very specific task (say png-creation) is moved to a more general project (image-toolkit). 1. Is there a way to tell the user that the old package (png-creation) is obsolete when he/she installs the new package (image-toolkit). 2. Is there a way to make a new version of the old package (png-creation) which tells the user that he/she should use the new package (image-toolkit) instead? These are two different scenarios from which the first one would be my favorite. In both scenarios I assume that a user has installed my package (png-creation) with his package manager. In the first (my favorite) scenario the following would happen: * The user runs an update with his package manager. * The package manager recognizes that png-creation is obsolete and that image-toolkit has to be installed instead. So the package manger removes png-creation and installs image-toolkit. If this scenario is not possible, the second one would be: * I tell my users that they have to install image-toolkit. * The user runs install image-toolkit with his package manager. * The package manager recognizes that png-creation is not needed anymore and removes it.
2015/01/21
[ "https://Stackoverflow.com/questions/28064563", "https://Stackoverflow.com", "https://Stackoverflow.com/users/4034527/" ]
ws://(host):8088/ari/events?app=dialer&subscribeAll=true Adding `subscribeAll=true` does what you want =)
May be help someone: Subscribe to all events on channels, bridge and endpoints ``` POST http://localhost:8088/ari/applications/appName/subscription?api_key=user:password&eventSource=channel:,bridge:,endpoint: ``` Unsubscribe ``` DELETE http://localhost:8088/ari/applications/appName/subscription?api_key=user:password&eventSource=channel:__AST_CHANNEL_ALL_TOPIC,bridge:__AST_BRIDGE_ALL_TOPIC,endpoint:__AST_ENDPOINT_ALL_TOPIC ```
40,282,812
I have a data in mongoDB, I want to retrieve all the values of a key `"category"` using python code. I have tried several ways but in every case I have to give the "value" to retrieve. Any suggestions would be appreciated. ``` { id = "my_id1" tags: [tag1, tag2, tag3], category: "movie", }, { id = "my_id2" tags: [tag3, tag6, tag9], category: "tv", }, { id = "my_id3" tags: [tag2, tag6, tag8], category: "movie", } ``` I want the output as ``` category: "movie" category: "tv" category: "movie" ```
2016/10/27
[ "https://Stackoverflow.com/questions/40282812", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6858122/" ]
This should work: ``` db.test.find({},{"category":1}); ```
Pymongo's `distinct()` method returns a list of the distinct values associated with a key across all documents in a collection. The following code: ``` db.collection.distinct('category') ``` should return the following list: ``` ['movie', 'tv'] ```
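The deduplicating behavior of `distinct()` can be mimicked in plain Python over the question's documents (first-seen order is preserved here; MongoDB itself makes no ordering promise for `distinct`):

```python
docs = [
    {"id": "my_id1", "category": "movie"},
    {"id": "my_id2", "category": "tv"},
    {"id": "my_id3", "category": "movie"},
]

# dict.fromkeys deduplicates while keeping first-seen order.
distinct_categories = list(dict.fromkeys(d["category"] for d in docs))
print(distinct_categories)  # ['movie', 'tv']
```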
62,657,673
alright so I've been working on some program and I need to send emails from my gmail account.. so I wrote a code (irrelevant, it works) however the mails are not sent until I approve the captcha.. [captcha url](https://accounts.google.com/b/0/DisplayUnlockCaptcha) and then this solution only works once, What should I do to make it work as I want? some details: python3, smtp-module, ubuntu server built on aws.
2020/06/30
[ "https://Stackoverflow.com/questions/62657673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13046336/" ]
``` using System.Collections; using System.Collections.Generic; using UnityEngine; public class CoroutineController : MonoBehaviour { static CoroutineController _singleton; static Dictionary<string,IEnumerator> _routines = new Dictionary<string,IEnumerator>(100); [RuntimeInitializeOnLoadMethod( RuntimeInitializeLoadType.BeforeSceneLoad )] static void InitializeType () { _singleton = new GameObject($"#{nameof(CoroutineController)}").AddComponent<CoroutineController>(); DontDestroyOnLoad( _singleton ); } public static Coroutine Start ( IEnumerator routine ) => _singleton.StartCoroutine( routine ); public static Coroutine Start ( IEnumerator routine , string id ) { var coroutine = _singleton.StartCoroutine( routine ); if( !_routines.ContainsKey(id) ) _routines.Add( id , routine ); else { _singleton.StopCoroutine( _routines[id] ); _routines[id] = routine; } return coroutine; } public static void Stop ( IEnumerator routine ) => _singleton.StopCoroutine( routine ); public static void Stop ( string id ) { if( _routines.TryGetValue(id,out var routine) ) { _singleton.StopCoroutine( routine ); _routines.Remove( id ); } else Debug.LogWarning($"coroutine '{id}' not found"); } public static void StopAll () => _singleton.StopAllCoroutines(); } ``` Then: ``` CoroutineController.Start( Test() ); ``` You can also stop specific coroutines here by giving them labels: ``` CoroutineController.Start( Test() , "just a test" ); // <few moments later, meme> CoroutineController.Stop( "just a test" ); ```
My solution to starting the Coroutines from places that can't do this is making a Singleton CoroutineManager. I then use this CoroutineManager to invoke these Coroutines from places like ScriptableObjects. You can also use it to cache WaitForEndOfFrame or WaitForFixedUpdate so you don't need to create new ones every time.
62,657,673
Alright, so I've been working on a program and I need to send emails from my Gmail account. I wrote the code (irrelevant here, it works); however, the mails are not sent until I approve the captcha at the [captcha url](https://accounts.google.com/b/0/DisplayUnlockCaptcha), and even then this solution only works once. What should I do to make it work as I want? Some details: python3, smtp module, Ubuntu server built on AWS.
2020/06/30
[ "https://Stackoverflow.com/questions/62657673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13046336/" ]
``` using System.Collections; using System.Collections.Generic; using UnityEngine; public class CoroutineController : MonoBehaviour { static CoroutineController _singleton; static Dictionary<string,IEnumerator> _routines = new Dictionary<string,IEnumerator>(100); [RuntimeInitializeOnLoadMethod( RuntimeInitializeLoadType.BeforeSceneLoad )] static void InitializeType () { _singleton = new GameObject($"#{nameof(CoroutineController)}").AddComponent<CoroutineController>(); DontDestroyOnLoad( _singleton ); } public static Coroutine Start ( IEnumerator routine ) => _singleton.StartCoroutine( routine ); public static Coroutine Start ( IEnumerator routine , string id ) { var coroutine = _singleton.StartCoroutine( routine ); if( !_routines.ContainsKey(id) ) _routines.Add( id , routine ); else { _singleton.StopCoroutine( _routines[id] ); _routines[id] = routine; } return coroutine; } public static void Stop ( IEnumerator routine ) => _singleton.StopCoroutine( routine ); public static void Stop ( string id ) { if( _routines.TryGetValue(id,out var routine) ) { _singleton.StopCoroutine( routine ); _routines.Remove( id ); } else Debug.LogWarning($"coroutine '{id}' not found"); } public static void StopAll () => _singleton.StopAllCoroutines(); } ``` Then: ``` CoroutineController.Start( Test() ); ``` You can also stop specific coroutines here by giving them labels: ``` CoroutineController.Start( Test() , "just a test" ); // <few moments later, meme> CoroutineController.Stop( "just a test" ); ```
To better understand the concept look at this absolutely bare-minimum version of CoroutineController class. It's just a field and a method, that's all: ``` using UnityEngine; public class CoroutinePawn : MonoBehaviour { public static CoroutinePawn Instance { get; private set; } [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.BeforeSceneLoad)] static void InitializeType () { Instance = new GameObject($"#{nameof(CoroutinePawn)}").AddComponent<CoroutinePawn>(); DontDestroyOnLoad( Instance ); } } ``` And because this `Instance` static field gives you a reference to an existing `MonoBehaviour` (GameObject) - you can start coroutines manually from anywhere: ``` CoroutinePawn.Instance.StartCoroutine( Test() ); ```
62,657,673
Alright, so I've been working on a program and I need to send emails from my Gmail account. I wrote the code (irrelevant here, it works); however, the mails are not sent until I approve the captcha at the [captcha url](https://accounts.google.com/b/0/DisplayUnlockCaptcha), and even then this solution only works once. What should I do to make it work as I want? Some details: python3, smtp module, Ubuntu server built on AWS.
2020/06/30
[ "https://Stackoverflow.com/questions/62657673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/13046336/" ]
My solution to starting the Coroutines from places that can't do this is making a Singleton CoroutineManager. I then use this CoroutineManager to invoke these Coroutines from places like ScriptableObjects. You can also use it to cache WaitForEndOfFrame or WaitForFixedUpdate so you don't need to create new ones every time.
To better understand the concept look at this absolutely bare-minimum version of CoroutineController class. It's just a field and a method, that's all: ``` using UnityEngine; public class CoroutinePawn : MonoBehaviour { public static CoroutinePawn Instance { get; private set; } [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.BeforeSceneLoad)] static void InitializeType () { Instance = new GameObject($"#{nameof(CoroutinePawn)}").AddComponent<CoroutinePawn>(); DontDestroyOnLoad( Instance ); } } ``` And because this `Instance` static field gives you a reference to an existing `MonoBehaviour` (GameObject) - you can start coroutines manually from anywhere: ``` CoroutinePawn.Instance.StartCoroutine( Test() ); ```
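For the Gmail/captcha question above, the usual fix is to authenticate with a Google app password instead of the account password, so the DisplayUnlockCaptcha page never has to be approved. A minimal `smtplib` sketch; every address and credential below is a placeholder, and app passwords are only available on accounts with 2-step verification enabled:

```python
# Hedged sketch: send mail through Gmail SMTP with an app password.
# All addresses and credentials below are placeholders.
import smtplib
from email.message import EmailMessage

def build_message(from_addr, to_addr, subject, body):
    """Assemble a simple plain-text email."""
    msg = EmailMessage()
    msg['From'] = from_addr
    msg['To'] = to_addr
    msg['Subject'] = subject
    msg.set_content(body)
    return msg

def send_via_gmail(user, app_password, msg):
    """Log in over SSL with the app password and send the message."""
    with smtplib.SMTP_SSL('smtp.gmail.com', 465) as smtp:
        smtp.login(user, app_password)  # app password, not the account password
        smtp.send_message(msg)
```

With an app password there is no captcha/unlock step to re-approve after each login.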
53,058,052
I am trying to create an executable Python file using PyInstaller, but while loading hooks it shows an error like this: ``` 24021 INFO: Removing import of PySide from module PIL.ImageQt 24021 INFO: Loading module hook "hook-pytz.py"... 24506 INFO: Loading module hook "hook-encodings.py"... 24600 INFO: Loading module hook "hook-pandas.py"... 25037 INFO: Loading module hook "hook-lib2to3.py"... 25131 INFO: Loading module hook "hook-lxml.etree.py"... 25131 INFO: Loading module hook "hook-pycparser.py"... 25396 INFO: Loading module hook "hook-setuptools.py"... 25506 WARNING: Hidden import "setuptools.msvc" not found! 25506 INFO: Loading module hook "hook-distutils.py"... 25521 INFO: Loading module hook "hook-nltk.py"... Unable to find "C:\nltk_data" when adding binary and data files. ``` I have tried copying nltk\_data from AppData to the C drive, but I get the same error.
2018/10/30
[ "https://Stackoverflow.com/questions/53058052", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9921123/" ]
I have been working on this issue for a few days now and don't have hair left. For some reason nltk and pyinstaller do not work well together. So my first solution to this issue is to use something other than nltk, if it is possible to code the solution without nltk. If you must use NLTK, I solved this by forcing the nltk\_data path into datas. 1. Locate your nltk\_data path. Mine was in C:\Users\user-name\AppData\Roaming\nltk\_data 2. In hook-nltk.py (within the PyInstaller directory) I commented out and added lines to look like this. ``` import nltk from PyInstaller.utils.hooks import collect_data_files datas = collect_data_files('nltk', False) ''' for p in nltk.data.path: datas.append((p, "nltk_data")) ''' datas.append(("C:\\Users\\nedhu\\AppData\\Roaming\\nltk_data", "nltk_data")) hiddenimports = ["nltk.chunk.named_entity"] ``` There is a deeper problem with pyinstaller looping through the datas list of paths, but this solution works as a patch.
I solved the problem by editing the PyInstaller nltk hook. After much research, I decided to go it alone in the code structure. I solved my problem by commenting out the loop, leaving the lines as: `datas=[]` `'''for p in nltk.data.path: datas.append((p, "nltk_data"))'''` `hiddenimports = ["nltk.chunk.named_entity"]` What's more, you need to rename the file pyi\_rth\_\_nltk.cpython-36.pyc to pyi\_rth\_nltk.cpython-36.pyc; the original name has one extra underscore. Be careful with the Python version.
53,058,052
I am trying to create executable python file using pyinstaller, but while loading hooks, it shows error like this, ``` 24021 INFO: Removing import of PySide from module PIL.ImageQt 24021 INFO: Loading module hook "hook-pytz.py"... 24506 INFO: Loading module hook "hook-encodings.py"... 24600 INFO: Loading module hook "hook-pandas.py"... 25037 INFO: Loading module hook "hook-lib2to3.py"... 25131 INFO: Loading module hook "hook-lxml.etree.py"... 25131 INFO: Loading module hook "hook-pycparser.py"... 25396 INFO: Loading module hook "hook-setuptools.py"... 25506 WARNING: Hidden import "setuptools.msvc" not found! 25506 INFO: Loading module hook "hook-distutils.py"... 25521 INFO: Loading module hook "hook-nltk.py"... Unable to find "C:\nltk_data" when adding binary and data files. ``` I have tried coping nltk\_data from Appdata to C drive. but same error.
2018/10/30
[ "https://Stackoverflow.com/questions/53058052", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9921123/" ]
[This answer](https://stackoverflow.com/a/54760437/8379443) worked for me... it modifies the code in `hook-nltk.py` to only include the path if it exists. `hook-nltk.py` can be found in your PyInstaller location within the hooks folder (something like <'path-to-python-installation'>\Lib\site-packages\PyInstaller\hooks)
I solved the problem by editing the PyInstaller nltk hook. After much research, I decided to go it alone in the code structure. I solved my problem by commenting out the loop, leaving the lines as: `datas=[]` `'''for p in nltk.data.path: datas.append((p, "nltk_data"))'''` `hiddenimports = ["nltk.chunk.named_entity"]` What's more, you need to rename the file pyi\_rth\_\_nltk.cpython-36.pyc to pyi\_rth\_nltk.cpython-36.pyc; the original name has one extra underscore. Be careful with the Python version.
53,058,052
I am trying to create executable python file using pyinstaller, but while loading hooks, it shows error like this, ``` 24021 INFO: Removing import of PySide from module PIL.ImageQt 24021 INFO: Loading module hook "hook-pytz.py"... 24506 INFO: Loading module hook "hook-encodings.py"... 24600 INFO: Loading module hook "hook-pandas.py"... 25037 INFO: Loading module hook "hook-lib2to3.py"... 25131 INFO: Loading module hook "hook-lxml.etree.py"... 25131 INFO: Loading module hook "hook-pycparser.py"... 25396 INFO: Loading module hook "hook-setuptools.py"... 25506 WARNING: Hidden import "setuptools.msvc" not found! 25506 INFO: Loading module hook "hook-distutils.py"... 25521 INFO: Loading module hook "hook-nltk.py"... Unable to find "C:\nltk_data" when adding binary and data files. ``` I have tried coping nltk\_data from Appdata to C drive. but same error.
2018/10/30
[ "https://Stackoverflow.com/questions/53058052", "https://Stackoverflow.com", "https://Stackoverflow.com/users/9921123/" ]
[This answer](https://stackoverflow.com/a/54760437/8379443) worked for me... it modifies the code in `hook-nltk.py` to only include the path if it exists. `hook-nltk.py` can be found in your PyInstaller location within the hooks folder (something like <'path-to-python-installation'>\Lib\site-packages\PyInstaller\hooks)
I have been working on this issue for a few days now and don't have hair left. For some reason nltk and pyinstaller do not work well together. So my first solution to this issue is to use something other than nltk, if it is possible to code the solution without nltk. If you must use NLTK, I solved this by forcing the nltk\_data path into datas. 1. Locate your nltk\_data path. Mine was in C:\Users\user-name\AppData\Roaming\nltk\_data 2. In hook-nltk.py (within the PyInstaller directory) I commented out and added lines to look like this. ``` import nltk from PyInstaller.utils.hooks import collect_data_files datas = collect_data_files('nltk', False) ''' for p in nltk.data.path: datas.append((p, "nltk_data")) ''' datas.append(("C:\\Users\\nedhu\\AppData\\Roaming\\nltk_data", "nltk_data")) hiddenimports = ["nltk.chunk.named_entity"] ``` There is a deeper problem with pyinstaller looping through the datas list of paths, but this solution works as a patch.
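The fix the answers above describe (only adding nltk_data paths that actually exist on disk, so PyInstaller never stops on a missing `C:\nltk_data`) can be sketched as a small filter. This is an illustration, not the literal `hook-nltk.py` contents; in the real hook the candidate list would be `nltk.data.path`:

```python
# Hedged sketch of the hook-nltk.py patch: keep only data dirs present on disk.
import os

def existing_data_dirs(candidate_paths):
    """Return (path, 'nltk_data') entries for paths that exist."""
    return [(p, 'nltk_data') for p in candidate_paths if os.path.exists(p)]
```

In the hook this would replace the unconditional loop, e.g. `datas += existing_data_dirs(nltk.data.path)`.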
28,616,942
I am trying to upload video files to a bucket on the S3 server from an Android app, using signed URLs generated by a server-side application (coded in Python). We are making a PUT request to the signed URL but we are getting a `connection reset by peer` exception. But when I try the same URL in the POSTMAN REST client I get a success message. Any help will be appreciated.
2015/02/19
[ "https://Stackoverflow.com/questions/28616942", "https://Stackoverflow.com", "https://Stackoverflow.com/users/2562861/" ]
I did this using the [Retrofit](http://square.github.io/retrofit/) HTTP client library; it successfully uploaded the file to the Amazon S3 server. Code: ``` public interface UploadService { String BASE_URL = "https://bucket.s3.amazonaws.com/folder"; /** * @param url :signed s3 url string after 'BASE_URL'. * @param file :file to upload,( usage: new TypedFile("mp4", videoFile);. * @param cb :callback. */ @PUT("/{url}") void uploadFile(@Path(value = "url", encode=false) String url, @Body() TypedFile file, Callback<String> cb); } ``` The service class: ``` public final class ServiceGenerator { private ServiceGenerator() { } public static <S> S createService(Class<S> serviceClass, String baseUrl) { return createService(serviceClass, baseUrl, null, null); } public static <S> S createService(Class<S> serviceClass, String baseUrl, final String accessToken, final String tokenType) { class MyErrorHandler implements ErrorHandler { @Override public Throwable handleError(RetrofitError cause) { return cause; } } Gson gson = new GsonBuilder() .setFieldNamingPolicy(FieldNamingPolicy.LOWER_CASE_WITH_UNDERSCORES) .registerTypeAdapter(Date.class, new DateTypeAdapter()) .disableHtmlEscaping() .create(); RestAdapter.Builder builder = new RestAdapter.Builder() .setEndpoint(baseUrl) .setClient(new OkClient(new OkHttpClient())) .setErrorHandler(new MyErrorHandler()) .setLogLevel(RestAdapter.LogLevel.FULL) .setConverter(new GsonConverter(gson)); if (accessToken != null) { builder.setRequestInterceptor(new RequestInterceptor() { @Override public void intercept(RequestFacade request) { request.addHeader("Accept", "application/json;versions=1"); request.addHeader("Authorization", tokenType + " " + accessToken); } }); } RestAdapter adapter = builder.build(); return adapter.create(serviceClass); } } ``` And usage: ``` UploadService uploadService = ServiceGenerator.createService(UploadService.class, UploadService.BASE_URL); uploadService.uploadFile(remUrl, typedFile, new CallbackInstance()); ```
Use a dynamic URL instead of providing the base URL: use @Url instead of @Path and pass the complete URI (encode=false is the default). For example: `@Multipart @PUT @Headers("x-amz-acl:public-read") Call<Void> uploadFile(@Url String url, @Header("Content-Type") String contentType, @Part MultipartBody.Part part);`
41,363,888
I could send mail using the following code ``` E:\Python\django-test\LYYDownloaderServer>python manage.py shell Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:01:18) [MSC v.1900 32 bit (In tel)] on win32 Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> from django.core.mail import send_mail >>> >>> send_mail( ... 'Subject here', ... 'Here is the message.', ... 'redstone-cold@163.com', ... ['2281570025@qq.com'], ... fail_silently=False, ... ) 1 >>> ``` According to the [doc](https://docs.djangoproject.com/en/1.10/howto/error-reporting/#server-errors): > > When DEBUG is False, Django will email the users listed in the ADMINS > setting whenever your code raises an unhandled exception and results > in an internal server error (HTTP status code 500). This gives the > administrators immediate notification of any errors. The ADMINS will > get a description of the error, a complete Python traceback, and > details about the HTTP request that caused the error. > > > but in my case, Django doesn't email reporting an internal server error (HTTP status code 500) [![enter image description here](https://i.stack.imgur.com/zQFGZ.png)](https://i.stack.imgur.com/zQFGZ.png) what's the problem? please help fix the problem settings.py ``` """ Django settings for LYYDownloaderServer project. Generated by 'django-admin startproject' using Django 1.9.1. For more information on this file, see https://docs.djangoproject.com/en/1.9/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/1.9/ref/settings/ """ import os ADMINS = [('Philip', 'r234327894@163.com'), ('Philip2', '768799875@qq.com')] EMAIL_HOST = 'smtp.163.com' # 'localhost'#'smtp.139.com' # EMAIL_PORT = 25 # EMAIL_USE_TLS = True EMAIL_HOST_USER = 'r234327894@163.com' # '13529123633@139.com' EMAIL_HOST_PASSWORD = '******' # DEFAULT_FROM_EMAIL = 'r234327894@163.com' # Build paths inside the project like this: os.path.join(BASE_DIR, ...) 
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = 's4(z8qzt$=x(2t(ok5bb58_!u==+x97t0vpa=*8bb_68baekkh' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = False ALLOWED_HOSTS = ['127.0.0.1']#, '.0letter.com' # Application definition INSTALLED_APPS = [ 'VideoParser.apps.VideoparserConfig', 'FileHost.apps.FilehostConfig', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ] MIDDLEWARE_CLASSES = [ 'django.middleware.common.BrokenLinkEmailsMiddleware', 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'LYYDownloaderServer.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'LYYDownloaderServer.wsgi.application' # Database # https://docs.djangoproject.com/en/1.9/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } # Password validation # https://docs.djangoproject.com/en/1.9/ref/settings/#auth-password-validators 
AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization # https://docs.djangoproject.com/en/1.9/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/1.9/howto/static-files/ STATIC_URL = '/static/' ``` the start of views.py ``` from django.http import JsonResponse, HttpResponse import logging import m3u8 import os from VideoParser.parsers.CommonParsers import * import urllib.parse import hashlib from datetime import datetime, timedelta, date from django.views.decorators.csrf import csrf_exempt from django.db import IntegrityError from VideoParser.models import * from importlib import import_module # print('-------------views --------') FILES_DIR = 'files' # specialHostName2module = {'56': 'v56'} logging.basicConfig(format='%(asctime)s %(message)s', datefmt='%m/%d %I:%M:%S %p', level=logging.ERROR, handlers=[logging.handlers.RotatingFileHandler(filename=os.path.join(FILES_DIR, 'LYYDownloaderServer.log'), maxBytes=1024 * 1024, backupCount=1)]) ... ```
2016/12/28
[ "https://Stackoverflow.com/questions/41363888", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1485853/" ]
First, it doesn't matter if you were able to send the mail using the console, but if you received the mail. I assume you did. Second, it's best to try with exactly the same email address in the console as the one set in the `ADMINS`, just to be sure. Finally, the sender address might also matter. The default is "root@localhost", and while "root" is OK, "localhost" is not, and some mail servers may refuse the email. Specify another sender email address by setting the `SERVER_EMAIL` Django setting.
Django sends admin emails on error using the logging system. As I can see from your `views.py`, you are changing the logging settings. This can be the cause of the problem, as you cleared the Django admin handler `mail_admins`. For more information check the [django documentation](https://docs.djangoproject.com/en/1.10/topics/logging/#topic-logging-parts-handlers)
41,363,888
I could send mail using the following code ``` E:\Python\django-test\LYYDownloaderServer>python manage.py shell Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:01:18) [MSC v.1900 32 bit (In tel)] on win32 Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) >>> from django.core.mail import send_mail >>> >>> send_mail( ... 'Subject here', ... 'Here is the message.', ... 'redstone-cold@163.com', ... ['2281570025@qq.com'], ... fail_silently=False, ... ) 1 >>> ``` According to the [doc](https://docs.djangoproject.com/en/1.10/howto/error-reporting/#server-errors): > > When DEBUG is False, Django will email the users listed in the ADMINS > setting whenever your code raises an unhandled exception and results > in an internal server error (HTTP status code 500). This gives the > administrators immediate notification of any errors. The ADMINS will > get a description of the error, a complete Python traceback, and > details about the HTTP request that caused the error. > > > but in my case, Django doesn't email reporting an internal server error (HTTP status code 500) [![enter image description here](https://i.stack.imgur.com/zQFGZ.png)](https://i.stack.imgur.com/zQFGZ.png) what's the problem? please help fix the problem settings.py ``` """ Django settings for LYYDownloaderServer project. Generated by 'django-admin startproject' using Django 1.9.1. For more information on this file, see https://docs.djangoproject.com/en/1.9/topics/settings/ For the full list of settings and their values, see https://docs.djangoproject.com/en/1.9/ref/settings/ """ import os ADMINS = [('Philip', 'r234327894@163.com'), ('Philip2', '768799875@qq.com')] EMAIL_HOST = 'smtp.163.com' # 'localhost'#'smtp.139.com' # EMAIL_PORT = 25 # EMAIL_USE_TLS = True EMAIL_HOST_USER = 'r234327894@163.com' # '13529123633@139.com' EMAIL_HOST_PASSWORD = '******' # DEFAULT_FROM_EMAIL = 'r234327894@163.com' # Build paths inside the project like this: os.path.join(BASE_DIR, ...) 
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/1.9/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = 's4(z8qzt$=x(2t(ok5bb58_!u==+x97t0vpa=*8bb_68baekkh' # SECURITY WARNING: don't run with debug turned on in production! DEBUG = False ALLOWED_HOSTS = ['127.0.0.1']#, '.0letter.com' # Application definition INSTALLED_APPS = [ 'VideoParser.apps.VideoparserConfig', 'FileHost.apps.FilehostConfig', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ] MIDDLEWARE_CLASSES = [ 'django.middleware.common.BrokenLinkEmailsMiddleware', 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] ROOT_URLCONF = 'LYYDownloaderServer.urls' TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] WSGI_APPLICATION = 'LYYDownloaderServer.wsgi.application' # Database # https://docs.djangoproject.com/en/1.9/ref/settings/#databases DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } # Password validation # https://docs.djangoproject.com/en/1.9/ref/settings/#auth-password-validators 
AUTH_PASSWORD_VALIDATORS = [ { 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', }, { 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', }, { 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', }, { 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', }, ] # Internationalization # https://docs.djangoproject.com/en/1.9/topics/i18n/ LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/1.9/howto/static-files/ STATIC_URL = '/static/' ``` the start of views.py ``` from django.http import JsonResponse, HttpResponse import logging import m3u8 import os from VideoParser.parsers.CommonParsers import * import urllib.parse import hashlib from datetime import datetime, timedelta, date from django.views.decorators.csrf import csrf_exempt from django.db import IntegrityError from VideoParser.models import * from importlib import import_module # print('-------------views --------') FILES_DIR = 'files' # specialHostName2module = {'56': 'v56'} logging.basicConfig(format='%(asctime)s %(message)s', datefmt='%m/%d %I:%M:%S %p', level=logging.ERROR, handlers=[logging.handlers.RotatingFileHandler(filename=os.path.join(FILES_DIR, 'LYYDownloaderServer.log'), maxBytes=1024 * 1024, backupCount=1)]) ... ```
2016/12/28
[ "https://Stackoverflow.com/questions/41363888", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1485853/" ]
First, it doesn't matter if you were able to send the mail using the console, but if you received the mail. I assume you did. Second, it's best to try with exactly the same email address in the console as the one set in the `ADMINS`, just to be sure. Finally, the sender address might also matter. The default is "root@localhost", and while "root" is OK, "localhost" is not, and some mail servers may refuse the email. Specify another sender email address by setting the `SERVER_EMAIL` Django setting.
The [Django doc](https://docs.djangoproject.com/en/1.10/howto/error-reporting/#server-errors) says: In order to send email, EMAIL\_HOST, EMAIL\_HOST\_USER and EMAIL\_HOST\_PASSWORD are at the very least needed, but as I tested, we should also specify SERVER\_EMAIL, and the email is only sent when SERVER\_EMAIL is equal to EMAIL\_HOST\_USER, e.g. ``` EMAIL_HOST = 'smtp.163.com' SERVER_EMAIL = '234327894-cold@163.com' # EMAIL_HOST_USER = '234327894-cold@163.com' # EMAIL_HOST_PASSWORD = '234327894123' # ``` Django uses [AdminEmailHandler](https://docs.djangoproject.com/en/1.10/topics/logging/#django.utils.log.AdminEmailHandler) to send an email to the site admins for each log message it receives. Besides Django, we could also use Python's [logging.handlers.SMTPHandler(mailhost, fromaddr, toaddrs, subject, credentials=None, secure=None, timeout=1.0)](https://docs.python.org/3/library/logging.handlers.html#smtphandler) to do the same. For example, put the following code in `views.py` (do change it to your account); it will email reports of unhandled exceptions that result in an internal server error (HTTP status code 500). ``` import logging logging.basicConfig(format='%(asctime)s %(message)s', datefmt='%m/%d %I:%M:%S %p', handlers=[logging.handlers.SMTPHandler('smtp.163.com', '234327894-cold@163.com', ['234327894-cold@163.com'], 'LYYDownloader Server Exception', credentials=('234327894-cold@163.com', '234327894123'))]) ``` Comparing `AdminEmailHandler` and `SMTPHandler`, I advocate using `SMTPHandler` whenever possible. First, `AdminEmailHandler` is Django-specific, while you can use `SMTPHandler` in any Python program. One thing to take care of: when using `SMTPHandler` in client-side software, some anti-virus software may treat the software as spyware, so you should inform users when your software is about to send email.
Second, I found that email sent by `AdminEmailHandler` contains a bunch of information, while `SMTPHandler` just sends the Python exception information, which makes debugging a bit clearer! Third, if you configure your email within `settings.py` in Django, no exception is thrown even if you have made a mistake in the email configuration, while `SMTPHandler` always throws an exception telling you what went wrong. Cited from <http://redstoneleo.blogspot.com/2016/12/email-reporting-exceptions-and-errors_30.html>
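The first answer's diagnosis is that `logging.basicConfig(...)` in `views.py` reconfigures the root logger and silences Django's `mail_admins` handler. A sketch of the alternative, assuming that diagnosis: attach the rotating file handler to a named module logger instead, leaving the root logger (and Django's handlers) alone. The logger name and file path here are illustrative:

```python
# Hedged sketch: configure a named logger rather than the root logger,
# so Django's own error handlers (e.g. mail_admins) keep working.
import logging
import logging.handlers
import os
import tempfile

log_path = os.path.join(tempfile.gettempdir(), 'LYYDownloaderServer.log')

logger = logging.getLogger('VideoParser.views')  # named logger, not root
logger.setLevel(logging.ERROR)
handler = logging.handlers.RotatingFileHandler(
    filename=log_path, maxBytes=1024 * 1024, backupCount=1)
handler.setFormatter(logging.Formatter(
    fmt='%(asctime)s %(message)s', datefmt='%m/%d %I:%M:%S %p'))
logger.addHandler(handler)
```

Views then call `logger.error(...)` instead of the module-level `logging.error(...)`.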
70,453,702
I am a trader and I want to use the XTB API to access the account. To try to learn Python I found XTBApi and installed it on Windows (python3 -m venv env), but when I enter the command (.\venv\Scripts\activate) it doesn't work: "The specified path could not be found." What do I have to do? Thanks. How can I convert this Linux script to a Windows script: git clone git@github.com:federico123579/XTBApi.git cd XTBApi/ python3 -m venv env . env/bin/activate pip install .
2021/12/22
[ "https://Stackoverflow.com/questions/70453702", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17330089/" ]
`stringr` is fine, but a very good solution exists in base R. ``` x <- head(state.name) x # [1] "Alabama" "Alaska" "Arizona" "Arkansas" "California" "Colorado" substring(x, 5) # [1] "ama" "ka" "ona" "nsas" "fornia" "rado" ```
You may find this useful to rename columns. ```r library(dplyr) library(stringr) df %>% rename_with(str_sub, start = 5L) ``` If you don't want to do it for all of the columns, you can use the `.cols` argument. ```r # like this iris %>% rename_with(str_sub, start = 5L, .cols = starts_with("Sepal")) # or this iris %>% rename_with(str_sub, start = 5L, .cols = 1:2) # or this, and so on... iris %>% rename_with(str_sub, start = 5L, .cols = c("Petal.Length", "Species")) ```
70,453,702
I am a trader and I want to use the XTB API to access the account. To try to learn Python I found XTBApi and installed it on Windows (python3 -m venv env), but when I enter the command (.\venv\Scripts\activate) it doesn't work: "The specified path could not be found." What do I have to do? Thanks. How can I convert this Linux script to a Windows script: git clone git@github.com:federico123579/XTBApi.git cd XTBApi/ python3 -m venv env . env/bin/activate pip install .
2021/12/22
[ "https://Stackoverflow.com/questions/70453702", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17330089/" ]
`stringr` is fine, but a very good solution exists in base R. ``` x <- head(state.name) x # [1] "Alabama" "Alaska" "Arizona" "Arkansas" "California" "Colorado" substring(x, 5) # [1] "ama" "ka" "ona" "nsas" "fornia" "rado" ```
This is not the best way but should be here for offering an alternative with `Base R`. ``` substr(state.name,5,nchar(state.name)) # [1] "ama" "ka" "ona" "nsas" "fornia" "rado" "ecticut" ```
70,453,702
I am a trader and I want to use the XTB API to access the account. To try to learn Python I found XTBApi and installed it on Windows (python3 -m venv env), but when I enter the command (.\venv\Scripts\activate) it doesn't work: "The specified path could not be found." What do I have to do? Thanks. How can I convert this Linux script to a Windows script: git clone git@github.com:federico123579/XTBApi.git cd XTBApi/ python3 -m venv env . env/bin/activate pip install .
2021/12/22
[ "https://Stackoverflow.com/questions/70453702", "https://Stackoverflow.com", "https://Stackoverflow.com/users/17330089/" ]
You may find this useful to rename columns. ```r library(dplyr) library(stringr) df %>% rename_with(str_sub, start = 5L) ``` If you don't want to do it for all of the columns, you can use the `.cols` argument. ```r # like this iris %>% rename_with(str_sub, start = 5L, .cols = starts_with("Sepal")) # or this iris %>% rename_with(str_sub, start = 5L, .cols = 1:2) # or this, and so on... iris %>% rename_with(str_sub, start = 5L, .cols = c("Petal.Length", "Species")) ```
This is not the best way, but it should be here to offer an alternative in base R.

```
substr(state.name, 5, nchar(state.name))
# [1] "ama"     "ka"      "ona"     "nsas"    "fornia"  "rado"    "ecticut"
```
36,730,812
I'm a newbie with the Raspberry Pi and Python coding, and I'm working on a school project. I've already looked for tutorials and examples, but maybe I'm missing something. I want to build a web-server-based GPIO controller, and I'm using Flask for this. To get into it, I've started with this example: just turning an LED on and off by refreshing the page. The problem is that I can't see the response value on the web server side. It's turning the LED on and off, but when I try to show the status online I get an internal server error. I'm including the Python and HTML code. Can you help me solve the problem?

```
from flask import Flask
from flask import render_template
import RPi.GPIO as GPIO

app = Flask(__name__)

GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.OUT)
GPIO.output(4, 1)
status = GPIO.HIGH

@app.route('/')
def readPin():
    global status
    global response
    try:
        if status == GPIO.LOW:
            status = GPIO.HIGH
            print('ON')
            response = "Pin is high"
        else:
            status = GPIO.LOW
            print('OFF')
            response = "Pin is low"
    except:
        response = "Error reading pin"
    GPIO.output(4, status)

    templateData = {
        'title' : 'Status of Pin' + status,
        'response' : response
    }
    return render_template('pin.html', **templateData)

if __name__ == "__main__":
    app.run('192.168.2.5')
```

And basically just this line is on my HTML page.

```
<h1>{{response}}</h1>
```

I think "response" doesn't get a value. What's wrong here?
2016/04/19
[ "https://Stackoverflow.com/questions/36730812", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6227347/" ]
Firstly it helps to run it in debug mode: `app.run(debug=True)` This will help you track down any errors which are being suppressed. Next have a look at the line where you are building the title string: `'title' : 'Status of Pin' + status` If you enable the debug mode, then you should see something saying that an int/bool can't be converted to str implicitly. (Python doesn't know how to add a string and an int/bool). In order to fix this, you should explicitly cast status to a string: `'title' : 'Status of Pin' + str(status)` Or better yet: `'title' : 'Status of Pin: {}'.format(status)`
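The `str()` fix above matters because Python 3 refuses to concatenate a string with a non-string. A minimal standalone illustration, independent of Flask and GPIO (the variable names are made up for the demo):

```python
status = True  # stands in for GPIO.HIGH

# Implicit concatenation raises TypeError:
error_message = ''
try:
    title = 'Status of Pin' + status
except TypeError as exc:
    error_message = str(exc)

# Explicit conversion works:
title = 'Status of Pin: {}'.format(status)
```

After the try/except, `error_message` holds the interpreter's complaint and `title` is built safely.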
Your server was probably throwing an exception when trying to create your dictionary, so the templateData value was being sent empty. Notice in this example the TypeError thrown when trying to concatenate two variables of different types. Wrapping your variable in str(status) will cast the status variable to its string representation before attempting to combine the variables.

```
[root@cloud-ms-1 alan]# cat add.py
a = 'one'
b = 2

print a + b
[root@cloud-ms-1 alan]# python add.py
Traceback (most recent call last):
  File "add.py", line 6, in <module>
    print a + b
TypeError: cannot concatenate 'str' and 'int' objects
[root@cloud-ms-1 alan]# cat add.py
a = 'one'
b = str(2)

print a + b
[root@cloud-ms-1 alan]# python add.py
one2
```
53,480,646
I wrote a Python script which makes a calculation every hour. I run this script with crontab, scheduled for every hour. But there is one more thing to do: additionally, I should make a calculation once a day using the results evaluated every hour. In this context, I defined a thread function which checks whether the current time equals the specified time (15:00, once a day). If it does, the thread function is called and the calculation is made. What I want to ask here is: is this approach applicable? I mean, running the first script every hour using crontab, and calling the second function via the thread function once a day. Is there any other way of doing this?
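A hedged sketch of one alternative to a polling thread: let the hourly cron invocation itself decide whether the daily work is due (`daily_job_due` is a hypothetical helper name, not from the original script). The simplest option of all, though, is a second crontab entry such as `0 15 * * *` pointing at a separate daily script.

```python
from datetime import datetime

def daily_job_due(now, hour=15, minute=0):
    """Return True in the minute the once-a-day calculation should run."""
    return now.hour == hour and now.minute == minute

# Called from the hourly cron job; only the 15:00 run does the daily work.
if daily_job_due(datetime.now()):
    pass  # place the once-a-day calculation here
```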
2018/11/26
[ "https://Stackoverflow.com/questions/53480646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8118659/" ]
Using a jQuery attribute-starts-with selector as follows:

```
$("button[name^=t]").click(function(){
    // process
});
```

The above handler will be invoked whenever a `button` whose name starts with `'t'` is clicked.
Run this snippet. You can read text using prev(). ```js $('.btn').on('click', function(){ alert($( this ).prev().val()); }) ``` ```html <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <div class="post-comment"> <textarea class="comment-box" name="" id="" cols="80" rows="1" spellcheck="false"></textarea> <button class="btn btn-sm btnComment" id="" name=""> <i class="fas fa-redo"></i>1 </button> </div> <div class="post-comment"> <textarea class="comment-box" name="" id="" cols="80" rows="1" spellcheck="false"></textarea> <button class="btn btn-sm btnComment" id="" name=""> <i class="fas fa-redo"></i>2 </button> </div> <div class="post-comment"> <textarea class="comment-box" name="" id="" cols="80" rows="1" spellcheck="false"></textarea> <button class="btn btn-sm btnComment" id="" name="">3 <i class="fas fa-redo"></i> </button> </div> ```
53,480,646
I wrote a Python script which makes a calculation every hour. I run this script with crontab, scheduled for every hour. But there is one more thing to do: additionally, I should make a calculation once a day using the results evaluated every hour. In this context, I defined a thread function which checks whether the current time equals the specified time (15:00, once a day). If it does, the thread function is called and the calculation is made. What I want to ask here is: is this approach applicable? I mean, running the first script every hour using crontab, and calling the second function via the thread function once a day. Is there any other way of doing this?
2018/11/26
[ "https://Stackoverflow.com/questions/53480646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8118659/" ]
No need to add any identifier; just grab the value of the previous element with jQuery using the following code.

```
$(document).on('click','.btnComment',function(){
    var CommentText = $(this).prev().val();
    alert(CommentText);
});
```

Check out the working fiddle [here](http://jsfiddle.net/gokulmaha/whje4vx0/7/)
Run this snippet. You can read text using prev(). ```js $('.btn').on('click', function(){ alert($( this ).prev().val()); }) ``` ```html <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <div class="post-comment"> <textarea class="comment-box" name="" id="" cols="80" rows="1" spellcheck="false"></textarea> <button class="btn btn-sm btnComment" id="" name=""> <i class="fas fa-redo"></i>1 </button> </div> <div class="post-comment"> <textarea class="comment-box" name="" id="" cols="80" rows="1" spellcheck="false"></textarea> <button class="btn btn-sm btnComment" id="" name=""> <i class="fas fa-redo"></i>2 </button> </div> <div class="post-comment"> <textarea class="comment-box" name="" id="" cols="80" rows="1" spellcheck="false"></textarea> <button class="btn btn-sm btnComment" id="" name="">3 <i class="fas fa-redo"></i> </button> </div> ```
53,480,646
I wrote a Python script which makes a calculation every hour. I run this script with crontab, scheduled for every hour. But there is one more thing to do: additionally, I should make a calculation once a day using the results evaluated every hour. In this context, I defined a thread function which checks whether the current time equals the specified time (15:00, once a day). If it does, the thread function is called and the calculation is made. What I want to ask here is: is this approach applicable? I mean, running the first script every hour using crontab, and calling the second function via the thread function once a day. Is there any other way of doing this?
2018/11/26
[ "https://Stackoverflow.com/questions/53480646", "https://Stackoverflow.com", "https://Stackoverflow.com/users/8118659/" ]
Here how you can do it without jquery: ```js const buttons = document.querySelectorAll('.btn') buttons.forEach(button => button.addEventListener('click', (event) => { // init listeners for buttons const value = event.target.parentNode // get to the textarea through the parent (div) .querySelector('textarea').value; console.log(value) })) ``` ```html <div class="post-comment"> <textarea class="comment-box" name="" id="" cols="80" rows="1" spellcheck="false"></textarea> <button class="btn btn-sm btnComment" id="" name=""> <i class="fas fa-redo"></i>1 </button> </div> <div class="post-comment"> <textarea class="comment-box" name="" id="" cols="80" rows="1" spellcheck="false"></textarea> <button class="btn btn-sm btnComment" id="" name=""> <i class="fas fa-redo"></i>2 </button> </div> <div class="post-comment"> <textarea class="comment-box" name="" id="" cols="80" rows="1" spellcheck="false"></textarea> <button class="btn btn-sm btnComment" id="" name="">3 <i class="fas fa-redo"></i> </button> </div> ```
Run this snippet. You can read text using prev(). ```js $('.btn').on('click', function(){ alert($( this ).prev().val()); }) ``` ```html <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <div class="post-comment"> <textarea class="comment-box" name="" id="" cols="80" rows="1" spellcheck="false"></textarea> <button class="btn btn-sm btnComment" id="" name=""> <i class="fas fa-redo"></i>1 </button> </div> <div class="post-comment"> <textarea class="comment-box" name="" id="" cols="80" rows="1" spellcheck="false"></textarea> <button class="btn btn-sm btnComment" id="" name=""> <i class="fas fa-redo"></i>2 </button> </div> <div class="post-comment"> <textarea class="comment-box" name="" id="" cols="80" rows="1" spellcheck="false"></textarea> <button class="btn btn-sm btnComment" id="" name="">3 <i class="fas fa-redo"></i> </button> </div> ```
50,397,060
I have a dataframe like this:

```
>>df

L1      L0    desc_L0
4956    10    Hi
1509    nan   I am
1510    20    Here
1511    nan   where r u ?
```

I want to insert a new column `desc_L1` when the value of `L0` is null, and at the same time move the respective `desc_L0` value to `desc_L1`. Desired output:

```
L1      L0    desc_L0  desc_L1
4956    10    Hi       nan
1509    nan   nan      I am
1510    20    Here     nan
1511    nan   nan      where r u ?
```

How can this be done in a Pythonic way?
2018/05/17
[ "https://Stackoverflow.com/questions/50397060", "https://Stackoverflow.com", "https://Stackoverflow.com/users/7566673/" ]
First copy your series:

```
df['desc_L1'] = df['desc_L0']
```

Then use a mask to update the two series (the mask has to come from `L0`, the column holding the NaNs):

```
import numpy as np

mask = df['L0'].isnull()
df.loc[~mask, 'desc_L1'] = np.nan
df.loc[mask, 'desc_L0'] = np.nan
```
You can try this:

```
import numpy as np

df['desc_L1'] = np.where(df['L0'].isna(), df['desc_L0'], np.NaN)
df['desc_L0'] = np.where(df['L0'].isna(), np.NaN, df['desc_L0'])
```

Input:

```
     L0    desc_L0
0    10.0  hi
1    NaN   I am
2    20.0  Here
3    NaN   where are u?
```

Output:

```
     L0    desc_L0  desc_L1
0    10.0  hi       NaN
1    NaN   NaN      I am
2    20.0  Here     NaN
3    NaN   NaN      where are u?
```
49,172,957
I'm learning Python, and I want to check if the second largest number in a list is duplicated. I've tried several ways, but I couldn't do it. I have also searched Google for this issue; I found several answers for getting/printing the 2nd largest number from a list, but I couldn't find any answer on checking whether the 2nd largest number is duplicated. Can anyone help me, please? Here is my sample list:

```
list1 = [5, 6, 9, 9, 11]
list2 = [8, 9, 13, 14, 14]
```
2018/03/08
[ "https://Stackoverflow.com/questions/49172957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6544266/" ]
Here is a *1-liner*: ``` >>> list1 = [5, 6, 9, 9, 11] >>> list1.count(sorted(list1)[-2]) > 1 True ``` or using [heapq](https://docs.python.org/3/library/heapq.html#heapq.nlargest) ``` >>> import heapq >>> list1 = [5, 6, 9, 9, 11] >>> list1.count(heapq.nlargest(2, list1)[1]) > 1 True ```
This is a simple algorithm:

1. Make the values unique
2. Sort the list by value, descending
3. Take the second element
4. Check how many occurrences of this element exist in the original list

Code:

```
list1 = [5, 6, 9, 9, 11]
list2 = [8, 9, 13, 14, 14]

def check(data):
    # 1. Make data unique
    unique = list(set(data))
    # 2. Sort by value
    sorted_data = sorted(unique, reverse=True)
    # 3. Take the second element
    item = sorted_data[1]
    # 4. Check occurrences
    if data.count(item) > 1:
        return True
    else:
        return False

print(check(list1))
print(check(list2))
```

Output

```
True
False
```
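The four steps above can be condensed into a couple of lines; this sketch assumes the list contains at least two distinct values (otherwise the index lookup fails):

```python
def second_largest_duplicated(data):
    # unique -> sort ascending -> take second largest -> count in original
    second = sorted(set(data))[-2]
    return data.count(second) > 1

print(second_largest_duplicated([5, 6, 9, 9, 11]))    # True
print(second_largest_duplicated([8, 9, 13, 14, 14]))  # False
```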
49,172,957
I'm learning Python, and I want to check if the second largest number in a list is duplicated. I've tried several ways, but I couldn't do it. I have also searched Google for this issue; I found several answers for getting/printing the 2nd largest number from a list, but I couldn't find any answer on checking whether the 2nd largest number is duplicated. Can anyone help me, please? Here is my sample list:

```
list1 = [5, 6, 9, 9, 11]
list2 = [8, 9, 13, 14, 14]
```
2018/03/08
[ "https://Stackoverflow.com/questions/49172957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6544266/" ]
Here is a *1-liner*: ``` >>> list1 = [5, 6, 9, 9, 11] >>> list1.count(sorted(list1)[-2]) > 1 True ``` or using [heapq](https://docs.python.org/3/library/heapq.html#heapq.nlargest) ``` >>> import heapq >>> list1 = [5, 6, 9, 9, 11] >>> list1.count(heapq.nlargest(2, list1)[1]) > 1 True ```
`collections.Counter` with `sorted` offers one solution: ``` from collections import Counter lst1 = [5, 6, 9, 9, 11] lst2 = [8, 9, 13, 14, 14] res1 = sorted(Counter(lst1).items(), key=lambda x: -x[0])[1] # (9, 2) res2 = sorted(Counter(lst2).items(), key=lambda x: -x[0])[1] # (13, 1) ``` The result is a tuple of second largest item and its count. It is then simple to check if the item is duplicated, e.g. `res1[1] > 1`.
49,172,957
I'm learning Python, and I want to check if the second largest number in a list is duplicated. I've tried several ways, but I couldn't do it. I have also searched Google for this issue; I found several answers for getting/printing the 2nd largest number from a list, but I couldn't find any answer on checking whether the 2nd largest number is duplicated. Can anyone help me, please? Here is my sample list:

```
list1 = [5, 6, 9, 9, 11]
list2 = [8, 9, 13, 14, 14]
```
2018/03/08
[ "https://Stackoverflow.com/questions/49172957", "https://Stackoverflow.com", "https://Stackoverflow.com/users/6544266/" ]
Here is a *1-liner*: ``` >>> list1 = [5, 6, 9, 9, 11] >>> list1.count(sorted(list1)[-2]) > 1 True ``` or using [heapq](https://docs.python.org/3/library/heapq.html#heapq.nlargest) ``` >>> import heapq >>> list1 = [5, 6, 9, 9, 11] >>> list1.count(heapq.nlargest(2, list1)[1]) > 1 True ```
Here is my proposal ``` li = [5, 6, 9, 9, 11] li_uniq = list(set(li)) # list's elements are uniquified li_uniq_sorted = sorted(li_uniq) # sort in ascending order second_largest = li_uniq_sorted[-2] # get the 2nd largest -> 9 li.count(second_largest) # -> 2 (duplicated if > 1) ```
53,569,854
I have two lists. I want to add values in vp based on the list color, so I want this output:

```
total = [60,90,60]
```

because I want the code to compute the following: `total = [10+20+30, 40+50, 60]`

```
total = []
vp = [10,20,30,40,50,60]
color = [3,2,1]
```

I don't know how to do it. I began with something like this in Python 3:

```
for c, v in zip(color, Vp):
    total.append ....
```

thank you
2018/12/01
[ "https://Stackoverflow.com/questions/53569854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10532989/" ]
You can play with some slicing through lists to gather elements from original list based on content in another list, sum it up and append to final list: ``` total = [] vp = [10,20,30,40,50,60] color = [3,2,1] i = 0 for x in color: total.append(sum(vp[i:i+x])) i += x print(total) # [60, 90, 60] ```
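The same running-offset idea can also lean on the standard library's iterator tools; this variant (not from the original answer) consumes the list one group at a time:

```python
from itertools import islice

vp = [10, 20, 30, 40, 50, 60]
color = [3, 2, 1]

it = iter(vp)
# islice(it, n) pulls exactly n items off the shared iterator per group
total = [sum(islice(it, n)) for n in color]
print(total)  # [60, 90, 60]
```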
```
total = []
index = 0
for c in color:
    inside = 0
    for i in range(c):
        inside += vp[index]  # walk vp once, one element per step
        index += 1
    total.append(inside)

print(total)
```
53,569,854
I have two lists. I want to add values in vp based on the list color, so I want this output:

```
total = [60,90,60]
```

because I want the code to compute the following: `total = [10+20+30, 40+50, 60]`

```
total = []
vp = [10,20,30,40,50,60]
color = [3,2,1]
```

I don't know how to do it. I began with something like this in Python 3:

```
for c, v in zip(color, Vp):
    total.append ....
```

thank you
2018/12/01
[ "https://Stackoverflow.com/questions/53569854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10532989/" ]
This answer is not ideal for this example, but it might be useful in other situations when you want to convert a ragged representation into a padded, rectangular one. In this case, we convert the 1D array to a 2D array with padding, for example so that you can use `np.sum`:

```
import numpy as np

total = []
vp = [10,20,30,40,50,60]
color = [3,2,1]

# padding (numpy friendly)
max_len = max(color)
vp_with_padding = [
    vp[sum(color[:i]):sum(color[:i])+l] + [0] * (max_len - l)
    for i, l in enumerate(color)
]
# [[10, 20, 30], [40, 50, 0], [60, 0, 0]]

total = np.sum(vp_with_padding, 1)
# similar to:
#total = [sum(x) for x in vp_with_padding]
```
```
total = []
index = 0
for c in color:
    inside = 0
    for i in range(c):
        inside += vp[index]  # walk vp once, one element per step
        index += 1
    total.append(inside)

print(total)
```
53,569,854
I have two lists. I want to add values in vp based on the list color, so I want this output:

```
total = [60,90,60]
```

because I want the code to compute the following: `total = [10+20+30, 40+50, 60]`

```
total = []
vp = [10,20,30,40,50,60]
color = [3,2,1]
```

I don't know how to do it. I began with something like this in Python 3:

```
for c, v in zip(color, Vp):
    total.append ....
```

thank you
2018/12/01
[ "https://Stackoverflow.com/questions/53569854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10532989/" ]
You can play with some slicing through lists to gather elements from original list based on content in another list, sum it up and append to final list: ``` total = [] vp = [10,20,30,40,50,60] color = [3,2,1] i = 0 for x in color: total.append(sum(vp[i:i+x])) i += x print(total) # [60, 90, 60] ```
This answer is not ideal for this example, but it might be useful in other situations when you want to convert a ragged representation into a padded, rectangular one. In this case, we convert the 1D array to a 2D array with padding, for example so that you can use `np.sum`:

```
import numpy as np

total = []
vp = [10,20,30,40,50,60]
color = [3,2,1]

# padding (numpy friendly)
max_len = max(color)
vp_with_padding = [
    vp[sum(color[:i]):sum(color[:i])+l] + [0] * (max_len - l)
    for i, l in enumerate(color)
]
# [[10, 20, 30], [40, 50, 0], [60, 0, 0]]

total = np.sum(vp_with_padding, 1)
# similar to:
#total = [sum(x) for x in vp_with_padding]
```
53,569,854
I have two lists. I want to add values in vp based on the list color, so I want this output:

```
total = [60,90,60]
```

because I want the code to compute the following: `total = [10+20+30, 40+50, 60]`

```
total = []
vp = [10,20,30,40,50,60]
color = [3,2,1]
```

I don't know how to do it. I began with something like this in Python 3:

```
for c, v in zip(color, Vp):
    total.append ....
```

thank you
2018/12/01
[ "https://Stackoverflow.com/questions/53569854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10532989/" ]
You can play with some slicing through lists to gather elements from original list based on content in another list, sum it up and append to final list: ``` total = [] vp = [10,20,30,40,50,60] color = [3,2,1] i = 0 for x in color: total.append(sum(vp[i:i+x])) i += x print(total) # [60, 90, 60] ```
Using a list comprehension:

```
import numpy as np

vp = [10,20,30,40,50,60]
color = [3,2,1]

commu = np.cumsum(color)  # get the cumulative sum - [3,5,6]
commu = list([0]) + list(commu[0:len(commu)-1])  # [0,3,5] - these are the beginning indexes

total = [sum(vp[commu[i]:commu[i+1]]) if i < len(commu) - 1 else sum(vp[commu[i]:])
         for i in range(len(commu))]
total
# [60, 90, 60]
```
53,569,854
I have two lists. I want to add values in vp based on the list color, so I want this output:

```
total = [60,90,60]
```

because I want the code to compute the following: `total = [10+20+30, 40+50, 60]`

```
total = []
vp = [10,20,30,40,50,60]
color = [3,2,1]
```

I don't know how to do it. I began with something like this in Python 3:

```
for c, v in zip(color, Vp):
    total.append ....
```

thank you
2018/12/01
[ "https://Stackoverflow.com/questions/53569854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10532989/" ]
You can play with some slicing through lists to gather elements from original list based on content in another list, sum it up and append to final list: ``` total = [] vp = [10,20,30,40,50,60] color = [3,2,1] i = 0 for x in color: total.append(sum(vp[i:i+x])) i += x print(total) # [60, 90, 60] ```
Another option: build a list of slices, then map each slice to its sum.

Destructive:

```
slices = []
for x in color:
    slices.append(vp[0:x])
    del vp[0:x]

sums = [sum(x) for x in slices]
print(sums)
#=> [60, 90, 60]
```

Non-destructive:

```
slices = []
i = 0
for x in color:
    slices.append(vp[i:x+i])
    i += x

sums = [sum(x) for x in slices]
print(sums)
#=> [60, 90, 60]
```
53,569,854
I have two lists. I want to add values in vp based on the list color, so I want this output:

```
total = [60,90,60]
```

because I want the code to compute the following: `total = [10+20+30, 40+50, 60]`

```
total = []
vp = [10,20,30,40,50,60]
color = [3,2,1]
```

I don't know how to do it. I began with something like this in Python 3:

```
for c, v in zip(color, Vp):
    total.append ....
```

thank you
2018/12/01
[ "https://Stackoverflow.com/questions/53569854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10532989/" ]
This answer is not ideal for this example, but it might be useful in other situations when you want to convert a ragged representation into a padded, rectangular one. In this case, we convert the 1D array to a 2D array with padding, for example so that you can use `np.sum`:

```
import numpy as np

total = []
vp = [10,20,30,40,50,60]
color = [3,2,1]

# padding (numpy friendly)
max_len = max(color)
vp_with_padding = [
    vp[sum(color[:i]):sum(color[:i])+l] + [0] * (max_len - l)
    for i, l in enumerate(color)
]
# [[10, 20, 30], [40, 50, 0], [60, 0, 0]]

total = np.sum(vp_with_padding, 1)
# similar to:
#total = [sum(x) for x in vp_with_padding]
```
Using a list comprehension:

```
import numpy as np

vp = [10,20,30,40,50,60]
color = [3,2,1]

commu = np.cumsum(color)  # get the cumulative sum - [3,5,6]
commu = list([0]) + list(commu[0:len(commu)-1])  # [0,3,5] - these are the beginning indexes

total = [sum(vp[commu[i]:commu[i+1]]) if i < len(commu) - 1 else sum(vp[commu[i]:])
         for i in range(len(commu))]
total
# [60, 90, 60]
```
53,569,854
I have two lists. I want to add values in vp based on the list color, so I want this output:

```
total = [60,90,60]
```

because I want the code to compute the following: `total = [10+20+30, 40+50, 60]`

```
total = []
vp = [10,20,30,40,50,60]
color = [3,2,1]
```

I don't know how to do it. I began with something like this in Python 3:

```
for c, v in zip(color, Vp):
    total.append ....
```

thank you
2018/12/01
[ "https://Stackoverflow.com/questions/53569854", "https://Stackoverflow.com", "https://Stackoverflow.com/users/10532989/" ]
This answer is not ideal for this example, but it might be useful in other situations when you want to convert a ragged representation into a padded, rectangular one. In this case, we convert the 1D array to a 2D array with padding, for example so that you can use `np.sum`:

```
import numpy as np

total = []
vp = [10,20,30,40,50,60]
color = [3,2,1]

# padding (numpy friendly)
max_len = max(color)
vp_with_padding = [
    vp[sum(color[:i]):sum(color[:i])+l] + [0] * (max_len - l)
    for i, l in enumerate(color)
]
# [[10, 20, 30], [40, 50, 0], [60, 0, 0]]

total = np.sum(vp_with_padding, 1)
# similar to:
#total = [sum(x) for x in vp_with_padding]
```
Another option: build a list of slices, then map each slice to its sum.

Destructive:

```
slices = []
for x in color:
    slices.append(vp[0:x])
    del vp[0:x]

sums = [sum(x) for x in slices]
print(sums)
#=> [60, 90, 60]
```

Non-destructive:

```
slices = []
i = 0
for x in color:
    slices.append(vp[i:x+i])
    i += x

sums = [sum(x) for x in slices]
print(sums)
#=> [60, 90, 60]
```
183,033
This is really two questions, but they are so similar, and to keep it simple, I figured I'd just roll them together: * **Firstly**: Given an established C# project, what are some decent ways to speed it up beyond just plain in-code optimization? * **Secondly**: When writing a program from scratch in C#, what are some good ways to greatly improve performance? **Please stay away from general optimization techniques unless they are *C# specific*.** This has previously been asked for [Python](https://stackoverflow.com/questions/172720/speeding-up-python), [Perl](https://stackoverflow.com/questions/177122/speeding-up-perl), and [Java](https://stackoverflow.com/questions/179745/speeding-up-java).
2008/10/08
[ "https://Stackoverflow.com/questions/183033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145/" ]
Don't use too much reflection.
For Windows Forms on XP and Vista: Turn double buffering on across the board. It does cause transparency issues, so you would definitely want to test the UI: ``` protected override System.Windows.Forms.CreateParams CreateParams { get { CreateParams cp = base.CreateParams; cp.ExStyle = cp.ExStyle | 0x2000000; return cp; } } ```
183,033
This is really two questions, but they are so similar, and to keep it simple, I figured I'd just roll them together: * **Firstly**: Given an established C# project, what are some decent ways to speed it up beyond just plain in-code optimization? * **Secondly**: When writing a program from scratch in C#, what are some good ways to greatly improve performance? **Please stay away from general optimization techniques unless they are *C# specific*.** This has previously been asked for [Python](https://stackoverflow.com/questions/172720/speeding-up-python), [Perl](https://stackoverflow.com/questions/177122/speeding-up-perl), and [Java](https://stackoverflow.com/questions/179745/speeding-up-java).
2008/10/08
[ "https://Stackoverflow.com/questions/183033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145/" ]
Use a decent quality profiler and determine where your bottlenecks are. *Then* start asking how to improve performance. Anyone who makes blanket statements like 'avoid reflection' without understanding both your performance profile and your problem domain should be shot (or at least re-educated). And given the size of the .NET landscape it's pretty much meaningless to talk about C# optimization: are we talking about WinForms, ASP.NET, BizTalk, Workflow, SQL-CLR? Without the context, even general guidelines may at best be a waste of time. Consider also what you mean by 'speed it up' and 'improve performance'. Do you mean greater resource efficiency, or lower perceived wait time for an end user (assuming there is one)? These are very different problems to solve. Given the forum, I feel obliged to point out that there is some quite good coverage of these topics in Code Complete. Not C#-specific, mind. But that's a good thing. Bear in mind that language-specific micro-optimisations might well be subsumed into the next version of whatever compiler you're using. And if the difference between for and foreach is a big deal to you, you're probably writing C++ anyway, right? [I liked RedGate's ANTS Profiler, but I think it could be bettered] With that out of the way, some thoughts: * Use typeof(SomeType) in preference to instance.GetType() when possible * Use foreach in preference to for * Avoid boxing * Up to (I think) 3 strings it's OK to do StringA + StringB + StringC. After that you should use a StringBuilder
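The StringBuilder point generalizes across managed languages: repeated `+` allocates a fresh intermediate string at every step, while a single builder/join pass does not. A sketch of the same idea in Python (not C#, and purely illustrative):

```python
parts = ['alpha', 'beta', 'gamma', 'delta']

# Repeated + creates an intermediate string per step:
slow = parts[0] + parts[1] + parts[2] + parts[3]

# A single join builds the result in one pass:
fast = ''.join(parts)
```

Both produce the same string; the difference is the number of temporary allocations, which grows with the number of pieces.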
NGEN will help with some code, but do not bank on it. Personally, if your design is bad/slow, there is not much you can do. The best suggestion in such a case is to implement some form of caching of expensive tasks.
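The caching advice is language-agnostic; as a hedged sketch (in Python rather than C#), the standard library's memoization decorator keeps repeated calls from redoing an expensive computation:

```python
from functools import lru_cache

call_count = 0

@lru_cache(maxsize=None)
def expensive_task(key):
    global call_count
    call_count += 1  # stands in for a slow query or computation
    return key * 2

results = [expensive_task(21) for _ in range(3)]
# Three calls, but the body runs only once; the other two hit the cache.
```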
183,033
This is really two questions, but they are so similar, and to keep it simple, I figured I'd just roll them together: * **Firstly**: Given an established C# project, what are some decent ways to speed it up beyond just plain in-code optimization? * **Secondly**: When writing a program from scratch in C#, what are some good ways to greatly improve performance? **Please stay away from general optimization techniques unless they are *C# specific*.** This has previously been asked for [Python](https://stackoverflow.com/questions/172720/speeding-up-python), [Perl](https://stackoverflow.com/questions/177122/speeding-up-perl), and [Java](https://stackoverflow.com/questions/179745/speeding-up-java).
2008/10/08
[ "https://Stackoverflow.com/questions/183033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145/" ]
A lot of slowness is related to database access. Make your database queries efficient and you'll do a lot for your app.
I recommend these books: [Effective C#](https://rads.stackoverflow.com/amzn/click/com/0321245660). [More Effective C#](https://rads.stackoverflow.com/amzn/click/com/0321485890)
183,033
This is really two questions, but they are so similar, and to keep it simple, I figured I'd just roll them together: * **Firstly**: Given an established C# project, what are some decent ways to speed it up beyond just plain in-code optimization? * **Secondly**: When writing a program from scratch in C#, what are some good ways to greatly improve performance? **Please stay away from general optimization techniques unless they are *C# specific*.** This has previously been asked for [Python](https://stackoverflow.com/questions/172720/speeding-up-python), [Perl](https://stackoverflow.com/questions/177122/speeding-up-perl), and [Java](https://stackoverflow.com/questions/179745/speeding-up-java).
2008/10/08
[ "https://Stackoverflow.com/questions/183033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145/" ]
Unfortunately, relatively few optimisations are language specific. The basics apply across languages: * Measure performance against realistic loads * Have clearly-defined goals to guide you * Use a good profiler * Optimise architecture/design relatively early * Only micro-optimise when you've got a proven problem When you've absolutely proved you need to micro-optimise, the profiler tends to make it obvious what to look for - things like avoiding boxing and virtual calls. Oh, one thing I can think of which is .NET-specific: if you need to make a call frequently and are currently using reflection, [convert those calls into delegates](http://msmvps.com/blogs/jon_skeet/archive/2008/08/09/making-reflection-fly-and-exploring-delegates.aspx). EDIT: The other answers suggesting using generics and StringBuilder etc are of course correct. I (probably wrongly) assumed that those optimisations were too "obvious" ;)
Use Ngen.exe (should come shipped with Visual Studio): <http://msdn.microsoft.com/en-us/library/6t9t5wcf(VS.80).aspx>

> 
> The Native Image Generator (Ngen.exe) is a tool that improves the performance of managed applications. Ngen.exe creates native images, which are files containing compiled processor-specific machine code, and installs them into the native image cache on the local computer. The runtime can use native images from the cache instead of using the just-in-time (JIT) compiler to compile the original assembly.
> 
> 
183,033
This is really two questions, but they are so similar, and to keep it simple, I figured I'd just roll them together: * **Firstly**: Given an established C# project, what are some decent ways to speed it up beyond just plain in-code optimization? * **Secondly**: When writing a program from scratch in C#, what are some good ways to greatly improve performance? **Please stay away from general optimization techniques unless they are *C# specific*.** This has previously been asked for [Python](https://stackoverflow.com/questions/172720/speeding-up-python), [Perl](https://stackoverflow.com/questions/177122/speeding-up-perl), and [Java](https://stackoverflow.com/questions/179745/speeding-up-java).
2008/10/08
[ "https://Stackoverflow.com/questions/183033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145/" ]
One simple thing is to ensure that your build configuration is set to "Release". This will enable optimizations and eliminate debugging information, making your executable smaller. More info [on MSDN](http://msdn.microsoft.com/en-us/library/ms173083.aspx) if needed.
A lot of slowness is related to database access. Make your database queries efficient and you'll do a lot for your app.
183,033
This is really two questions, but they are so similar, and to keep it simple, I figured I'd just roll them together: * **Firstly**: Given an established C# project, what are some decent ways to speed it up beyond just plain in-code optimization? * **Secondly**: When writing a program from scratch in C#, what are some good ways to greatly improve performance? **Please stay away from general optimization techniques unless they are *C# specific*.** This has previously been asked for [Python](https://stackoverflow.com/questions/172720/speeding-up-python), [Perl](https://stackoverflow.com/questions/177122/speeding-up-perl), and [Java](https://stackoverflow.com/questions/179745/speeding-up-java).
2008/10/08
[ "https://Stackoverflow.com/questions/183033", "https://Stackoverflow.com", "https://Stackoverflow.com/users/145/" ]
One simple thing is to ensure that your build configuration is set to "Release". This will enable optimizations and eliminate debugging information, making your executable smaller. More info [on MSDN](http://msdn.microsoft.com/en-us/library/ms173083.aspx) if needed.
Caching items that result from a query:

```
private Item _myResult;

public Item Result
{
    get
    {
        if (_myResult == null)
        {
            _myResult = Database.DoQueryForResult();
        }
        return _myResult;
    }
}
```

It's a basic technique that is frequently overlooked by starting programmers, and one of the EASIEST ways to improve performance in an application. Answer ported from a question that was ruled a dupe of this one.
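The same lazy-initialization pattern, sketched in Python for comparison (`ReportPage` and its query counter are invented stand-ins for the expensive `Database.DoQueryForResult()` call):

```python
class ReportPage:
    def __init__(self):
        self._result = None
        self.query_count = 0  # instruments how often the "query" runs

    @property
    def result(self):
        # Compute on first access, reuse on every later access.
        if self._result is None:
            self.query_count += 1  # stands in for the expensive lookup
            self._result = 'item'
        return self._result

page = ReportPage()
first, second = page.result, page.result
```

However many times `result` is read, the expensive step runs once.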