Q: GAMS, matrix direct assignment I want to assign values to a 3-D table in GAMS, but it doesn't seem to work as it does in Matlab. Any luck? Code is as follows, and the problem is at the last few lines:

Sets
  n nodes          / Sto , Lon , Par , Ber , War , Mad , Rom /
  i scenarios      / 1 * 4 /
  k capacity level / L, N, H / ;
alias(n,m);
Table balance(n,i) traffic balance for different nodes
      1    2    3    4
Sto  50   50  -50  -50
Lon -40   40  -40   40
Par   0    0    0    0
Ber   0    0    0    0
War  40  -40   40  -40
Mad   0    0    0    0
Rom -50  -50   50   50 ;
Scalar r fluctuation rate of the capacity level /0.15/;
Parameter p(k) probability of each level / L 0.25 N 0.5 H 0.25 / ;
Table nor_cap(n,m) Normal capacity level from n to m
     Sto  Lon  Par  Ber  War  Mad  Rom
Sto    0   11   14   25   30    0    0
Lon   11    0   21    0    0   14    0
Par   14   21    0   22    0   31   19
Ber   25    0   22    0   26    0   18
War   30    0    0   26    0   18   22
Mad    0   14   31    0   18    0   15
Rom    0    0   19   18   22   15    0 ;
Table max_cap(n,m,k) capacity level under each k
max_cap(n,m,'N')=nor_cap(n,m)
max_cap(n,m,'L')=nor_cap(n,m)*(1-r)
max_cap(n,m,'H')=nor_cap(n,m)*(1+r);

A: The final assignment to a 3-D matrix should be done with PARAMETER as opposed to TABLE. In general I would also note that TABLE is very restrictive (2-dimensional, text input inside the code). You might want to consider $GDXIN (or EXECUTE_LOAD) and some of the GAMS utilities for loading xls or csv files. As a user of both MATLAB and GAMS I would note that GAMS depends on "indices" for every array, but otherwise they can be quite similar. In your case max_cap(n,m,k) would be something like the maximum capacity between from_city and to_city under each capacity level scenario. Your matrix needs to be declared as a PARAMETER, which can be any n-dimensional (indexed) matrix, including even a SCALAR. Also, try the GAMS mailing list if you really need an answer quickly; the number of proficient GAMS users globally can't be more than a few thousand, so it might be hard to find a quick answer on StackOverflow, awesome as it is for the more common languages.
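For readers coming from MATLAB, the computation those three assignments perform can be sketched in plain Python (a small subset of the data; the names mirror the question, and this is only an illustration of the arithmetic, not GAMS syntax):

```python
# Sketch of what the three GAMS assignments compute, in plain Python.
# nor_cap/max_cap/r mirror the question; the data is a small subset.
r = 0.15  # fluctuation rate

# nor_cap[n][m]: normal capacity from node n to node m
nor_cap = {
    "Sto": {"Lon": 11, "Par": 14},
    "Lon": {"Sto": 11, "Mad": 14},
}

# max_cap[n][m][k]: capacity under level k, built like
# max_cap(n,m,'L'/'N'/'H') = nor_cap(n,m) * factor
factors = {"L": 1 - r, "N": 1.0, "H": 1 + r}
max_cap = {
    n: {m: {k: cap * f for k, f in factors.items()} for m, cap in row.items()}
    for n, row in nor_cap.items()
}

print(round(max_cap["Sto"]["Lon"]["L"], 2),
      round(max_cap["Sto"]["Lon"]["H"], 2))  # 9.35 12.65
```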
{ "language": "en", "url": "https://stackoverflow.com/questions/7507664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SiteMapPath not showing sitemap as expected Using a <asp:SiteMapPath> control with the Web.sitemap file below:

<asp:SiteMapDataSource ID="SiteMapDataSource1" runat="server" />
<asp:SiteMapPath ID="SiteMapPath1" runat="server"></asp:SiteMapPath>
<asp:Image ID="Image1" runat="server" ImageUrl="~/closed-sign.jpg" Height="300" Width="400" />

While running, it's not showing the way it should show, as in this example. It's only showing the image, with no sitemap. How can this be fixed?

<?xml version="1.0" encoding="utf-8" ?>
<siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0" >
  <siteMapNode url="Default.aspx" title="Home" description="Home Page">
    <siteMapNode url="StandardToolBox.aspx" title="Standard ToolBox Control" description="Standard ToolBox Control">
      <siteMapNode url="BulletedList.aspx" title="BulletedList Example" description="BulletedList Control Simple Example" />
      <siteMapNode url="CheckBox.aspx" title="CheckBox Example" description="CheckBox Control Simple Example" />
      <siteMapNode url="CheckBoxList.aspx" title="CheckBoxList Example" description="CheckBoxList Control Simple Example" />
      <siteMapNode url="Image.aspx" title="Image Control Example" description="Image Control Simple Example"/>
    </siteMapNode>
    <siteMapNode url="DataToolBox.aspx" title="Data ToolBox Control" description="Data ToolBox Control">
      <siteMapNode url="SqlDataSource.aspx" title="SqlDataSource Example" description="SqlDataSource Simple Example" />
      <siteMapNode url="XmlDataSource.aspx" title="XmlDataSource Example" description="XmlDataSource Simple Example" />
    </siteMapNode>

A: The code you're using works fine. Likely the page you're looking at is NOT anywhere in the list on the site map. Ensure you're running this sample on a page named like:
* StandardToolBox.aspx
* CheckBox.aspx
* DataToolBox.aspx
This is a breadcrumb control. It will only show links back to its parents. It will not show links to siblings.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to keep an important column value in a GROUP BY Here is the example data set I have:

id | messageread | timestamp
1  | 'yes'       | 999
1  | 'yes'       | 0
1  | 'no'        | 0

I'm doing this:

SELECT * FROM nvm GROUP BY id

You will note that the data set is already ORDER BY timestamp DESC. As it should, MySQL is returning the first row. What I'd want is MySQL to return the first row, but with messageread='no' IF messageread='no' in one of the grouped rows, no matter whether the normally returned row is 'yes' or 'no'. Is that possible with MySQL? I promised myself to do as much as possible with MySQL and not PHP :-) Thanks!

A: In order to make sure all columns are from the same row, do:

SELECT *
FROM table1 t1
LEFT JOIN table1 t2
  ON ((t1.messageread, t1.id) < (t2.messageread, t2.id))
WHERE t2.id IS NULL

This will select the minimum or maximum row from table1, and all columns will be from the same row. If it doesn't work you need to change the < to a >; it's late here and I cannot test the query, but it should do the job.

Warning, antipattern ahead
This has the smell of rotten eggs all over it, but if you want to mix and match fast do:

SELECT id, MIN(messageread), timestamp AS random_timestamp
FROM table1
GROUP BY id

A: You'll need to use a CASE expression along with ANY.

SELECT
  CASE WHEN 'no' = ANY(SELECT messageread FROM nvm t2 WHERE t2.id = t1.id)
       THEN 'no' ELSE 'yes' END AS messageread,
  timestamp
FROM nvm t1
GROUP BY t1.id

Note: you're basically getting a random timestamp here; do you want MAX(timestamp) or something?
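The MIN(messageread) trick in the "antipattern" query works because 'no' sorts before 'yes'; a quick way to check the idea, using Python's built-in SQLite in place of MySQL (table and column names taken from the question, MAX(timestamp) added to make the timestamp deterministic):

```python
import sqlite3

# 'no' < 'yes' lexicographically, so MIN(messageread) returns 'no' whenever
# any row in the group has it; MAX(timestamp) replaces the "random" timestamp.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nvm (id INTEGER, messageread TEXT, timestamp INTEGER)")
conn.executemany(
    "INSERT INTO nvm VALUES (?, ?, ?)",
    [(1, "yes", 999), (1, "yes", 0), (1, "no", 0), (2, "yes", 5)],
)

rows = conn.execute(
    "SELECT id, MIN(messageread), MAX(timestamp) FROM nvm GROUP BY id ORDER BY id"
).fetchall()
print(rows)  # [(1, 'no', 999), (2, 'yes', 5)]
```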
{ "language": "en", "url": "https://stackoverflow.com/questions/7507666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Get the source code from an html file I am wondering if you could please help with generating .cpp/.h file from the following html file in a programmatic way (using whatever scripting language, or programming language, or even using editors such as vi or emacs): <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" lang="en-US" xml:lang="en-US"> <head> <title>Class</title> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> </head> <body link="blue" vlink="purple" bgcolor="#FFFABB" text="black"> <h2><font face="Helvetica">Code Fragment: Class</font></h2> </center><br><dl><dd><pre> <font color=#A000A0>template</font> &lt;<font color=#A000A0>typename</font> G&gt; <font color=#A000A0>class</font> Components : <font color=#A000A0>public</font> DFS&lt;G&gt; { <font color=#0000FF>// count components</font> <font color=#A000A0>private</font>: <font color=#A000A0>int</font> nComponents; <font color=#0000FF>// num of components</font> <font color=#A000A0>public</font>: <font color=#000000>Components</font>(<font color=#A000A0>const</font> G& g): DFS&lt;G&gt;(g) {} <font color=#0000FF>// constructor</font> <font color=#A000A0>int</font> <font color=#A000A0>operator</font>()(); <font color=#0000FF>// count components</font> }; </dl> </body> </html> If you could please point out how this was done in the other direction too, that would be great. Thanks a lot. A: Does this work for you? [18:56:44 jaidev@~]$ lynx --dump foo.html Code Fragment: Class template <typename G> class Components : public DFS<G> { // count components private: int nComponents; // num of components public: Components(const G& g): DFS<G>(g) {} // constructor int operator()(); // count components }; [18:56:49 jaidev@~]$ Edit: For the reverse direction. 
If you use vim as your editor, you can enter :TOhtml to generate a syntax-highlighted HTML version of your code in a new buffer. It generates HTML based on your vim colorscheme. To change the colorscheme, use the :colorscheme <name> command.

A: PHP script:

$doc = new DOMDocument();
$doc->loadHTMLFile("file.html");
$xpath = new DOMXpath($doc);
$str = '';
foreach ($xpath->query("//dl//text()") as $node) {
    $str .= $node->nodeValue . ' ';
}
file_put_contents('file.cpp', $str);

contents of file.cpp:

template < typename G> class Components : public DFS<G> { // count components private : int nComponents; // num of components public : Components ( const G& g): DFS<G>(g) {} // constructor int operator ()(); // count components };

A: You could use regular expressions to...
* ...keep only what's in the <body> of the HTML page,
* ...strip all the HTML tags (everything that looks like <.*> should be removed from the file),
* ...unescape special characters such as &lt;, &gt;, &amp; etc.
What's left should be the code you're looking for.

A: Another option for going from HTML to the source code is the html2text utility, which is often found installed in many Linux distributions.

matteo@teomint:~/Desktop$ html2text out.html
***** Code Fragment: Class *****
template <typename G> class Components : public DFS<G> { // count components private: int nComponents; // num of components public: Components(const G& g): DFS<G>(g) {} // constructor int operator()(); // count components };

A:
* Fix the HTML. You're missing some closing tags.
* Get PHP out:
  * Obtain the pre code block with DOMDocument
  * strip_tags() from the result
* Profit.

A: If you're trying to strip all HTML tags to get back the original, non-highlighted source code, then you have two options that I can think of:
* Parse the DOM tree and just grab all relevant text.
* Use some regular expressions to remove the tags themselves. For example, maybe "s///" would be a good start?
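The regex recipe sketched in these answers (keep the code-bearing block, strip every tag, unescape entities) fits in a few lines of Python. This is a rough sketch, good enough for one known file like the question's but not for arbitrary HTML, where a real parser is safer:

```python
import re
from html import unescape

def html_to_source(html: str) -> str:
    # Keep only the <dl> block that holds the highlighted code (falls back
    # to the whole input if no such block exists).
    body = re.search(r"<dl>(.*?)</dl>", html, re.S)
    text = body.group(1) if body else html
    text = re.sub(r"<[^>]+>", "", text)  # strip all tags
    return unescape(text).strip()        # &lt; -> <, &gt; -> >, &amp; -> &

snippet = '<dl><dd><pre><font color=#A000A0>class</font> C : DFS&lt;G&gt; {};</pre></dl>'
print(html_to_source(snippet))  # class C : DFS<G> {};
```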
{ "language": "en", "url": "https://stackoverflow.com/questions/7507676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to generate array-list of all combinations of a table in C/C++ I want to make automated measurements varying several parameters. There is a table with variable row/column-number containing parameter values, e.g.:

a1 b1 c1
a2 b2 c2
a3 b3 c3

Is there an easy way of generating a list of arrays containing all combinations in column direction, like this:

a1 a2 a3
b1 a2 a3
c1 a2 a3
a1 b2 a3
b1 b2 a3
...
c1 c2 c3

A 3x3 table should result in 27 combinations (3^3). The algorithm should, if possible, be in C/C++; STL/Qt would also be great. Thank you for any hint!
P.S.: It looks easy, but I have sat on this problem for 2 hours already! :-(

A: You can use recursion:

int selection[rows]; // Stores which item is selected for each row

void func(int row_num) {
    if (row_num == rows) { // If we've selected for all the rows
        // Do your thing with selection[]
        return;
    }
    for (int i = 0; i < columns; i++) { // For each possible selection you can make for row_num
        selection[row_num] = i; // Choose it
        func(row_num + 1); // Recurse over all possible combinations for the remaining rows
    }
}

func(0); // Goes over all possibilities

A: Here is the promised code. It requires C++11 to run, but it's not that hard to modify it to work with C++98.

#include <iostream>
#include <vector>
#include <string>
#include <limits>
#include <stdexcept>
#include <algorithm>

typedef ::std::vector< ::std::string > rowvec_t;
typedef ::std::vector< rowvec_t > combovec_t;

constexpr unsigned long long int_log(unsigned long long v, unsigned long long base)
{
   return (v <= base) ? 0 : (1u + int_log(v / base, base));
}

constexpr unsigned long long int_pow(unsigned long long base, unsigned long long exp)
{
   return (exp < 1) ? 1 : ((exp & 1) ?
          (base * int_pow(base, exp - 1)) :
          int_pow(base * base, exp / 2));
}

combovec_t count_em_all(const combovec_t &input)
{
   const combovec_t::size_type rows = input.size();
   if (rows <= 0) {
      return combovec_t();
   }
   const rowvec_t::size_type cols = input[0].size();
   if (int_log(::std::numeric_limits<unsigned long long>::max(), cols) < rows) {
      throw ::std::overflow_error("Too many rows and columns");
   }
   const unsigned long long total_ct = int_pow(cols, rows);
   combovec_t result;
   for (unsigned long long ct = 0; ct < total_ct; ++ct) {
      rowvec_t cur_row;
      unsigned long long alldigits = ct;
      for (unsigned outcol = 0; outcol < rows; ++outcol) {
         const unsigned long long digit = alldigits % cols;
         alldigits /= cols;
         cur_row.emplace_back(input[outcol][digit]);
      }
      result.emplace_back(::std::move(cur_row));
   }
   return ::std::move(result);
}

const combovec_t test = {
   { "a1", "b1", "c1" },
   { "a2", "b2", "c2" },
   { "a3", "b3", "c3" }
};

int main(int argc, const char *argv[])
{
   combovec_t result = count_em_all(test);
   for (rowvec_t &row: result) {
      for (::std::string &col: row) {
         ::std::cout << col << ' ';
      }
      ::std::cout << '\n';
   }
   return 0;
}

This basically treats the problem as the problem of counting in base b (where b is the number of columns). Each output is an n-digit number (where n is the number of rows in the input) where each digit is one of the columns.

A: @quasiverse: Thank you, it looks like a recursive function is the most elegant way. OK, my version looks ugly but works now. I post it only because of the boring comments.

// array containing column-indexes, all set to column 0
QByteArray idxs(rows,0);
// array containing max indexes for each column
QByteArray endIdxs(rows,cols-1);
// append the initial all-zero combination, then repeat until all indexes
// in idxs are max - all combinations done
_varParamTestList.append( idxs );
do {
    for( int r = 0; r < rows; ++r ) {
        int v = idxs[r];
        v++;
        if ( v >= cols ) {
            idxs[r] = 0;    // overflow: reset and carry into the next row
            continue;
        } else {
            idxs[r] = v;
            _varParamTestList.append( idxs );
            break;
        }
    }
} while ( idxs != endIdxs );
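For comparison, the same enumeration is a one-liner in Python via itertools.product. The 27 results come out in a different order than the question lists them (the last row varies fastest here), but the set of combinations is identical:

```python
from itertools import product

# Each row of the table is one list of choices; the wanted combinations are
# exactly the Cartesian product of the rows (3^3 = 27 of them, not 3!).
table = [
    ["a1", "b1", "c1"],
    ["a2", "b2", "c2"],
    ["a3", "b3", "c3"],
]

combos = [list(combo) for combo in product(*table)]
print(len(combos))  # 27
print(combos[0])    # ['a1', 'a2', 'a3']
print(combos[-1])   # ['c1', 'c2', 'c3']
```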
{ "language": "en", "url": "https://stackoverflow.com/questions/7507682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-2" }
Q: Dismiss keyboard button doesn't work on iOS For some reason, in my app, the dismiss keyboard button on the lower right hand corner never works. I am able to use resignFirstResponder on the UITextField correctly, but if the user tries to use that button, nothing happens. Any ideas?

- (BOOL)textFieldShouldEndEditing:(UITextField *)textField{
    NSLog(@"shouldend");
    return YES;
}

- (BOOL) textFieldShouldBeginEditing:(UITextField *)textField {
    NSLog(@"should begin");
    return YES;
}

Additional Info: This is a problem for the entire app. The button never works in any textField in my app.

A: You probably need to implement the UITextFieldDelegate method – textFieldShouldEndEditing::

- (BOOL)textFieldShouldEndEditing:(UITextField *)textField {
    return YES;
}

To test what's going on, you can also put simple NSLogs in – textFieldDidBeginEditing: and – textFieldDidEndEditing:. This will tell you whether the gesture is being received or not.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Tab suppression callback I have a tab app and I'd like to know if there is a way to get a notification, or to set up a callback, for when a user removes the tab from his Facebook page. I ask because I noticed that even if you remove the tab, you don't necessarily deauthorize the application (and for me they are not the same operation at all). Thank you for helping me.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to handle multitouch I am writing an app in xcode using box2d. Right now I am using the code below. The problem is that it will only handle one touch event. How can I make my code handle all of the touch events, in this case check the location of each touch. I also want to store the touches so that when they end I can use the proper code to end whatever the individual touches started. -(void) ccTouchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *myTouch = [touches anyObject]; CGPoint location = [myTouch locationInView:[myTouch view]]; location = [[CCDirector sharedDirector] convertToGL:location]; b2Vec2 locationWorld = b2Vec2(location.x/PTM_RATIO, location.y/PTM_RATIO); CGSize screenSize = [CCDirector sharedDirector].winSize; if (locationWorld.x >= screenSize.width*2/5/PTM_RATIO && locationWorld.x <= screenSize.width*3.25/5/PTM_RATIO) { //do something } else if (0 && locationWorld.x <= screenSize.width*2/5/PTM_RATIO) { //do something else } } A: It should be something like this: - (void)ccTouchesBegan:(NSSet*)touches withEvent:(UIEvent*)event { for (UITouch *touch in touches) { if (touch.phase == UITouchPhaseBegan) { // Insert code here } } } A: You can get the number of fingers touching the screen with: NSSet *touchEvents = [event allTouches]; You can get each touches individual location, multi-taps, etc., using and enumerated for loop and stepping through touchEvents. A: In addition to iterating through the set of touches, you'll need to make sure that the view is multi-touch enabled. 
This can be done in Interface Builder/Xcode 4 A: In COCOS2D-X void LayerHero::ccTouchesEnded(CCSet* touches, CCEvent* event) { CCTouch* touch = (CCTouch*)( touches->anyObject() ); CCPoint location = touch->getLocationInView(); location = CCDirector::sharedDirector()->convertToGL(location); CCSize visibleSize = CCDirector::sharedDirector()->getVisibleSize(); if(location.x<visibleSize.width/2) { } else if(location.x>visibleSize.width/2) { CCLOG("We are in the touch2 %f",location.x); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/7507689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Algorithm for placing a grid over a disordered set of points Given a large set (tens of thousands up to millions) of disordered points represented as 3D Cartesian vectors, what's a good algorithm for making a regular square grid (of user-defined spacing) that encloses all of the points? Some constraints: * *The grid needs to be square and regular *I need to be able to adjust the grid spacing (the length of a side of one of the squares), ideally with a single variable *I want a grid of minimum size, ie every 'block' in the grid should contain at least one of the disordered points, and every disordered point should be enclosed in a 'block' *The return value of the algorithm should be the list of coordinates of the grid points To illustrate in 2D, given this set of points: for some grid spacing X, one possible return value of the algorithm would be the coordinates of these red points (dashed lines for illustration purposes only): and for grid spacing X/2, one possible return value of the algorithm would be the coordinates of these red points (dashed lines for illustration purposes only): For anyone who's interested, the disordered points that I'm working with are the atomic coordinates of large protein molecules, like what you can get out of a .pdb file. Python is preferred for solutions, although pseudocode is also good. EDIT: I think that my first description of what I needed was maybe a little fuzzy, so I added some constraints and images in order to clarify things. A: I'd suggest you make a k-d tree. 
It's fast-ish, simple, and easy to implement: And Wikipedia code: class Node: pass def kdtree(point_list, depth=0): if not point_list: return # Select axis based on depth so that axis cycles through all valid values k = len(point_list[0]) # assumes all points have the same dimension axis = depth % k # Sort point list and choose median as pivot element point_list.sort(key=lambda point: point[axis]) median = len(point_list) // 2 # choose median # Create node and construct subtrees node = Node() node.location = point_list[median] node.left_child = kdtree(point_list[:median], depth + 1) node.right_child = kdtree(point_list[median + 1:], depth + 1) return node You'd have to slightly modify it, though, to fit within your constraints. A: How about Voronoi Diagram? It can be generated in O(n log n) using Fortunes algorithm. I don't know if it addresses your problem, but Voronoi Diagrams are very "narural". They are very common in the nature. Example (from Wikipedia): A: Because you are asking for a regular square grid of user-specified spacing, it sounds like a reasonably straightforward approach should work. Start by passing through the data to work out the minimum and maximum co-ordinate in each dimension. Work out the number of steps of user-specified spacing required to cover the distance between maximum and minimum. Pass through the data again to allocate each point to a cell in the grid, using a grid with a point at the minimum of each co-ordinate and the specified spacing (e.g. X_cell = Math.floor((x_i - x_min) / spacing)). Use a dictionary or an array to record the number of points in each cell. Now print out the co-ordinates of the cells with at least one point in them. 
You do have some freedom that I have not attempted to optimise: unless the distance between minimum and maximum co-ordinate is an exact multiple of the grid spacing, there will be some slop that allows you to slide the grid around and still have it contain all the points: at the moment the grid starts at the position of the lowest point, but it probably ends before the highest points, so you have room to move it down a little in each dimension. As you do this, some points will move from cell to cell, and the number of occupied cells will change. If you consider only moves in one dimension at a time, you can work out what will happen reasonably efficiently. Work out the distance in that dimension between each point and the maximum co-ordinate in that dimension of its cell, and then sort these values. As you move the grid down, the point with the smallest distance to its maximum co-ordinate will swap cells first, and you can iterate through these points one by one by moving through them in sorted order. If you update the counts of points in cells as you do this you can work out which shift minimises the number of occupied cells. Of course, you have three dimensions to worry about. You could work on them one at a time until you getting reductions in the number of cells. This is a local minimum, but may not be a global minimum. One way to look for other local minima is to start again from a randomly chosen starting point. A: Find a minimum-area square that encloses all of the points. Repeatedly subdivide each square into 4 sub-squares (so going from 1 to 4 to 16 to 64 to …). Stop just before one of the squares becomes empty. It's not hard to prove that the resulting grid is at most four times as coarse as the optimal solution (key insight: an empty square is guaranteed to contain at least one square from any grid at least twice as fine). Probably that constant can be reduced by introducing a random translation. 
A: I have experience with grid clustering in 2D and implemented an example in C# code. http://kunuk.wordpress.com/2011/09/15/clustering-grid-cluster/ This can handle steps 1, 2 and 4. You will have to modify the code and update it to 3D-space. Hope this gives you some ideas. The code runs in O(m*n) where m is the number of grids and n is the number of points.

A: If you want the grid cells to be square and regular, you most likely want an Octree. If you can relax the square and regular constraint, you can make a k-d-tree.
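The bounding-box-plus-floor-division scheme described above takes only a few lines of Python. This is a sketch in 2D for brevity (a third coordinate works identically), returning the lower corner of each occupied cell:

```python
import math

# "Bounding box + floor division" grid: shift by the minimum co-ordinate,
# divide by the spacing, and floor to get each point's cell index.
def occupied_cells(points, spacing):
    xmin = min(p[0] for p in points)
    ymin = min(p[1] for p in points)
    cells = {
        (math.floor((x - xmin) / spacing), math.floor((y - ymin) / spacing))
        for (x, y) in points
    }
    # Return the lower-left corner co-ordinate of each occupied cell.
    return sorted((xmin + i * spacing, ymin + j * spacing) for (i, j) in cells)

pts = [(0.2, 0.1), (0.9, 0.4), (2.3, 1.7)]
print(occupied_cells(pts, 1.0))  # two occupied cells out of the 3x2 box
```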
{ "language": "en", "url": "https://stackoverflow.com/questions/7507696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: DelayedJob: How to solve the problem "Job failed to load"? I am using Ruby on Rails 3.1.0 and DelayedJob. As many people on the web I get the "Job failed to load: uninitialized constant Syck::Syck" error, but I think I discovered at least what generates the error (in my case). I have an ActiveModel like the following: class Contact include ActiveModel::Conversion include ActiveModel::Validations include ActiveModel::Dirty extend ActiveModel::Naming extend ActiveModel::Translation attr_accessor :full_name, :email, :subject, :message def initialize(attributes = {}) attributes.keys.each do |attr| instance_variable_set "@" + attr.to_s, attributes[attr.to_sym] end end validates_presence_of :full_name, :email, :subject, :message def persist @persisted = true end def persisted? false end end The related controller action is: def contact @contact = Contact.new(params[:contact]) if @contact.valid? ::Contact::Mailer.delay.contact(@contact) respond_to do |format| format.html { redirect_to root_path } end else respond_to do |format| format.html { render :action => :contact } end end end I noted that my problem with the "famous"\"infamous" Job failed to load: uninitialized constant Syck::Syck happens only if I run the @contact.valid?. If I re-implement the above controller action like this: def contact @contact = Contact.new(params[:contact]) ::Contact::Mailer.delay.contact(@contact) respond_to do |format| format.html { redirect_to root_path } end end all work as expected: I don't get the error and the e-mail is successfully sent. In few words, when I run @contact.valid? inside the controller action (I can run that also without using the if ... else statement) it generates the Job failed to load error. I really do not understand this strange behavior related to the DelayedJob gem and the valid? method. Why it happens? How can I solve the problem? 
More info at DelayedJob: “Job failed to load: uninitialized constant Syck::Syck” UPDATES If I debug the @contact.errors in both cases using or not using the @contact.valid? method... ... when I use the @contact.valid? method (DelayedJob does not work) I get #<ActiveModel::Errors:0x00000101759408 @base=#<Contact:0x000001017597f0 @full_name="Sample name", @email="foo@bar.com", @subject="Sample subject", @message="Sample message content.", @validation_context=nil, @errors=#<ActiveModel::Errors:0x00000101759408 ...>>, @messages={}> ... when I do not use the @contact.valid? method (DelayedJob works) I get #<ActiveModel::Errors:0x00000101759408 @base=#<Contact:0x000001017597f0 @full_name="Sample name", @email="foo@bar.com", @subject="Sample subject", @message="Sample message content.", @errors=#<ActiveModel::Errors:0x00000101759408 ...>>, @messages={}> Note that in the second case the @validation_context=nil is not present and that in both cases there is a "nested" <ActiveModel::Errors:0x0000010175940 ...> statement. Is that a bug? A: I found a solution that works for me. You can redefine the 'Object#to_yaml_properties' method within your Contact class to only include the properties you need. And thus exclude the 'errors' variable. def to_yaml_properties ['@full_name', '@email', '@subject', '@message'] end Hope this helps.
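The to_yaml_properties fix is Ruby-specific, but the underlying idea, telling the serializer to persist only the plain attributes and skip helpers that cannot round-trip (here, the errors object with its validation context), exists elsewhere too. A rough Python analogue using pickle's __getstate__ hook; the Contact class and its fields here are illustrative, not the Rails model:

```python
import pickle

# Analogue of overriding to_yaml_properties: __getstate__ tells pickle to
# serialize only the listed attributes, skipping anything (like a
# validation-errors helper) that cannot be serialized.
class Contact:
    def __init__(self, full_name, email, subject, message):
        self.full_name = full_name
        self.email = email
        self.subject = subject
        self.message = message
        self.errors = lambda: None  # stand-in for an unserializable helper

    def __getstate__(self):
        keep = ("full_name", "email", "subject", "message")
        return {k: getattr(self, k) for k in keep}

c = pickle.loads(pickle.dumps(Contact("A", "a@b.com", "Hi", "Body")))
print(c.full_name, hasattr(c, "errors"))  # A False
```

Without __getstate__, the dump would fail on the lambda, just as the DelayedJob worker fails to load the @errors instance variable.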
{ "language": "en", "url": "https://stackoverflow.com/questions/7507697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Tree view in VB.Net Is there any way to find a node in treeview using xpath? Also I wanted to expand only the selected path of the node in treeview when found. Sample: + A + B - C + C.1 + C.2 - C.3 - C.3.1 + C.4 + D + E Problem: Find C.3.1 using "C/C.3/C.3.1" and when found expand only C/C.3/C.3.1
{ "language": "en", "url": "https://stackoverflow.com/questions/7507705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a way to dynamically wrap/intercept HtmlHelper extension methods. Think decorator pattern I would like wrap/intercept HtmlHelper extension methods (TextBox, Hidden, etc) provided in System.Web.Mvc.Html to enable reuse of the same Partial Views in 2 separate use cases. Ex of Partial: @model BlogEntry @Html.TextBoxFor(t => t.Title) @Html.TextAreaFor(t => t.Body) @* Etc *@ The caller of the Partial will know the context (i.e. whether to override or leave the MS imp). The reason for overriding are various. For example: to use in JQuery templates, where the output for the value attribute would be "${Title}" on the example above or to add Html5 meta data. A: I'm not sure what your concerns are with adding your own extension methods -- why you'd have to "create your own base view page and completely take over." You can call your custom helpers in any page just as you would the built-in helpers: @Html.TextBoxFor(x => x.Name) @Html.MyTextBoxFor(x => x.Name) Furthermore, you can add some sort of flag parameter to your method to control whether it just executes the default functionality or something custom. When you create your own extension methods, you'll have to either change the signature or the name of the method. I used to use unique names, but ultimately found that I really wanted to be able to quickly discern my own implementations from the default, so I sometimes use: @Html.Custom().TextBoxFor(… @Html.Custom().TextAreaFor(… Basically, you create one new extension method that takes an HtmlHelper<T> and returns a CustomHelpers<T>. 
public static CustomHelpers<TModel> Custom<TModel>(this HtmlHelper<TModel> html) { return new CustomHelpers<TModel>(html); } The CustomHelpers<T> class defines all of your own implementations: public class CustomHelpers<TModel> { private readonly HtmlHelper<TModel> _html; public CustomHelpers(HtmlHelper<TModel> html) { _html = html; } public MvcHtmlString TextBoxFor<TProperty>(Expression<Func<TModel, TProperty>> expression) { // because you have a reference to the "native" HtmlHelper<TModel>, you // can use it here and extend or modify the result, almost like a decorator; // you can get the "native" result by calling _html.TextBoxFor(expression) } So, your "override" of TextBoxFor can receive a flag from your partial view to determine whether it returns the native result or something specific to the context. Again, the CustomHelpers<T> class is entirely optional. You'll be adding a flag parameter or something similar to the signature of your custom helpers, so you won't collide with existing helpers. The benefit it confers is to potentially namespace your helpers. You could have: @Html.TextBoxFor(… @Html.JQuery().TextBoxFor(… @Html.Mobile().TextBoxFor(… A: There is no way to intercept calls to the built-in helper extension methods. However you could write your own extension methods that do the right thing based on the context.
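The Custom()/CustomHelpers pairing above is the decorator pattern in miniature: keep a reference to the native helper, delegate to it, and amend the markup it returns. A language-neutral sketch in Python (all names and the emitted markup are illustrative; the real implementation would be C# extension methods as shown):

```python
# Decorator-pattern sketch of the CustomHelpers idea: wrap the native helper,
# call through to it, and post-process or replace its output.
class NativeHelpers:
    def text_box_for(self, name):
        return f'<input type="text" name="{name}" value="" />'

class CustomHelpers:
    def __init__(self, native):
        self._native = native  # reference to the wrapped ("native") helper

    def text_box_for(self, name, template=False):
        if template:
            # e.g. emit a jQuery-template placeholder instead of a real value
            return f'<input type="text" name="{name}" value="${{{name}}}" />'
        return self._native.text_box_for(name)  # fall through to the native one

html = CustomHelpers(NativeHelpers())
print(html.text_box_for("Title"))                 # native output
print(html.text_box_for("Title", template=True))  # value="${Title}"
```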
{ "language": "en", "url": "https://stackoverflow.com/questions/7507711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Javascript wrap div in/remove anchor on hover I have divs I need to wrap in an anchor tag with href="#" on hover, and to remove the anchor when the mouse leaves. When you hover on ps_image it should wrap ps_img with <a href="#">DIV HERE</a>, then unwrap when not hovering.

<div class="ps_image">
    <div class="ps_img">
        <img src="albums/album1/thumb/thumb.jpg" alt="Dachshund Puppy Thumbnail"/>
    </div>
</div>

P.S. it doesn't matter if it only wraps the child div, but that would be nice. Basically I'm trying to get the cursor to be the pointer, without it linking.

A: That sounds like a terrible idea, but...

$('.ps_image').each(function() {
    var psImg = $(this).find('.ps_img'), a;
    $(this).hover(function() {
        a = psImg.wrap('<a href="#" />').parent();
    }, function() {
        a.children().unwrap();
    });
});

jsFiddle. However, you probably have a misunderstanding of CSS. If you want the image to create a cursor, add cursor: pointer to its CSS selector.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: how to fill UITableViewCell TextLabels with NSDictionary value? I am on the last stage of setting up my indexed UITableView for fast scrolling. Once I have finished this I am going to write about my experience, to hopefully help anyone that is trying to achieve the same results as myself. However I have one last step: passing my NSDictionary values into my table view cells' text labels. I'm just not sure how to do it and am hoping someone can provide me with an example. This is what my dictionary looks like:

Dictionary: {
    H = ( Honda, Honda, Honda, Honda, Honda, Honda, Honda );
    M = ( Mazda, Mazda, Mitsubishi, Mitsubishi, Mitsubishi, Mitsubishi, Mitsubishi, Mitsubishi );
    N = ( Nissan, Nissan, Nissan, Nissan, Nissan, Nissan, Nissan );
    T = ( Toyota, Toyota, Toyota );

I have tried a few random things but am pretty much clueless about passing an NSDictionary to the UITableView... here's what I have attempted inside tableView:cellForRowAtIndexPath:

//..
[[cell textLabel] setText:[[arraysByLetter objectAtIndex:indexPath.section] objectAtIndex:indexPath.row]];
return cell;
//..

arraysByLetter is my NSDictionary, and the line of code is giving me this warning: 'NSMutableDictionary' may not respond to 'objectAtIndex:'. Any help would be greatly appreciated.

UPDATE: I have found a great example of how to populate the UITableView cell's textLabel with an NSString that is created like so:

NSString *value = [arraysByLetter objectForKey:[[arraysByLetter allKeys] objectAtIndex:indexPath.row]];
cell.textLabel.text = value;

arraysByLetter being my sorted NSDictionary. However this is causing a "Program received signal EXC_BAD_ACCESS"; not sure why, but trying to work through it.

A: You appear to be confusing an NSMutableDictionary with an NSMutableArray. The former accesses objects by keys, typically NSStrings, and the latter by NSUIntegers.
For example, if dict is an NSMutableDictionary, then you might call [dict objectForKey:@"myKey"]; If array is an NSMutableArray, then you might call [array objectAtIndex:0]; For the example you give, you have an NSDictionary where the object for each key is an NSArray. Therefore you should be calling something like [[dict objectForKey:@"H"] objectAtIndex:0]; to output "Honda" or [[dict objectForKey:@"M"] objectAtIndex:2]; to output "Mitsubishi". A: You are using an array method call on a dictionary. In a dictionary, you get its members by using a key that is associated with that object (member). In an array, you get a member of the array by using the index into the array. You are using an index where a key is needed.
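The two-level lookup the answers describe — section index resolved to a dictionary key, then a row index into that key's array — can be sketched language-neutrally. The Python below is an illustration only: the data, the `sections` list, and the `cell_text` helper are hypothetical stand-ins for the real `sectionIndexTitlesForTableView:` and `tableView:cellForRowAtIndexPath:` code.

```python
# Data shaped like the question's dictionary: section letter -> array of names.
cars = {
    "H": ["Honda", "Honda", "Honda"],
    "M": ["Mazda", "Mazda", "Mitsubishi"],
    "T": ["Toyota", "Toyota"],
}

# Sorted keys play the role of the table's section index titles.
sections = sorted(cars)

def cell_text(section, row):
    # Equivalent of [[dict objectForKey:key] objectAtIndex:row]:
    # first map the section index to a key, then use the row index
    # to pick an element from that section's array.
    key = sections[section]
    return cars[key][row]

print(cell_text(0, 0))  # Honda
print(cell_text(1, 2))  # Mitsubishi
```

The question's crash comes from skipping the first step and indexing the dictionary itself with a row number.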
{ "language": "en", "url": "https://stackoverflow.com/questions/7507713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: SQL Server 2005 Transaction Level and Stored Procedures If I use the command SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED and then execute a stored procedure using EXEC storedProcedureName on the same connection, will the stored procedure use the isolation level stated previously, or will it use the default one? If I want to force every stored procedure to use one isolation level, do I have to include the same statement (SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED) at the top of each procedure? PS.: the system is built on top of .NET 2.0 and proprietary third party products with limitations, hence the need for these workarounds. A: The stored procedure will use the transaction isolation in effect when called. If the stored procedure itself sets an explicit isolation level, this will be reset when the stored procedure exits. (Edit: Just checked and this is contrary to what BOL says "... it remains set for that connection until it is explicitly changed" but it can be seen from the below) CREATE PROC CheckTransLevel AS DECLARE @Result varchar(20) SELECT @Result = CASE transaction_isolation_level WHEN 0 THEN 'Unspecified' WHEN 1 THEN 'ReadUncommitted' WHEN 2 THEN 'ReadCommitted' WHEN 3 THEN 'Repeatable' WHEN 4 THEN 'Serializable' WHEN 5 THEN 'Snapshot' END FROM sys.dm_exec_sessions WHERE session_id = @@SPID PRINT @Result GO CREATE PROC SetRCTransLevel AS PRINT 'Enter: SetRCTransLevel' SET TRANSACTION ISOLATION LEVEL READ COMMITTED EXEC CheckTransLevel PRINT 'Exit: SetRCTransLevel' GO SET NOCOUNT ON SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED EXEC CheckTransLevel EXEC SetRCTransLevel EXEC CheckTransLevel Results ReadUncommitted Enter: SetRCTransLevel ReadCommitted Exit: SetRCTransLevel ReadUncommitted
{ "language": "en", "url": "https://stackoverflow.com/questions/7507714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: 'SQLSTATE 57017' (Character Conversion Problem) for SQL UDF Java program on the iSeries I followed the instructions at http://www.itjungle.com/mpo/mpo100903-story01.html The UDF looks like this CREATE FUNCTION re_Test(input VARCHAR(500), regex VARCHAR(500)) RETURNS INTEGER EXTERNAL NAME 'UDFs.re_Test' LANGUAGE Java PARAMETER STYLE Java FENCED NO SQL RETURNS NULL ON NULL INPUT SCRATCHPAD DETERMINISTIC And the Java method in UDFs that is being called looks like this public static int re_Test(String input, String regex) throws Exception { // returns number of occurrences Pattern pattern = Pattern.compile(regex); Matcher matcher = pattern.matcher(input); int noFound = 0; while (matcher.find()) noFound++; return noFound; } If I run the function from SquirrelSQL select re_test('abcdeab','ab') from sysibm/sysdummy1 it works fine; however, if I run STRSQL from the AS/400 5250 console I get this error in the job log: SQLSTATE 57017 I am able to fix this problem by running CHGJOB and entering 37 instead of 65535 in the CCSID field. This is hardly desirable, as I would need to do this every time I logged on. Anyone know how to fix this problem? A: Your user profile is probably set with CCSID(*SYSVAL), which means your job will be started based on the system value QCCSID. Consider changing your user profile to CCSID(37).
{ "language": "en", "url": "https://stackoverflow.com/questions/7507719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Error installing NPM for node.js I'm trying to install npm on Ubuntu 11.04 using the "git all the way" method found in this gist. I keep getting this error after running sudo make install on npm $ sudo make install ! [ -d .git ] || git submodule update --init --recursive node cli.js install -g -f bash: node: command not found make: *** [install] Error 127 I know this is something wrong with bash, but I'm not very good with bash. EDIT: Running the node command in the terminal brings up the node shell as expected. A: Your problem is that when you sudo, you are not sourcing the same bashrc file (or whatever is setting your PATH and/or NODE_PATH), and so the system cannot find node. I would guess that sudo node won't work. You need to export your NODE_PATH as @Ken suggested, WHILE SUDOING: sudo PATH=/path/to/node/bin/dir:$PATH make install EDIT: use PATH as shown, which is what worked in the comments below. A: Make sure you export NODE_PATH before installing npm. export NODE_PATH=/path/to/node/install/dir:/path/to/node/install/dir/lib/node_modules A: Looks like you don't have node installed. You need node first - then the node package manager (NPM). A: This page illustrates the complete node installation including npm (step 4). A: Like someone mentioned - why not just use yum: sudo yum install nodejs npm --enablerepo=epel
{ "language": "en", "url": "https://stackoverflow.com/questions/7507720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Can I use the web stack to develop native apps on Windows Phone 7, just as on Windows 8? Windows 8 apps can be developed using the web stack (JavaScript, HTML, CSS) or the .NET stack (C#, C++, Visual Basic, XAML). What's the situation for Windows Phone 7 development? A: WP7 doesn't have built-in support for building apps with HTML/CSS/JS in the same way as Win8, but you can do a very similar thing with http://www.phonegap.com/ (WP7 support still in beta.) A: Yes you can, with Visual Studio 2011. As mentioned in the Build Conference keynote a week ago, a Windows 8 Metro-style app (using either stack mentioned) can be changed into a Windows Phone app with very few code changes. A: I think this will most likely happen with Windows Phone 8 (Apollo).
{ "language": "en", "url": "https://stackoverflow.com/questions/7507726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Android library having trouble reading stream that openRawResource gives I'm using the plist library to load a plist file: http://code.google.com/p/plist/ I'm using code like so: //InputStream ins = getResources().openRawResource(R.raw.skillsanddrills) InputStream ins = getResources().openRawResource(R.xml.skillsanddrills); //file name is skillsanddrills.plist NSDictionary rootDict; try { rootDict = (NSDictionary)PropertyListParser.parse(ins); ... However I'm getting: java.lang.UnsupportedOperationException: The given data is neither a binary nor a XML property list. ASCII property lists are not supported. I don't believe this is the library's fault, because I got a similar error using another plist library, and the file itself is just a plain XML structure. Why would Android be changing my plist files? Any ideas on how to fix this? The library also accepts files instead of streams, but I can't work out how to create the file path to the file. A: This should be the source code where it is crashing: /** * Parses a property list from a file. It can either be in XML or binary format. * @param f The property list file * @return The root object in the property list * @throws Exception If an error occurred while parsing */ public static NSObject parse(File f) throws Exception { FileInputStream fis = new FileInputStream(f); String magicString = new String(readAll(fis, 8), 0, 8); fis.close(); if (magicString.startsWith("bplist00")) { return BinaryPropertyListParser.parse(f); } else if (magicString.startsWith("<?xml")) { return XMLPropertyListParser.parse(f); } else { throw new UnsupportedOperationException("The given data is neither a binary nor a XML property list. ASCII property lists are not supported."); } } Maybe you should put your plist not in the xml folder but in the raw folder and load it like that: getResources().openRawResource(R.raw.skillsanddrills) If that fails, put it in assets and load it like that: getAssets().open("filename"); If that fails, then your plist might simply be formatted incorrectly.
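The parse(File) snippet above decides the format by sniffing the first eight bytes of the file. That sniffing logic is worth seeing on its own, since it explains the exception: anything that doesn't start with a binary or XML magic string is rejected. A standalone sketch (Python; the function name and return values are illustrative, not part of the library):

```python
def plist_format(data: bytes) -> str:
    # Mirror of the library's check: binary plists begin with
    # "bplist00", XML plists with "<?xml"; anything else (for
    # example the old ASCII plist format) raises, just as the
    # Java code throws UnsupportedOperationException.
    magic = data[:8]
    if magic.startswith(b"bplist00"):
        return "binary"
    if magic[:5] == b"<?xml":
        return "xml"
    raise ValueError("neither a binary nor an XML property list")

print(plist_format(b'<?xml version="1.0"?><plist/>'))  # xml
```

So if Android's resource compiler rewrites an XML resource into its binary AXML form, the first bytes no longer match either magic string — which is why the raw folder (left untouched) works where res/xml does not.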
{ "language": "en", "url": "https://stackoverflow.com/questions/7507728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Getting the current global messages from FacesContext I have a problem. I need to know whether my page has global errors or not. This is because I have 2 different h:messages (error containers) <h:messages id="errorMsgsContainer" layout="table" rendered="true" styleClass="message-container error-message" infoClass="info" errorClass=" error" warnClass="warn-message warn" globalOnly="true"/> <h:messages id="errorMsgsContainerValidation" layout="table" styleClass="message-container error-message-validation" infoClass="info" errorClass="error" globalOnly="false"/> One will show the business related messages and the other will just show the validation messages. There are two messages because of business requirements. When validation error messages are produced, the facelet works fine, because one of the messages tag has the globalOnly="true" attribute-value pair. The problem comes when I've a global-only error. It will appear in both boxes. I want to know if any of there errors are global, so I don't show the validation container till the global errors are fixed by the user on my form. I've tried to get it through the FacesContext with FacesContext.getCurrentInstance().getMessageList().get(i).getSeverity() and some other commands but it does not seem to work. Please help me to solve this problem. How can I get the current global messages list, so I can know if there is any global error? A: When validation error messages are produced, the facelet works fine, because one of the messages tag has the globalOnly="true" attribute-value pair. This is incorrect. You are seeing messages for validation errors, in the other h:messages tag with the globalOnly="false" attribute-value pair. Validation messages always have a client Id, which happens to be the Id of the form element that failed validation, and hence will be displayed in a messages tag that allows non-global messages to be displayed, or has the value of the for attribute set to the applicable Id. 
The problem comes when I've a global-only error. It will appear in both boxes. This is expected behavior. I believe you've confused the meaning of the globalOnly attribute. When the value of the globalOnly attribute is true, only global messages (i.e. messages without a client Id) will be displayed; when the value is false, global messages will be displayed in addition to other messages that are already queued. This would explain why global messages are displayed twice - the first h:messages tag would display the global message because it should display only global messages, and the second would display it because it can display it. Please help me to solve this problem. How can I get the current global messages list, so I can know if there is any global error? If you want to continue having two h:messages tags in your facelet, then you can use a "pseudo-global" Id when queuing your FacesMessages for display, instead of specifying an Id of null; the value of the pseudo-global Id in the following example is inputForm which is a valid client Id (of the form) that would not have any validation messages produced in this case: FacesContext.getCurrentInstance().addMessage("inputForm", new FacesMessage(FacesMessage.SEVERITY_INFO, "Added a global message", null)); You can then add an EL expression to render the messages tag responsible for display of the input-validation messages: <html xmlns="http://www.w3.org/1999/xhtml" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:h="http://java.sun.com/jsf/html" xmlns:f="http://java.sun.com/jsf/core" xmlns:fn="http://java.sun.com/jsp/jstl/functions"> ... <h:form id="inputForm"> ... </h:form> <h:messages id="pseudoGlobalMessages" for="inputForm" globalOnly="true" infoStyle="color:green" errorStyle="color:red" warnStyle="color:orange" /> <h:messages id="clientMessages" rendered="#{fn:length(facesContext.getMessageList('inputForm')) == 0}" globalOnly="false" infoStyle="color:green" errorStyle="color:red" warnStyle="color:orange" /> ...
Note the use of the globalOnly attribute in only one messages tag. The same messages tag is also not displayed if a pseudo-global message is queued up for display, via the EL expression specified in the rendered attribute. You can also use the client Id of a hidden form element created specifically to direct all pseudo-global messages, instead of the form's client Id. A: Try this: rendered="#{empty facesContext.getMessageList('inputForm')}" instead of: rendered="#{fn:length(facesContext.getMessageList('inputForm')) == 0}" in Vineet Reynolds's answer.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to make hybrid JPG / PNG with 8-bit alpha I would like to put a big (almost full screen) alpha transparent image on a website, but I have the following problem: * *if I use JPG the size and compression is OK, but I cannot make it transparent (100 kB) *if I use PNG-24 the size is HUGE (3-4 MB) *if I use PNG-24 and http://www.8bitalpha.com/ to convert it to PNG-8 then the size is smaller but I cannot reproduce the original graphics in 256 colors. The size is still quite big (700 kB) What I was thinking about is what if I create PNG-8 files just for the transparent regions and a JPG image for the non-transparent regions. And use absolute positioning to move things into place. Has anyone done anything like this? Or an other idea, but that's something I really don't have experience with: is it possible to use a JPG image and combine it with alpha transparency from an 8-bit PNG? I mean using JS or CSS3 or Canvas or something what I have never used before? Here is the page where I'm using PNG-8 now, but it's quite big (700 kb) and some colors are lost. http://ilhaamproject.com/sand-texture-2/ A: I've used the same JPG + PNG trick before with large, transparent background images. Take your large image and cut it up into 2 types of rectangular pieces: * *Those that don't need transparency (save as JPG) *Those that do need transparency (save as PNG) The goal is to get as much image detail as possible saved as JPG. 
Next you'll need to piece everything back together using relative and absolute positioning: <div class="bg"> <div class="content"> http://slipsum.com </div> <div class="section top"></div> <div class="section middle"></div> <div class="section bottom"></div> </div> .bg { width: 600px; /* width of your unsliced image */ min-height: 800px; /* height of your unsliced image, min-height allows content to expand */ position: relative; /* creates coordinate system */ } /* Your site's content - make it appear above the sections */ .content { position: relative; z-index: 2; } /* Define the sections and their background images */ .section { position: absolute; z-index: 1; } .section.top { width: 600px; height: 200px; top: 0; left: 0; background: url(top.png) no-repeat 0 0; } .section.middle { width: 600px; height: 400px; top: 200px; left: 0; background: url(middle.jpg) no-repeat 0 0; } .section.bottom { width: 600px; height: 200px; top: 600px; left: 0; background: url(bottom.png) no-repeat 0 0; }
{ "language": "en", "url": "https://stackoverflow.com/questions/7507731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Trying to give groups the ability to have their own URL in Grails (similar to how you can browse to facebook.com/{username}) I am developing a group-based web application in Grails (1.3.7) and one of the fields I have is groupUrl for my Group domain. The theory behind it is I want to give users the ability to browse to http://www.myapp.com/{userDefinedGroupUrl} I also want to be able to have www.myapp.com/{userDefinedGroupUrl}/$action?/$id? However, with the way I have it implemented now, I have to hardcode all of my other controllers in the mappings as well so they are matched first and executed properly. Right now, I have it working with something like mappings = { "/group/$action?/$id?"(controller:"group") "/user/$action?/$id?"(controller: "user") etc.. "/$groupUrl?/$action?/$id?"(controller: "group") "/$groupUrl?/events/$action?/$id?"(controller: "groupEvents") } I think it actually is working right now (I didn't test it too thoroughly yet) but I was wondering if there is a better, more efficient way of accomplishing this. Any advice would be appreciated. Thanks. A: First, it might be better to put all your groups under a sub-path, which makes managing the controllers a lot easier, like this: mappings = { "/$controller/$action?/$id?"() "/g/$groupUrl?/$action?/$id?"(controller: "group") "/g/$groupUrl?/events/$action?/$id?"(controller: "groupEvents") } Second, Grails URL mappings allow for dynamic controllers and actions, so you could use a little code to select the correct controller, like so: mappings = { "/$controller/$action?/$id?"() "/g/$groupUrl/$group_c?/$action?/$id?" { controller = { (params.group_c in [null, '', 'group']) ? 'group' : 'group' + params.group_c.capitalize() } } } That's not perfect, but basically it allows for the following URLs: * */g/mygroup/ -> GroupController.index */g/mygroup/group/view/45 -> GroupController.view */g/mygroup/event/list/64 -> GroupEventController.list It does not, however, allow for GroupController actions to be represented without the /group/ path. You could get around this by hard-coding a list of actions on the GroupController, and if group_c is in that list, bump group_c to action and bump action to id. That would be kinda ugly.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: iPhone JSON data size limitations What's the size limitation for JSON data payloads for an iPhone app? I'm creating a time entry application and am delivering down to the iPhone app all of the time that has been entered for a week for the iPhone user. The JSON data sizes have been around 15-20 KB for a week's worth of data (dates, project names, hours by day, etc). Is this too large? What's a good size range for JSON data transferred down to iPhone devices? Thanks A: Basically, for the time being, all of the JSON parsing libraries on the iPhone are third-party. There are several of them, so their specific memory limits are going to vary from library to library, but I think any of them should be able to handle 15-20 KB, since any JSON parser that couldn't handle that much data would be of little use to anyone. For the record, I have usually used JSONKit with no problem with sizes in the hundreds of kilobytes. In the case where you are downloading the whole JSON file first and then parsing it later, it usually seems to give the best performance. Regarding memory usage, if, in the future, you find that your JSON files are so big that you can't parse them, you can also try switching to a streaming parser, which will parse the results as they come in from the network. Some JSON libraries like YAJL support this feature.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Sending data and accessing components between multiple forms in Visual Studio I am working on a project with a total of 6 forms. One form is the main form and the other 5 are part of its options menu. The forms on the options menu are for changing settings and modifying data that is used in the main form. For example, the user clicks on an option to change settings, a new form pops up with the current settings, then the user can modify the settings and apply the changes. When the user returns to the main form the settings should be changed. What I would like to do is send the current settings data from the main form into the components of the new form; after the user applies the changes and closes the new form, the data is sent back and updates the main form's components. One specific case I am trying to handle is putting data into a listbox on the other form from an arrayList created and initialized in the main form. I have been trying code like this example: string data = somestring; newForm.Controls[2].Add(data); But I do not seem to have access to methods like Add and Insert for newForm.Controls[2], so the code fails. My other thought was to create a method in the new form and send the data to it that way, but the method is not recognized as existing in the main form. I get this error: 'System.Windows.Forms.Form' does not contain a definition for 'AddToList' A: To send data, what you can do is take in parameters in the constructor of your option form, e.g. class OptionForm : Form { public OptionForm(string data) { //put your data in to the form } } Then, the simplest method to get data back out is to include public properties in your OptionForm class containing whatever information you need. In fact, with the public properties, you could also set the properties this way. Hope this is what you meant
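The constructor-in, public-property-out pattern the answer recommends is not WinForms-specific and can be sketched in a few lines. The Python below is an illustration only; the class, method, and attribute names are hypothetical stand-ins for the real dialog and its controls:

```python
class OptionForm:
    # Sketch of the pattern: seed the dialog with the current
    # settings via the constructor, let the user edit them, and
    # have the caller read the result back from a public
    # attribute after the dialog closes.
    def __init__(self, current_settings):
        # Copy so the main form's state isn't mutated until Apply.
        self.settings = dict(current_settings)

    def apply_changes(self, key, value):
        # Stands in for the user editing a control and clicking Apply.
        self.settings[key] = value

# The "main form" side:
form = OptionForm({"units": "metric", "theme": "light"})
form.apply_changes("units", "imperial")
print(form.settings["units"])  # imperial
```

This also sidesteps the question's compile error: the caller never pokes at the other form's Controls collection, it only talks to a typed interface the option form exposes.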
{ "language": "en", "url": "https://stackoverflow.com/questions/7507748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a way to remove a Service Activity from the Calendar area in a Microsoft Dynamics CRM Workplace? I need to remove a Service Activity link which is located on the right-hand side of the Calendar area under the "Create a New:" section in Workplace. However, I see no way of doing it other than changing /workplace/home_calendar.aspx, which I'm not allowed to do. Is there any other possible way? Thanks a lot in advance. A: It is not possible in a supported way. It is part of the default entities, which you can't remove. Normally you would deny the permission for unneeded entities. This is not applicable for the service activity, since you can only grant permission for 'activity', which bundles all activity types. Please see also my answer at Disable Service Activity in CRM 2011
{ "language": "en", "url": "https://stackoverflow.com/questions/7507749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Drop array values if they exist in a database? I have a list of URLs stored in a database, and I want to run a bit of code that checks an array of URLs against what is stored in the database. If the value exists, I want that value dropped from the array. So far, I have a table defined like this: CREATE TABLE links ( link_id INT(10) NOT NULL AUTO_INCREMENT, url VARCHAR(255) NOT NULL, last_visited TIMESTAMP, PRIMARY KEY (link_id), UNIQUE KEY (url) ) And basically I'm just trying to insert the data against the unique key via an INSERT command, and if it fails, I'd like to remove that array value. Is this possible? My bad code: foreach ($urlArray as $url) { $sql = "INSERT INTO linkz (url, last_visited) VALUES ('".$url."', NOW())"; if (!mysql_query($sql,$con)){ // remove array here somehow? } } Is there a better way? Any help would be appreciated, thanks! Tre A: You can drop a value from an array using unset. To do this, you need to know the key, so you might consider modifying your foreach to include the key: foreach ($urlArray as $key => $url) { ... // Remove the item from the array unset($urlArray[$key]); } A: I suppose that's one way to do it. There are several issues: * *Non-normalized URL notation *Changing a table to test for existence The first issue is that there are a large number of ways of expressing any given URL. For example: http://www.example.com/somepage can be written http://www.example.com/%73omepage The other is that philosophically speaking, a pure test for some data in a database should not change the database, whether or not it already exists. A simple SELECT * FROM links WHERE url=whatever is the cleaner approach. Presumably you have an unstated goal of collecting URLs. @mfonda has already answered the literal question.
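Both answers boil down to the same step: filter the array against what the database already has, rather than letting each INSERT fail. A minimal sketch of that filtering (Python here; the in-memory set is a hypothetical stand-in for the result of a single SELECT url FROM links):

```python
def drop_existing(urls, known_urls):
    # known_urls stands in for one SELECT against the links table.
    # In real code you would query once up front, not once per URL,
    # and the table's UNIQUE KEY remains the safety net against a
    # race between the check and the insert.
    known = set(known_urls)
    return [u for u in urls if u not in known]

new_urls = drop_existing(
    ["http://a.example/", "http://b.example/", "http://a.example/"],
    ["http://a.example/"],
)
print(new_urls)  # ['http://b.example/']
```

The second answer's normalization caveat still applies: "http://www.example.com/somepage" and "http://www.example.com/%73omepage" compare unequal as strings, so URLs should be canonicalized before any membership test like this.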
{ "language": "en", "url": "https://stackoverflow.com/questions/7507752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Getting Reachability changed notifications in the background I'm developing an app that uploads media to a server. I use Andrew Donoho's Reachability class to determine if I've got reachability via WiFi, WWAN, or if the network is not reachable. (Users can choose if they upload media only over WiFi, or over WiFi and WWAN.) If the application enters the background, uploads should continue. But if the user loses WiFi connectivity while the app is in the background, uploads should stop. This is why I need to find a way to get Reachability changed notifications in the background, so I can stop uploading if the user loses WiFi connectivity. I've looked and looked but haven't seen anyone talking about this. It seems it hasn't been a very widespread need. A: You can refer to Apple's Reachability sample code. A: Instead of depending on Reachability, I would handle the error of not being able to reach the server in general, regardless of whether or not internet is available. Depending on how the server side is implemented, you may need to re-upload all of the data or continue uploading the remaining part of the data. In any case, updating the bookkeeping locally about what was uploaded, or noting that you will need to retry in the future, can be done in the error handling delegate: - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error This delegate method is on NSURLConnection and will be called when an error occurs, like a timeout or loss of connection.
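The second answer's advice — react to the upload failing rather than to a Reachability change — amounts to a loop that stops on the first connection error and keeps the remainder for a later retry. A language-neutral sketch (Python; send() is a hypothetical stand-in for the network call, and ConnectionError stands in for connection:didFailWithError:):

```python
def upload_pending(items, send):
    # Try each pending item in order; on a connection failure,
    # stop immediately and remember what is still outstanding so
    # the next run (or Reachability change) can resume from there.
    done, pending = [], list(items)
    while pending:
        try:
            send(pending[0])
        except ConnectionError:
            break  # connectivity lost mid-run: keep the rest queued
        done.append(pending.pop(0))
    return done, pending

uploaded = []
def flaky_send(item):
    # Simulated transport: WiFi drops just before item "c".
    if item == "c":
        raise ConnectionError("network gone")
    uploaded.append(item)

done, pending = upload_pending(["a", "b", "c", "d"], flaky_send)
print(done, pending)  # ['a', 'b'] ['c', 'd']
```

The bookkeeping (the returned pending list) is exactly what the answer suggests persisting locally in the error delegate.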
{ "language": "en", "url": "https://stackoverflow.com/questions/7507756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Speeding up insert query over 2 databases I have this query below which I am getting certain columns from 1 database and I am then inserting them into another table in another database. I will then Delete the table I am copying from. At the moment it takes 5 minutes and 36 seconds to copy a bit over 5300 records. Is there any way I can improve the speed? Declare @cursor cursor, @Firstname nchar(50), @MiddleInitial nchar(5), @Surname nchar(50), @EmailAddress nchar(100), @DOB nchar(8), @Sex char(1), @altEmail nchar(100) set @cursor = cursor for select Firstname, MiddleInitial, Surname, HomeEmailAddress, DateOfBirth, Sex, WorkEmailAddress from cs_clients open @cursor fetch next from @cursor into @FirstName, @MiddleInitial, @Surname, @EmailAddress, @DOB, @Sex, @altEmail while @@fetch_status = 0 begin set nocount on use hrwb_3_0 declare @Password nvarchar(100), @EncryptedText nvarchar(100) exec L_Password_GetRandomPassword @Password output, @EncryptedText output declare @userID nvarchar(100) exec L_Password_GetRandomPassword @userID output, @EncryptedText output set nocount off set @EmailAddress = isnull(@EmailAddress, @altEmail) insert into A_User values ('CS', 'CLUBSAIL', rtrim(@userID), rtrim(@Password), rtrim(@Surname), rtrim(@FirstName), rtrim(@MiddleInitial), 15, 'NA', 'NA', '', rtrim(@EmailAddress), rtrim(@DOB), 1, 0, 1, 0, '', rtrim(@Sex), '') fetch next from @cursor into @FirstName, @MiddleInitial, @Surname, @EmailAddress, @DOB, @Sex, @altEmail end A: It's slow because you are doing them one at a time. See here for some methods of doing multiple rows at once: http://blog.sqlauthority.com/2008/07/02/sql-server-2008-insert-multiple-records-using-one-insert-statement-use-of-row-constructor/ Or create a temporary table on the local database then use that to insert everything at once (i.e. in one statement). 
A: If you are regularly performing this kind of database to database transfer, you should probably look at DTS or SSIS (depending on which version of SQL Server you are using). Both technologies are specifically designed to extract, transform and load data between different sources and destinations. A: If all you need is to copy the data between tables with the same structure, this should work: INSERT INTO Database2.dbo.Table2 SELECT * FROM Database1.dbo.Table1 If you need to transform the data as well (as your example seems to indicate), you may or may not be able to do it in a single statement, depending on the complexity of the transformation.
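The first answer's point is that 5300 single-row round trips are the bottleneck, and one multi-row statement (the SQL Server 2008 row-constructor syntax from the linked article) removes them. The Python below only assembles such a statement's text to show its shape; it is an illustration, not production code — real code should use parameterized queries, a batched API, or bulk insert rather than string concatenation:

```python
def multi_row_insert(table, columns, rows):
    # Illustration only: builds
    #   INSERT INTO t (c1, c2) VALUES (v, v), (v, v), ...
    # Quoting here is naive (single quotes doubled); do NOT build
    # SQL from strings like this in production.
    col_list = ", ".join(columns)
    value_list = ", ".join(
        "(" + ", ".join("'" + str(v).replace("'", "''") + "'" for v in row) + ")"
        for row in rows
    )
    return f"INSERT INTO {table} ({col_list}) VALUES {value_list}"

sql = multi_row_insert("A_User", ["Surname", "Firstname"],
                       [("Smith", "Ann"), ("Jones", "Bob")])
print(sql)
```

One such statement per batch of rows replaces thousands of per-row INSERTs, which is where the answer's speedup comes from; on SQL Server 2005 (which lacks row constructors) the equivalent is the single INSERT ... SELECT the third answer shows.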
{ "language": "en", "url": "https://stackoverflow.com/questions/7507757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Get row and column indices of matches using `which()` Say I have some matrix, for example: m = matrix(rep(c(0, 0, 1), 4), nrow = 4) m [,1] [,2] [,3] [1,] 0 0 1 [2,] 0 1 0 [3,] 1 0 0 [4,] 0 0 1 If I run which, I get a list of linear indices: > which(m == 1) [1] 3 6 9 12 I want to get something like matrix indices - each index containing the row and column number: [,1] [,2] [1,] 3 1 [2,] 2 2 [3,] 1 3 [4,] 4 3 Is there any simple function to do this? Moreover, it should somehow contain the row and column names: > rownames(m) = letters[1:4] > colnames(m) = letters[5:7] > m e f g a 0 0 1 b 0 1 0 c 1 0 0 d 0 0 1 but I don't know how, maybe like [,1] [,2] [,3] [,4] [1,] 3 1 c e [2,] 2 2 b f [3,] 1 3 a g [4,] 4 3 d g or, maybe return 2 vectors (for rows and columns), like c b a d 3 2 1 4 e f g g 1 2 3 3 A: You cannot mix numeric and alpha in a matrix, but you can in a data.frame: > indices <- data.frame(ind = which(m == 1, arr.ind = TRUE)) > indices$rnm <- rownames(m)[indices$ind.row] > indices$cnm <- colnames(m)[indices$ind.col] > indices ind.row ind.col rnm cnm c 3 1 c e b 2 2 b f a 1 3 a g d 4 3 d g A: For your first question you need to also pass arr.ind = TRUE to which: > which(m == 1, arr.ind = TRUE) row col [1,] 3 1 [2,] 2 2 [3,] 1 3 [4,] 4 3
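What arr.ind = TRUE does — turning flat, column-major hits into (row, column) pairs — is easy to mirror in other languages, which also makes the ordering of R's output clear. A pure-Python sketch over a nested list (1-based indices and column-major order, to match R; the function name is illustrative):

```python
def which_arr_ind(m, value):
    # Return (row, col) pairs, 1-based, scanning down each column
    # first (column-major), exactly the order R's
    # which(m == value, arr.ind = TRUE) reports.
    nrow, ncol = len(m), len(m[0])
    return [(r + 1, c + 1)
            for c in range(ncol)
            for r in range(nrow)
            if m[r][c] == value]

m = [[0, 0, 1],
     [0, 1, 0],
     [1, 0, 0],
     [0, 0, 1]]
print(which_arr_ind(m, 1))  # [(3, 1), (2, 2), (1, 3), (4, 3)]
```

Note the result reproduces the answer's row/col matrix line for line, including why (3, 1) comes first: linear index 3 is the third entry of the first column.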
{ "language": "en", "url": "https://stackoverflow.com/questions/7507765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: Best methods to group multiple actions together (as one event) At work we use a web application (attask.com) to record projects and tasks etc. It has lots of nice features like comments and status updates (I'll stop before you think I'm selling it!). Anyhow, on the site you can update the status of a project, and this is saved via AJAX. Then shortly after, I could comment on the project. In the update section of this project these two actions will appear together as if they were performed as one... If you were to implement the same sort of functionality, how would you go about it? One method I have thought of is to have a hidden box that stores a GUID when the page loads, unique to that page load. Any AJAX calls would then post this GUID back with the data, and therefore actions could be grouped by it. But I would like to hear other people's ideas, or how they have gone about it if they have had to do something similar. A: Assuming there is no concept of user identity through a sign-in system, a GUID passed to a javascript variable on the rendered page would provide a simple solution. There would be no need to hide it in a container. When an action is performed, the GUID would be sent with the AJAX request. Upon receipt of an action, the server could check if a previous action carried out by a user with the same GUID occurred within x many seconds. If this is the case, the actions would be considered a pair and the 'feed' model would be updated accordingly. My only experience of building something similar was an event planner that did not require user sign-up to select days on a calendar. The selection of days was carried out by AJAX, so it was necessary to determine which calendar in the database to update based on a GUID that was passed to the page on load. It's also worth noting that you don't really need a 'GUID'; any unique string will do.
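The answer's pairing rule — same page-load GUID, next action within x seconds of the previous one in the group — can be sketched as a small server-side grouping function. The Python below is illustrative: the event shape (guid, timestamp, action) and the 60-second window are assumptions, not part of the original design:

```python
def group_actions(events, window_seconds=60):
    # events: (guid, timestamp, action) tuples, assumed time-ordered.
    # An action joins a guid's open group if it lands within
    # window_seconds of that group's most recent action; otherwise
    # it starts a new feed item.
    groups = []
    open_group = {}  # guid -> index of that guid's open group
    for guid, ts, action in events:
        idx = open_group.get(guid)
        if idx is not None and ts - groups[idx][-1][1] <= window_seconds:
            groups[idx].append((guid, ts, action))
        else:
            groups.append([(guid, ts, action)])
            open_group[guid] = len(groups) - 1
    return groups

events = [("g1", 0, "status"), ("g1", 20, "comment"),
          ("g2", 25, "status"), ("g1", 300, "comment")]
print(len(group_actions(events)))  # 3
```

Here the status update and the comment from page load g1 fold into one feed item, while g2's action and g1's much later comment each stay separate — the behavior the question describes on the site.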
{ "language": "en", "url": "https://stackoverflow.com/questions/7507768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: GWT - Annotated time line chart with zooming, scrollbar and navigator I am looking for a GWT extension for displaying a Google Finance-like chart (features like zooming, scroll-bar and navigator are required). I want to stay away from Flash. I have looked at GWT Highcharts. Anything else worth considering? A: Why not Google's own Annotated Timeline? It is already integrated into GWT's Visualization library, and it's EXACTLY the chart you described. Though it is Flash-based, it's the best one around (IMO). I've never been a fan of using unsupported charts like Highcharts, but if you're completely against using Flash, that's probably your best option. A: I'm afraid I'm not familiar with Google Finance charting. For annotated time-lines, the Simile Timeline project is just great. It's really mature, pure javascript, and has lots of documentation & examples. Plus, there is a set of GWT wrappers available for it on Google Code. But it won't really show a conventional xy line chart. Update: Aaah, I've just seen there is a GWT wrapper for the Simile TimePlot, which does look more like an xy line chart. A: The GWT annotated time line is certainly not supported. You will find that the Flash control is a closed black box, and errors and faults just drift away. No messages, no way out, and no one will respond. While at least with Highstock you have some hope, as you have the source, and the folks there at least respond. Also, the Flash control does not seem to play nice with Apple, so you won't have support for Chrome or Safari on a Mac (it does seem to work with Firefox on the Mac). A: I found dygraphs to be a good solution. It has a GWT wrapper as well. An example here
{ "language": "en", "url": "https://stackoverflow.com/questions/7507771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: MVC3 View With Abstract Property Consider the following. Models public class DemographicsModel { public List<QuestionModel> Questions { get; set; } } //Does not work at all if this class is abstract. public /*abstract*/ class QuestionModel { //... } public class ChooseOneQuestionModel : QuestionModel { //... } public class ChooseManyQuestionModel : QuestionModel { //... } public class RichTextQuestionModel : QuestionModel { //... } public class TextQuestionModel : QuestionModel { //... } Controller [HttpPost] public ActionResult Demographics(DemographicsModel model) { //... } My view would have a DemographicsModel with numerous questions of all the varying types shown above. After the form is completed and POSTed back to the server, the Questions property of the DemographicsModel is re-populated with the correct number of questions, but they are all of type QuestionModel instead of the concrete type. How do I make this thing understand what type to instantiate? A: Frazell's answer would have worked if my root model was the one that was abstract. However, that's not the case for me, so what ended up working was this: http://mvccontrib.codeplex.com/wikipage?title=DerivedTypeModelBinder&referringTitle=Documentation A: ASP.NET MVC's built-in Model Binding doesn't support abstract classes. You can roll your own to handle the abstract class though; see ASP.NET MVC 2 - Binding To Abstract Model. A: I'm going to jump in here, even though it seems you've already solved it. But would public class DemographicsModel<T> where T : QuestionModel { public List<T> Questions { get; set; } } work? [HttpPost] public ActionResult Demographics(DemographicsModel<ChooseOneQuestionModel> model) { //... }
{ "language": "en", "url": "https://stackoverflow.com/questions/7507772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Help Merging 2 PHP Functions (Wordpress) I'm trying to merge two separate PHP functions into one that I can use in my Wordpress theme. I'm using Wordpress functions to get the post meta of the key "_videoembed", which is then trimmed from a YouTube URL down to the YouTube video ID. I'll include both the earlier functions & how I use them, and then the one I'm working on. All help is appreciated very much! -Matt Previous Method In theme <?php $vidurl = get_post_meta($post->ID, "_videoembed", true ); $youtube_id = getYouTubeIdFromURL($vidurl); $finalid = trim($youtube_id); echo $finalid; ?> In functions.php function getYouTubeIdFromURL($url) { $url_string = parse_url($url, PHP_URL_QUERY); parse_str($url_string, $args); return isset($args['v']) ? $args['v'] : false; } Below is an example of how I'm trying to merge the two: In theme <?php getvidID(); ?> In functions.php function getvidID() { $vidurl = get_post_meta($post->ID, "_videoembed", true ); $url_string = parse_url($vidurl, PHP_URL_QUERY); parse_str($vidurl_string, $args); return isset($args['v']) ? $args['v'] : false; echo $vidurl; } As you can see, the older method I used was quite bulky, and I'm trying to streamline things so that my files are easier to work with and so that there are fewer PHP functions. Thanks! Matt A: I would keep the functional units smaller, instead of creating a larger single function. Splitting a large function into smaller functional units is a known refactoring pattern called extract method. Merging small functions is pretty much the opposite of refactoring, which is aimed at keeping code clean, easier to follow and maintain. A: Function usage: <?php get_vid_id($post->ID); ?> Function itself (note that your merged version passed the wrong variable, $vidurl_string, to parse_str, and the echo after the return was unreachable): function get_vid_id($id) { $vidurl = get_post_meta($id, "_videoembed", true ); $url_string = parse_url($vidurl, PHP_URL_QUERY); parse_str($url_string, $args); return isset($args['v']) ? $args['v'] : false; } But I'd recommend using fewer functions. Keeps your code clean. Maybe create classes?
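The parse_url/parse_str pair above has a direct equivalent in other standard libraries; here is a hedged Python sketch of the same ID extraction (the function name is mine, not WordPress's):

```python
from urllib.parse import urlparse, parse_qs

def youtube_id_from_url(url):
    # parse_url($url, PHP_URL_QUERY) ~ urlparse(url).query
    # parse_str($url_string, $args)  ~ parse_qs(query)
    query = urlparse(url).query
    args = parse_qs(query)
    return args["v"][0] if "v" in args else False

vid = youtube_id_from_url("http://www.youtube.com/watch?v=dQw4w9WgXcQ&feature=related")
```

Like the PHP version, it returns False rather than raising when the URL carries no v parameter.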
{ "language": "en", "url": "https://stackoverflow.com/questions/7507773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: For GUIs using Delphi ObjectPascal, does checking .Visible before (potentially) changing it serve any useful purpose? I inherited a GUI implemented in Delphi RadStudio2007 targeted for Windows XP embedded. I am seeing a lot of code that looks like this: procedure TStatusForm.Status_refresh; begin if DataModel.CommStatus = COMM_OK then begin if CommStatusOKImage.Visible<>True then CommStatusOKImage.Visible:=True; if CommStatusErrorImage.Visible<>False then CommStatusErrorImage.Visible:=False; end else begin if CommStatusOKImage.Visible<>False then CommStatusOKImage.Visible:=False; if CommStatusErrorImage.Visible<>True then CommStatusErrorImage.Visible:=True; end; end I did find this code sample on the Embarcadero site: procedure TForm1.ShowPaletteButtonClick(Sender: TObject); begin if Form2.Visible = False then Form2.Visible := True; Form2.BringToFront; end; That shows a check of Visible before changing it, but there is no explanation of what is served by checking it first. I am trying to understand why the original developer felt that it was necessary to only set the Visible flag if it was to be changed, and did not choose to code it this way instead: procedure TStatusForm.Status_refresh; begin CommStatusOKImage.Visible := DataModel.CommStatus = COMM_OK; CommStatusErrorImage.Visible := not CommStatusOKImage.Visible; end Are there performance issues or cosmetic issues (such as screen flicker) that I need to be aware of? A: As Remy Lebeau said, the Visible setter already checks whether the new value differs. For example, in XE, for TImage, assignment to Visible actually invokes inherited code: procedure TControl.SetVisible(Value: Boolean); begin if FVisible <> Value then begin VisibleChanging; FVisible := Value; Perform(CM_VISIBLECHANGED, Ord(Value), 0); RequestAlign; end; end; So there is no benefit in checking it. However, your code might use some third-party or rare components, and for them things may be different; though, I doubt it.
You can investigate it yourself, using the "Find Declaration" context menu item in the editor (or simply Ctrl+click), and/or stepping into VCL code with the "Use debug dcus" compiler option turned on. A: Like many properties, the Visible property setter checks whether the new value is different from the current value before doing anything. There is no need to check the current property value manually. A: Well, I doubt it will, but maybe there could be issues specifically for forms in recent Delphi versions. The Visible property is redeclared in TCustomForm to ensure the execution of the OnCreate event prior to setting the visibility. It is technically not overridden since TControl.SetVisible is not virtual, but it has the same effect: procedure TCustomForm.SetVisible(Value: Boolean); begin if fsCreating in FFormState then if Value then Include(FFormState, fsVisible) else Exclude(FFormState, fsVisible) else begin if Value and (Visible <> Value) then SetWindowToMonitor; inherited Visible := Value; end; end; This implementation in Delphi 7 still does not require checking the visibility manually, but check this yourself for more recent versions. Also, I agree with Larry Lustig's comment, because the code you provided is not idiomatic. It could better have been written as: procedure TForm1.ShowPaletteButtonClick(Sender: TObject); begin if not Form2.Visible then Form2.Visible := True; Form2.BringToFront; end;
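The guarded-setter pattern in TControl.SetVisible (do nothing unless the value actually changes) is worth knowing outside Delphi too. A minimal Python sketch of the same idea, with invented names:

```python
class Control:
    def __init__(self):
        self._visible = True
        self.change_events = 0  # counts CM_VISIBLECHANGED-style notifications

    @property
    def visible(self):
        return self._visible

    @visible.setter
    def visible(self, value):
        if self._visible != value:   # the guard the VCL setter already has
            self._visible = value
            self.change_events += 1  # stand-in for Perform(CM_VISIBLECHANGED, ...)

c = Control()
c.visible = True    # no-op: value unchanged, no notification fired
c.visible = False   # real change, one notification
c.visible = False   # no-op again
```

Because the guard lives in the setter, callers can assign unconditionally, which is exactly why the manual Visible<>True checks in the question buy nothing.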
{ "language": "en", "url": "https://stackoverflow.com/questions/7507774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Compiling Windows C++ application with long doubles in VS2010 At work we have MSVS2010 Ultimate, and I'm writing a program which runs exhaustive simulations using real numbers. I'm getting non-trivial round-off errors and I've already taken reasonable steps to ensure my algorithm is as numerically stable as possible. I'd like to switch to 128-bit quadruple precision floating point numbers (long double, right?), to see how much of a difference it makes. I've replaced all relevant instances of double with long double, recompiled, and ran my dummy simulation again but have exactly the same result as before. These are my (debug) compiler options as per my project property page in C/C++: /ZI /nologo /W3 /WX- /Od /Oy- /D "_MBCS" /Gm /EHsc /RTC1 /GS /fp:precise /Zc:wchar_t /Zc:forScope /Fp"Debug\FFTU.pch" /Fa"Debug\" /Fo"Debug\" /Fd"Debug\vc100.pdb" /Gd /analyze- /errorReport:queue My dev CPU is a Core2 Duo T7300 but the target machine will be an i7. Both installations are Windows 7 64-bit. A: You could switch to a non-Microsoft compiler such as gcc, Borland, or Intel. Those all recognize long double as 80-bit extended precision, the native internal format of the 8087.
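Before switching types or compilers, it can help to confirm the round-off really comes from accumulation across many operations rather than a single one. This small Python check (Python floats are 64-bit doubles, the same width MSVC uses for double) shows naive summation drifting where compensated summation does not:

```python
import math

# 0.1 is not exactly representable in binary floating point, so a
# naive running sum accumulates a small error over many additions,
# while a compensated (Kahan-style) sum does not.
naive = sum(0.1 for _ in range(10))
compensated = math.fsum(0.1 for _ in range(10))
```

If compensated summation (or pairwise summation) fixes your simulation, that is often cheaper than chasing a wider floating-point type.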
{ "language": "en", "url": "https://stackoverflow.com/questions/7507775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Android system application DEVICE_POWER permission error I am trying to use the goToSleep() method to put the phone into deep sleep. The program was installed into the /system/app directory, so Android System Info says that it is a system application, but if I try to call goToSleep() I get this error: Neither user 10085 nor current process has android.permission.DEVICE_POWER. Code sampling: IPowerManager mPowerManager = IPowerManager.Stub.asInterface(ServiceManager.getService("power")); long time = SystemClock.uptimeMillis() + 1000; try { mPowerManager.goToSleep(time); } catch (RemoteException e) { Toast.makeText(getApplicationContext(), "error: " + e.toString(), Toast.LENGTH_LONG).show(); e.printStackTrace(); } AndroidManifest.xml <permission android:name="android.permission.DEVICE_POWER"/> <uses-permission android:name="android.permission.DEVICE_POWER" /> <permission android:name="android.permission.REBOOT"/> <uses-permission android:name="android.permission.REBOOT"/> As I understand it, if I run a system application then I can gain access to all hidden or system Android functions, or am I wrong? Things that I tried in order to run the app as a system application: * *copy the file to /system/app *chown 0:0 *chmod 4755 *chmod ugo+s Maybe someone else has already encountered this problem. Any suggestions would be helpful. A: Looking in the source code, I see you need a signature permission. I think it's not enough to be a system app; you need to be signed with the same cert as the ROM, the one in /system/framework/android/framework-res.apk A: The DEVICE_POWER permission is not accessible by third-party applications like yours. public static final String DEVICE_POWER Added in API level 1 Allows low-level access to power management. Not for use by third-party applications. Constant Value: "android.permission.DEVICE_POWER" A: Just remove the first and third lines in the manifest above and it should be fine. You should call ... and not .... Your code looks fine.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to implement database locking across several functions in a CodeIgniter model? I'm creating a system that involves the reservation of tickets by many users within a short period of time, with only a certain number of reservations possible in total. Say 600 tickets available, potentially all being reserved in a 3 hour period or less. Ideally I want to ensure the limit of reservations is not reached, so before creating a reservation I am checking whether it's possible to make the reservation against the number of tickets available. Crucially, I need to make sure no updates take place between that check and assigning the tickets to a user, to be sure the ticket limit won't be exceeded. I'm trying to use mysql table write locks to achieve this, but am running into problems implementing this within the CodeIgniter framework. Within the model handling this I've created several functions, one for creating the reservation and others for counting numbers of different types of tickets. The problem is that they don't seem to be sharing the same database session, as the ticket-counting functions are locking up. The order of execution is * *run $this->model_name->create_reservation in controller *run lock query in model_name->create_reservation *call counting method in model_name->create_reservation *counting function (which is a method in the model_name class) locks up, presumably because it is using a different database session? The database library is loaded in the model __construct method with $this->load->database(); Any ideas? A: In mysql, you run these commands on your DB handle before running your queries and the tables will auto-lock: begin work; You then run your queries, or have CodeIgniter run your various selects and updates using that db handle. Then you either commit; or rollback; Any rows you select from will be locked and can't be read by other processes. If you specifically want the rows to still be readable, you can do: SELECT ... LOCK IN SHARE MODE From the MySQL docs: http://dev.mysql.com/doc/refman/5.5/en/select.html If you use FOR UPDATE with a storage engine that uses page or row locks, rows examined by the query are write-locked until the end of the current transaction. Using LOCK IN SHARE MODE sets a shared lock that permits other transactions to read the examined rows but not to update or delete them. See Section 13.3.9.3, “SELECT ... FOR UPDATE and SELECT ... LOCK IN SHARE MODE Locking Reads”. Another person said this in the comments already, but from the CI docs: $this->db->trans_start(); $this->db->query('AN SQL QUERY...'); $this->db->query('ANOTHER QUERY...'); $this->db->query('AND YET ANOTHER QUERY...'); $this->db->trans_complete(); trans_start and trans_complete will run those queries for you on your handle... there is probably a trans_rollback too...
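The critical section the question describes (count, check against the cap, insert, all under one lock) can be sketched as follows. sqlite3 stands in for MySQL here and the schema is invented, so this is an outline of the pattern rather than CodeIgniter code:

```python
import sqlite3

LIMIT = 600  # total tickets available

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; we manage txns
conn.execute("CREATE TABLE reservations (user_id INTEGER)")

def reserve(conn, user_id, qty=1):
    # BEGIN IMMEDIATE takes the write lock up front, so no other writer
    # can slip in between the COUNT and the INSERTs.
    conn.execute("BEGIN IMMEDIATE")
    try:
        (count,) = conn.execute("SELECT COUNT(*) FROM reservations").fetchone()
        if count + qty > LIMIT:
            conn.execute("ROLLBACK")
            return False
        conn.executemany(
            "INSERT INTO reservations (user_id) VALUES (?)",
            [(user_id,)] * qty)
        conn.execute("COMMIT")
        return True
    except Exception:
        conn.execute("ROLLBACK")
        raise

ok = reserve(conn, user_id=1, qty=599)
ok2 = reserve(conn, user_id=2, qty=2)   # would exceed 600: refused
ok3 = reserve(conn, user_id=3, qty=1)   # exactly fills the cap
```

The important property is that the check and the insert run on the same connection inside the same transaction, which is exactly what breaks when two CodeIgniter model methods end up on different database sessions.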
{ "language": "en", "url": "https://stackoverflow.com/questions/7507780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Firefox reload page loop debugging hints? One of my JavaScript-heavy pages reloads itself endlessly when using a deep link in Firefox. (I'm using the jQuery Address plugin, btw.) The page works fine in Chrome and IE, but not in FF. I have tried debugging it using Firebug, but the problem is that when the page reloads, Firebug is reset. Any hints on how I could debug this besides stepping through the code and adding log statements? A: You could set Firebug to persist the console so you can see errors on reload.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Multiple Android Application Package .apk files from single source code I would like an Android build system procedure, command line or Eclipse, to generate several .apk files from a single source codebase. Some common reasons for this - having specific versions for markets with different requirements or a free and paid version. This question IS NOT ABOUT: * *Packaging shared code into Android libraries or into external Java jars *Producing a debug vs. signed release .apk Google says "you probably need to create separate Android projects for each APK you intend to publish so that you can appropriately develop them separately. You can do this by simply duplicating your existing project and give it a new name." Then they kindly suggest using libraries, which I understand. Then, they mention in passing exactly what I do want: "a build system that can output different resources based on the build configuration" * *I know that to accomplish conditional compilation in JAVA one can key off a 'public static final' variable. There is an example of tweaking such a value in build.xml. Any more complete example of an Android Ant build configuration for this or a link to an OSS project doing that now, please? BTW, build.xml is auto-generated, but I have seen people hacking it, so how does that work? *With the package name declared in Manifest.xml as package="com.example.appname", if one needs to emit multiple .apks that vary that name, is one stuck with a separate project for each? A: The answer to this screams Gradle, as explained on this website. It's officially built into Android Studio and is encouraged. It's amazing; I've built 3 separate apps using the same source code, with customized text and graphics, with no special coding whatsoever. Just some directory and Gradle setup is required, and other posts of mine can be found with answers to both. It seems to explain all the basics really well. 
For the answer to your specific question, look for the section Product Flavors under Build Variants, where it describes specifying different flavors. As the website explains, part of the purpose behind this design was to make it more dynamic and more easily allow multiple APKs to be created with essentially the same code, which sounds exactly like what you're doing. I probably didn't explain it the best, but that website does a pretty good job. A: Despite your insistence that this is not about packaging shared code into Android libraries, it sort of is. You've stated that markets may have different requirements or having a free and a paid version. In each of these examples, your two final output APKs have different behavior and/or resources. You can put the vast majority of your code in a shared Android library, and then maintain the differences in your actual projects. For example, I've worked on apps where they need to be released both to the Android Market and the Amazon AppStore. The Amazon AppStore requires that if you link to a market page for the app, it must be Amazon's (as opposed to the Android Market page). You can store a URL in a resource in the library and use that in your code, but then override that resource in the Amazon project to point to the appropriate Amazon URL. If you structure it right, you can do similar things in code because your starting point is your Application object which you can subclass and do different things with. That said, if you want to add an Ant step that changes the package name in the manifest, it is just XML. It shouldn't be hard to modify as a precompilation step. A: This article has a good walk-through with examples of how to amend config files at build time; see in particular the Customizing the build and Using a Java configuration file sections. Note that some of the information about build.xml and ant is a little bit out-of-date now. 
A: Here's our situation: we have a single codebase from which we release for several clients. Each of them has various requirements regarding titles, backgrounds and other resources in the application (let alone package names). Build is handled by a Ruby script that modifies AndroidManifest, copies/replaces certain resources from client-specific folders and then moves on to Android's standard build routine. After the build is done, the script resets changed files back to their original, 'default' state. Well... Maybe it's not optimal and definitely not Android-specific, but that's how we do it. A: I had the same problem, but packing it all in one project with flags is no solution for me. I wrote an example of how to do that with Maven: How to create multiple Android apk files from one codebase organized by a Maven multi module project. A: I'm generating 2 different APKs (demo and production) from one single source tree with 3 small modifications: 1) I have public static final DEMO=true; //false; in my Application class, and depending on that value I switch code between demo/production features 2) There are 2 main activities, like: package mypackage; public class MyProductionActivity extends Activity { //blah-blah } package mypackage.demo; public class MyDemoActivity extends mypackage.MyProductionActivity { //blah-blah } 3) And in the end, 2 separate AndroidManifest.xml files which point to different launcher activities depending on the demo/production switch. I'm switching between the 2 APKs manually, but I see nothing difficult in writing a small ANT task to switch between them automatically. A: One way to do it would be to maintain two separate AndroidManifest.xml files, one for each configuration. You can switch back and forth between the two either manually (copying) or automatically (build script).
[edit] This person here has a system to do this kind of thing: http://blog.elsdoerfer.name/2010/04/29/android-build-multiple-versions-of-a-project/ A: My team builds 2 different builds from a single code base plus additional code. As the Android build is based on Ant, I use Ant scripts to do this work. I used xmltask to manipulate the manifest xml file and many Ant tasks (regexp, copy, ...) to edit source code. I prepared project templates (including build.xml, default.properties, local.properties) and copied new source code into those project templates. When the copy completed, I ran the build.xml files in parallel to shorten build time. When the builds finished, I got multiple apk files. A: It's easy to achieve your goal by using Android Studio build variants, which use Gradle as the build system. Check here for more detailed information. A: I think that the best way remains to use a library for common sources and two different Android projects for the demo and production packages. This is because in Java it is very simple to reverse engineer an apk back to sources. If you use the same sources for demo and production, someone could hack your apk by downloading the demo package, extracting the Java sources and unlocking them by changing the variable, to use it as the production version. With a library you can keep part of the sources out of the demo and ship them only in the production package; in this way there is no way to use the demo package as the production package.
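The swap-build-restore flow from the Ruby-script answer takes only a few lines in any scripting language. A hedged Python sketch, with the directory layout (res/, clients/<flavor>/) simplified for illustration:

```python
import shutil
import tempfile
from pathlib import Path

def build_flavor(project, flavor, build=lambda: None):
    """Overlay clients/<flavor>/ onto res/, run the build, restore defaults."""
    res = Path(project) / "res"
    overlay = Path(project) / "clients" / flavor
    backup = Path(tempfile.mkdtemp())
    changed = []
    try:
        for src in overlay.rglob("*"):
            if src.is_file():
                dst = res / src.relative_to(overlay)
                shutil.copy2(dst, backup / dst.name)  # save the default file
                shutil.copy2(src, dst)                # swap in the client file
                changed.append(dst)
        build()  # stand-in for invoking ant/gradle here
    finally:
        for dst in changed:
            shutil.copy2(backup / dst.name, dst)      # reset to 'default' state

# toy project to demonstrate the swap-and-restore behaviour
root = Path(tempfile.mkdtemp())
(root / "res").mkdir()
(root / "clients" / "acme").mkdir(parents=True)
(root / "res" / "strings.xml").write_text("default title")
(root / "clients" / "acme" / "strings.xml").write_text("ACME title")

seen = []
build_flavor(root, "acme", build=lambda: seen.append(
    (root / "res" / "strings.xml").read_text()))
restored = (root / "res" / "strings.xml").read_text()
```

The finally block mirrors the Ruby script's reset step, so a failed build still leaves the tree in its default state. (A real script would also need to handle nested files with identical names in the backup.)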
{ "language": "en", "url": "https://stackoverflow.com/questions/7507784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49" }
Q: Why does my indexed column appear not to have statistics? I'm using SQL Server and I'm currently trying to debug some queries where the optimizer has chosen a poor execution plan, and I noticed for one of my indexed columns that when I run the command: DBCC SHOW_STATISTICS ("tablename", columnname); for this indexed column, the database returns: Could not locate statistics 'columnname' in the system catalogs. According to this page: http://msdn.microsoft.com/en-us/library/ms190397.aspx "The query optimizer creates statistics for indexes on tables or views when the index is created." I also have AUTO_CREATE_STATISTICS on. Should I have to manually run a CREATE STATISTICS for this column? If so, since it's an index, shouldn't it already have statistics for the column? A: From https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-show-statistics-transact-sql: If target is the name of an existing column, and an automatically created statistics on this column exists, information about that auto-created statistic is returned. If an automatically created statistic does not exist for a column target, error message 2767 is returned. So specifying the name of the index for target (the second parameter) will work, but it won't work if you use the column name.
If you run this (credit to Erland Sommarskog, http://www.sommarskog.se/query-plan-mysteries.html), you can see if stats were auto-created or not: DECLARE @tbl NVARCHAR(256) SELECT @tbl = 'tableName' SELECT o.name, s.stats_id, s.name, s.auto_created, s.user_created, SUBSTRING(scols.cols, 3, LEN(scols.cols)) AS stat_cols, STATS_DATE(o.object_id, s.stats_id) AS stats_date, s.filter_definition FROM sys.objects o JOIN sys.stats s ON s.object_id = o.object_id CROSS APPLY ( SELECT ', ' + c.name FROM sys.stats_columns sc JOIN sys.columns c ON sc.object_id = c.object_id AND sc.column_id = c.column_id WHERE sc.object_id = s.object_id AND sc.stats_id = s.stats_id ORDER BY sc.stats_column_id FOR XML PATH('') ) AS scols(cols) WHERE o.name = @tbl ORDER BY o.name, s.stats_id
{ "language": "en", "url": "https://stackoverflow.com/questions/7507787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Logging info to a disk file using MS Enterprise Library I am trying to create a class that logs information to a txt or xml file on disk using MS Enterprise Library (5.0). I have been following this guide, but so far it has been silently failing (no events in the log viewer). Here is my class: public static void logEntry(String message, String type) { LogEntry logEntry = new LogEntry(); logEntry.Categories.Add(type); logEntry.Message = message; Logger.Write(logEntry); } I have been calling this as follows in a catch block for error logging, or at different locations when I need to log a database modification for a normal log type. Util.logEntry("Error Message", "Error"); Util.logEntry("Normal Message", "Normal"); I know it gets called because I even added a statement as the first line in my program to test it out. Is there a better design for using the MS Enterprise Library if I will have to parse the log file based on the type (Error, Warning, Normal)? A: I suspect your event source is not registered. Normally the .NET framework will automatically create event sources the first time you use them, but creating event sources requires administrator privileges. Try running your app as Administrator once, to get the event sources registered.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: iPhone application, Documents directory on Mac I'm reading through some of the documentation about File Management on iOS. When you create an application for the simulator, does the application get created somewhere on my hard drive? If so, where is it? Also, does this have access to the Documents directory? Like if I create some test .txt file, and want to see it in the App->Documents folder, is that possible? Thanks. A: Yes. Look here: ~/Library/Application Support/iPhone Simulator/4.3.2/Applications Change the 4.3.2 to be the version of the Simulator you are using. Within that folder you will find your apps, except that they are named cryptically. Open one of those folders and you will find your app, named as you recognize it, and the Documents, Library and tmp folders. Documents is where you find the docs that your app creates and uses. You can, in fact, make changes to the files in the Documents folder or just access their content to see what your app sees or writes. A: Under Xcode 6, the document directory for your app is quite hidden: ~/Library/Developer/CoreSimulator/Devices//data/Containers/Data/Application// You can find the directory for your app with this command: $ sudo find ~/Library/Developer/CoreSimulator/Devices -name <APP_NAME>.app | grep -o '.*/'
{ "language": "en", "url": "https://stackoverflow.com/questions/7507803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Silverlight 4 EventTrigger Handled I have two nested Grid (FrameworkElement) items in my application. <UserControl xmlns:i="http://schemas.microsoft.com/expression/2010/interactivity"> <Grid x:Name="OuterGrid"> <i:Interaction.Triggers> <i:EventTrigger EventName="MouseLeftButtonDown"> <i:InvokeCommandAction x:Name="TheOuterCommand" Command="{Binding OuterCommand}"/> </i:EventTrigger> </i:Interaction.Triggers> <Grid x:Name="InnerGrid"> <i:Interaction.Triggers> <i:EventTrigger EventName="MouseLeftButtonDown"> <i:InvokeCommandAction x:Name="TheInnerCommand" Command="{Binding InnerCommand}"/> </i:EventTrigger> </i:Interaction.Triggers> </Grid> </Grid> </UserControl> Each of the InvokeCommands is attached to a DelegateCommand (from the Prism libraries) in the viewmodel. OuterCommand = new DelegateCommand(OuterCommandMethod, e => true); InnerCommand = new DelegateCommand(InnerCommandMethod, e => true); At the moment, the EventTrigger on InnerGrid also triggers the event on the OuterGrid due to the MouseLeftButtonDown event not being handled at the InnerGrid level. Is there a way I can notify the EventTrigger that it is handled and it should not bubble up to the OuterGrid? At the moment, all I can think to do is have a wrapper FrameworkElement around the InnerGrid that uses an event on the XAML code-behind to set the event to handled. Does anyone have any other ideas? ---- Edit ---- In the end, I have included MVVM Light in my application and replaced InvokeCommandAction with RelayCommand. This is now working as I intended. I'll mark Bryant's answer as the winner for giving me the suggestion. A: We have extended EventTrigger by adding a dependency property called IsInner, and then we always set a static flag in the inner EventTrigger. The outer EventTrigger unsets the flag and returns if the flag was set. That is extremely easy to write and works well. A: Your best bet would be to pass the event args to the Command and then mark the event handled using the event args.
You can do this by following this example here.
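The static-flag workaround in the first answer can be modeled outside Silverlight. This Python sketch only illustrates the control flow (the inner trigger sets a shared flag, the outer trigger checks and clears it); all names are invented:

```python
class TriggerState:
    inner_fired = False  # the static flag shared by both triggers

def inner_trigger(log):
    TriggerState.inner_fired = True
    log.append("inner command")

def outer_trigger(log):
    if TriggerState.inner_fired:     # event already handled further down the tree
        TriggerState.inner_fired = False
        return
    log.append("outer command")

log = []
# Click on the inner grid: the event bubbles inner -> outer, and outer backs off.
inner_trigger(log)
outer_trigger(log)
# Click directly on the outer grid: only the outer command runs.
outer_trigger(log)
```

The flag must be cleared by the outer trigger on every bubble, otherwise a later click on the outer grid alone would be wrongly suppressed.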
{ "language": "en", "url": "https://stackoverflow.com/questions/7507805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Monotouch OpenGL orientation event I'm trying to do something that should be very simple, but the Interface Builder in Xcode is doing some stuff behind the curtains that makes it all very unclear. Basically I want to allow my OpenGL application to be orientation-aware, and from what I understand I need to catch these kinds of events in a UIViewController. So, to make it simple, assuming I just created a new project using the standard MonoTouch OpenGL template, what code should I add to catch the orientation events? Or even better, a template for starting OpenGL without Interface Builder at all, since I am new to Interface Builder and it only seems to get in the way. A: I am not sure if that is what you mean by "orientation aware", but you can get access to the current orientation of the device by calling the following code: UIDeviceOrientation curOrientation = [[UIDevice currentDevice] orientation]; This will tell you whether the device orientation is Portrait, LandscapeLeft, etc. You can then rotate your views/images accordingly, depending on what you want to achieve. Please note that UIDeviceOrientation refers to the orientation of the physical device while UIInterfaceOrientation refers to the orientation of the user interface, as mentioned in this SO post: UIDEVICE orientation You can change the UI orientation by calling the following function: [[UIDevice currentDevice] setOrientation:someUIInterfaceOrientation]; Hope this helps,
Q: How to source a script in a Makefile? Is there a better way to source a script, which sets env vars, from within a makefile? FLAG ?= 0 ifeq ($(FLAG),0) export FLAG=1 /bin/myshell -c '<source scripts here> ; $(MAKE) $@' else ...targets... endif A: Some constructs are the same in the shell and in GNU Make. var=1234 text="Some text" You can alter your shell script to source the defines. They must all be simple name=value types. I.e., [script.sh] . ./vars.sh [Makefile] include vars.sh Then the shell script and the Makefile can share the same 'source' of information. I found this question because I was looking for a manifest of common syntax that can be used in GNU Make and shell scripts (I don't care which shell). Edit: Shells and make understand ${var}. This means you can concatenate, etc, var="One string" var=${var} "Second string" A: To answer the question as asked: you can't. The basic issue is that a child process cannot alter the parent's environment. The shell gets around this by not forking a new process when source'ing, but just running those commands in the current incarnation of the shell. That works fine, but make is not /bin/sh (or whatever shell your script is for) and does not understand that language (aside from the bits they have in common). Chris Dodd and Foo Bah have addressed one possible workaround, so I'll suggest another (assuming you are running GNU make): post-process the shell script into make-compatible text and include the result: shell-variable-setter.make: shell-variable-setter.sh postprocess.py @^ # ... else include shell-variable-setter.make endif messy details left as an exercise. A: I really like Foo Bah's answer where make calls the script, and the script calls back to make. To expand on that answer I did this: # Makefile .DEFAULT_GOAL := all ifndef SOME_DIR %: <tab>. ./setenv.sh $(MAKE) $@ else all: <tab>... clean: <tab>... 
endif -- # setenv.sh export SOME_DIR=$PWD/path/to/some/dir if [ -n "$1" ]; then # The first argument is set, call back into make. $1 $2 fi This has the added advantage of using $(MAKE) in case anyone is using a unique make program, and will also handle any rule specified on the command line, without having to duplicate the name of each rule in the case when SOME_DIR is not defined. A: If you want to get the variables into the environment, so that they are passed to child processes, then you can use bash's set -a and set +a. The former means, "When I set a variable, set the corresponding environment variable too." So this works for me: check: bash -c "set -a && source .env.test && set +a && cargo test" That will pass everything in .env.test on to cargo test as environment variables. Note that this will let you pass an environment on to sub-commands, but it won't let you set Makefile variables (which are different things anyway). If you need the latter, you should try one of the other suggestions here. A: If your goal is to merely set environment variables for Make, why not keep it in Makefile syntax and use the include command? include other_makefile If you have to invoke the shell script, capture the result in a shell command: JUST_DO_IT=$(shell source_script) the shell command should run before the targets. However this won't set the environment variables. If you want to set environment variables in the build, write a separate shell script that sources your environment variables and calls make. Then, in the makefile, have the targets call the new shell script. For example, if your original makefile has target a, then you want to do something like this: # mysetenv.sh #!/bin/bash . <script to source> export FLAG=1 make "$@" # Makefile ifeq($(FLAG),0) export FLAG=1 a: ./mysetenv.sh a else a: .. 
do it endif A: Using GNU Make 3.81 I can source a shell script from make using: rule: <tab>source source_script.sh && build_files.sh build_files.sh "gets" the environment variables exported by source_script.sh. Note that using: rule: <tab>source source_script.sh <tab>build_files.sh will not work. Each line is ran in its own subshell. A: Makefile default shell is /bin/sh which does not implement source. Changing shell to /bin/bash makes it possible: # Makefile SHELL := /bin/bash rule: source env.sh && YourCommand A: This works for me. Substitute env.sh with the name of the file you want to source. It works by sourcing the file in bash and outputting the modified environment, after formatting it, to a file called makeenv which is then sourced by the makefile. IGNORE := $(shell bash -c "source env.sh; env | sed 's/=/:=/' | sed 's/^/export /' > makeenv") include makeenv A: My solution to this: (assuming you're have bash, the syntax for $@ is different for tcsh for instance) Have a script sourceThenExec.sh, as such: #!/bin/bash source whatever.sh $@ Then, in your makefile, preface your targets with bash sourceThenExec.sh, for instance: ExampleTarget: bash sourceThenExec.sh gcc ExampleTarget.C You can of course put something like STE=bash sourceThenExec.sh at the top of your makefile and shorten this: ExampleTarget: $(STE) gcc ExampleTarget.C All of this works because sourceThenExec.sh opens a subshell, but then the commands are run in the same subshell. The downside of this method is that the file gets sourced for each target, which may be undesirable. A: Depending on your version of Make and enclosing shell, you can implement a nice solution via eval, cat, and chaining calls with &&: ENVFILE=envfile source-via-eval: @echo "FOO: $${FOO}" @echo "FOO=AMAZING!" > $(ENVFILE) @eval `cat $(ENVFILE)` && echo "FOO: $${FOO}" And a quick test: > make source-via-eval FOO: FOO: AMAZING! 
A: An elegant solution found here: ifneq (,$(wildcard ./.env)) include .env export endif A: If you need only a few known variables exporting in makefile can be an option, here is an example of what I am using. $ grep ID /etc/os-release ID=ubuntu ID_LIKE=debian $ cat Makefile default: help rule/setup/lsb source?=. help: -${MAKE} --version | head -n1 rule/setup/%: echo ID=${@F} rule/setup/lsb: /etc/os-release ${source} $< && export ID && ${MAKE} rule/setup/$${ID} $ make make --version | head -n1 GNU Make 3.81 . /etc/os-release && export ID && make rule/setup/${ID} make[1]: Entering directory `/tmp' echo ID=ubuntu ID=ubuntu -- http://rzr.online.fr/q/gnumake A: Assuming GNU make, can be done using a submake. Assuming that the shell script that exports the variables is include.sh in the current directory, move your Makefile to realmake.mk. Create a new Makefile: all: @. ./include.sh; \ $(MAKE) -f realmake.mk $(MAKECMDGOALS) $(MAKECMDGOALS): +@. ./include.sh; \ $(MAKE) -f realmake.mk $(MAKECMDGOALS) Pay attention to the ./ preceding include.sh. A: Another possible way would be to create a sh script, for example run.sh, source the required scripts and call make inside the script. #!/bin/sh source script1 source script2 and so on make A: target: output_source bash ShellScript_name.sh try this it will work, the script is inside the current directory.
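The `set -a` technique from the answers above can be seen in isolation, outside of make. This is a minimal sketch; the file name `env.test` and both variable names are invented for the example.

```shell
# set -a marks every variable assigned afterwards for export, so the plain
# name=value assignments in the sourced file become environment variables
# visible to child processes, without an explicit "export" on each line.

cat > env.test <<'EOF'
GREETING="hello from env.test"
BUILD_MODE=release
EOF

set -a
. ./env.test
set +a

# A child process now sees the variables; this is exactly what lets a make
# recipe pass them on to cargo, gcc, or any other tool it invokes.
sh -c 'echo "$GREETING ($BUILD_MODE)"'   # prints: hello from env.test (release)
```

Inside a makefile, the same three lines would sit on one recipe line joined with `&&`, since each recipe line runs in its own subshell.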
Q: Standard layout type and reinterpret_cast Am I allowed to cast from my class to a structure if I have copied the members of the structure to my class? #include <stdint.h> #include <sys/uio.h> class Buffer { public: void * address; size_t size; Buffer(void * address = nullptr, size_t size = 0) : address(address), size(size) { } operator iovec *() const { // Cast this to iovec. Should work because of standard layout? return reinterpret_cast<iovec *>(this); } }; A: First off, you cannot cast away constness: §5.2.10p2. The reinterpret_cast operator shall not cast away constness (§5.2.11). (...) So you need at least to write that as operator iovec const*() const { return reinterpret_cast<iovec const*>(this); } or operator iovec *() { return reinterpret_cast<iovec *>(this); } On top of that, you need to have both Buffer and iovec be standard-layout types, and iovec cannot have an alignment stricter (i.e. larger) than Buffer. §5.2.10p7. An object pointer can be explicitly converted to an object pointer of a different type. When a prvalue v of type “pointer to T1” is converted to the type “pointer to cv T2”, the result is static_cast<cv T2*>(static_cast<cv void*>(v)) if both T1 and T2 are standard-layout types (§3.9) and the alignment requirements of T2 are no stricter than those of T1, or if either type is void. (...) You also need to be careful not to break the strict aliasing rules: in general, you cannot use two pointers or references to different types that refer to the same memory location.
Q: how to get BitmapImage in codebehind from the image tag in xaml in wpf/silverlight I don't have a problem with binding a BitmapImage to an image tag in codebehind, e.g. BitmapImage image = new BitmapImage(); imagetaginxaml.Source = image; // this will remove whatever image is currently on the image tag in xaml and attach the empty bitmapimage above but I'm not able to get the image by doing the reverse; for example, I want to process the image that is currently on the image tag. I am not able to do this BitmapImage image = imagetaginxaml.Source; what should I do A: Well, Image.Source is of type ImageSource; there is no guarantee that it will be a BitmapImage, though it may be. If the source is created by the XAML parser it will be a BitmapFrameDecode (which is an internal class). Anyway, the only safe assignment is: ImageSource source = img.Source; otherwise you need to cast: BitmapImage source = (BitmapImage)img.Source; which will throw an exception if the Source is not of this type. So you can either safe-cast or try-catch: //(Possibly check for img.Source != null first) BitmapImage source = img.Source as BitmapImage; if (source != null) { //If img.Source is not null the cast worked. } try { BitmapImage source = (BitmapImage)img.Source; //If this line is reached it worked. } catch (Exception) { //Cast failed } You could also check the type beforehand using img.Source is BitmapImage. A: How about using WriteableBitmap to make a copy of the image, and then using a MemoryStream to copy the original image into a copy? 
// Create a WriteableBitmap from the Image control WriteableBitmap bmp = new WriteableBitmap(imagetaginxaml, null); // Load the contents of a MemoryStream from the WritableBitmap MemoryStream m = new MemoryStream(); bmp.SaveJpeg(m, bmp.PixelWidth, bmp.PixelHeight, 0, 100); // Read from the stream into a new BitmapImage object m.Position = 0; BitmapImage image = new BitmapImage(); image.SetSource(m); // do something with the new BitmapImage object // (for example, load another image control) anotherimagetaginxaml.Source = image;
Q: Where is a complete example of logging.config.dictConfig? How do I use dictConfig? How should I specify its input config dictionary? A: The accepted answer is nice! But what if one could begin with something less complex? The logging module is very powerful thing and the documentation is kind of a little bit overwhelming especially for novice. But for the beginning you don't need to configure formatters and handlers. You can add it when you figure out what you want. For example: import logging.config DEFAULT_LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'loggers': { '': { 'level': 'INFO', }, 'another.module': { 'level': 'DEBUG', }, } } logging.config.dictConfig(DEFAULT_LOGGING) logging.info('Hello, log') A: I found Django v1.11.15 default config below, hope it helps DEFAULT_LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'filters': { 'require_debug_false': { '()': 'django.utils.log.RequireDebugFalse', }, 'require_debug_true': { '()': 'django.utils.log.RequireDebugTrue', }, }, 'formatters': { 'django.server': { '()': 'django.utils.log.ServerFormatter', 'format': '[%(server_time)s] %(message)s', } }, 'handlers': { 'console': { 'level': 'INFO', 'filters': ['require_debug_true'], 'class': 'logging.StreamHandler', }, 'django.server': { 'level': 'INFO', 'class': 'logging.StreamHandler', 'formatter': 'django.server', }, 'mail_admins': { 'level': 'ERROR', 'filters': ['require_debug_false'], 'class': 'django.utils.log.AdminEmailHandler' } }, 'loggers': { 'django': { 'handlers': ['console', 'mail_admins'], 'level': 'INFO', }, 'django.server': { 'handlers': ['django.server'], 'level': 'INFO', 'propagate': False, }, } } A: Example with Stream Handler, File Handler, Rotating File Handler and SMTP Handler from logging.config import dictConfig LOGGING_CONFIG = { 'version': 1, 'loggers': { '': { # root logger 'level': 'NOTSET', 'handlers': ['debug_console_handler', 'info_rotating_file_handler', 'error_file_handler', 'critical_mail_handler'], }, 
'my.package': { 'level': 'WARNING', 'propagate': False, 'handlers': ['info_rotating_file_handler', 'error_file_handler' ], }, }, 'handlers': { 'debug_console_handler': { 'level': 'DEBUG', 'formatter': 'info', 'class': 'logging.StreamHandler', 'stream': 'ext://sys.stdout', }, 'info_rotating_file_handler': { 'level': 'INFO', 'formatter': 'info', 'class': 'logging.handlers.RotatingFileHandler', 'filename': 'info.log', 'mode': 'a', 'maxBytes': 1048576, 'backupCount': 10 }, 'error_file_handler': { 'level': 'WARNING', 'formatter': 'error', 'class': 'logging.FileHandler', 'filename': 'error.log', 'mode': 'a', }, 'critical_mail_handler': { 'level': 'CRITICAL', 'formatter': 'error', 'class': 'logging.handlers.SMTPHandler', 'mailhost' : 'localhost', 'fromaddr': 'monitoring@domain.com', 'toaddrs': ['dev@domain.com', 'qa@domain.com'], 'subject': 'Critical error with application name' } }, 'formatters': { 'info': { 'format': '%(asctime)s-%(levelname)s-%(name)s::%(module)s|%(lineno)s:: %(message)s' }, 'error': { 'format': '%(asctime)s-%(levelname)s-%(name)s-%(process)d::%(module)s|%(lineno)s:: %(message)s' }, }, } dictConfig(LOGGING_CONFIG) A: How about here! The corresponding documentation reference is configuration-dictionary-schema. 
LOGGING_CONFIG = { 'version': 1, 'disable_existing_loggers': True, 'formatters': { 'standard': { 'format': '%(asctime)s [%(levelname)s] %(name)s: %(message)s' }, }, 'handlers': { 'default': { 'level': 'INFO', 'formatter': 'standard', 'class': 'logging.StreamHandler', 'stream': 'ext://sys.stdout', # Default is stderr }, }, 'loggers': { '': { # root logger 'handlers': ['default'], 'level': 'WARNING', 'propagate': False }, 'my.packg': { 'handlers': ['default'], 'level': 'INFO', 'propagate': False }, '__main__': { # if __name__ == '__main__' 'handlers': ['default'], 'level': 'DEBUG', 'propagate': False }, } } Usage: import logging.config # Run once at startup: logging.config.dictConfig(LOGGING_CONFIG) # Include in each module: log = logging.getLogger(__name__) log.debug("Logging is configured.") In case you see too many logs from third-party packages, be sure to run this config using logging.config.dictConfig(LOGGING_CONFIG) before the third-party packages are imported. To add additional custom info to each log message using a logging filter, consider this answer. 
A: One more thing in case it's useful to start from the existing logger's config, the current config dictionary can be obtained via import logging logger = logging.getLogger() current_config = logger.__dict__ # <-- yes, it's just the dict print(current_config) It'll be something like: {'filters': [], 'name': 'root', 'level': 30, 'parent': None, 'propagate': True, 'handlers': [], 'disabled': False, '_cache': {}} Then, if you just do new_config=current_config new_config['version']=1 new_config['name']='fubar' new_config['level']=20 # ...and whatever other changes you wish logging.config.dictConfig(new_config) You will then find: print(logger.__dict__) is what you'd hope for {'filters': [], 'name': 'fubar', 'level': 20, 'parent': None, 'propagate': True, 'handlers': [], 'disabled': False, '_cache': {}, 'version': 1} 
Schema versioning may be added in a future release of logging "version": 1, # "Name of formatter" : {Formatter Config Dict} "formatters": { # Formatter Name "standard": { # class is always "logging.Formatter" "class": "logging.Formatter", # Optional: logging output format "format": "%(asctime)s\t%(levelname)s\t%(filename)s\t%(message)s", # Optional: asctime format "datefmt": "%d %b %y %H:%M:%S" } }, # Handlers use the formatter names declared above "handlers": { # Name of handler "console": { # The class of logger. A mixture of logging.config.dictConfig() and # logger class-specific keyword arguments (kwargs) are passed in here. "class": "logging.StreamHandler", # This is the formatter name declared above "formatter": "standard", "level": "INFO", # The default is stderr "stream": "ext://sys.stdout" }, # Same as the StreamHandler example above, but with different # handler-specific kwargs. "file": { "class": "logging.handlers.RotatingFileHandler", "formatter": "standard", "level": "INFO", "filename": logs_target, "mode": "a", "encoding": "utf-8", "maxBytes": 500000, "backupCount": 4 } }, # Loggers use the handler names declared above "loggers" : { "__main__": { # if __name__ == "__main__" # Use a list even if one handler is used "handlers": ["console", "file"], "level": "INFO", "propagate": False } }, # Just a standalone kwarg for the root logger "root" : { "level": "INFO", "handlers": ["file"] } } *Configure logging with the dictionary schema dictConfig(logging_schema) *Try some test cases to see if everything is working properly if __name__ == "__main__": logging.info("testing an info log entry") logging.warning("testing a warning log entry") [EDIT to answer @baxx's question] *To reuse this setting across your code base, instantiate a logger in the script you call dictConfig() and then import that logger elsewhere # my_module/config/my_config.py dictConfig(logging_schema) my_logger = getLogger(__name__) Then in another script from my_module.config.my_config import 
my_logger as logger logger.info("Hello world!")
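A minimal runnable variant of the schemas above, boiled down to one handler and one logger. The logger name `demo` is arbitrary, and the example leans on the fact that `dictConfig` passes non-string values (here a `StringIO` object under the handler's `stream` key) through to the handler constructor unchanged, which makes the output easy to inspect.

```python
import io
import logging
import logging.config

# Capture log output in a StringIO instead of stdout or a file.
stream = io.StringIO()

config = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "brief": {"format": "%(levelname)s:%(name)s:%(message)s"},
    },
    "handlers": {
        "capture": {
            "class": "logging.StreamHandler",
            "formatter": "brief",
            "level": "DEBUG",
            "stream": stream,  # a real object, not an "ext://..." string
        },
    },
    "loggers": {
        # "demo" is an example name; use your module's __name__ in practice.
        "demo": {"handlers": ["capture"], "level": "INFO", "propagate": False},
    },
}

logging.config.dictConfig(config)

log = logging.getLogger("demo")
log.debug("not shown")  # below the logger's INFO threshold, filtered out
log.info("hello")

print(stream.getvalue().strip())  # INFO:demo:hello
```

From here, growing toward the fuller schemas above is a matter of adding more named formatters, handlers, and loggers to the same three sections.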
Q: SharePoint 2010 and ASHX Handler I'm trying to get a webpart deployed and using a Silverlight webpart with an upload control inside. I am however, receiving the following error in the application log when I access my ashx. Exception information: Exception type: HttpParseException Exception message: Could not create type 'FileUploadSP.UploadHandler'. I've got an UploadHandler.cs file with the following code: namespace FileUploadSP { public class UploadHandler : RadUploadHandler { public override void ProcessStream() { base.ProcessStream(); if (this.IsFinalFileRequest()) { string filename = this.Request.Form["RadUAG_fileName"]; string fullPath = @"C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\TEMPLATE\LAYOUTS\FileUploadSP\FileTemp\"; SPContext.Current.Web.AllowUnsafeUpdates = true; FileStream fs = new FileStream(fullPath + filename, FileMode.Open); SPContext.Current.Web.Files.Add("/UploadLibrary/" + filename, fs, true); fs.Close(); File.Delete(fullPath + filename); } } } } And I have the following in my .ashx file: <%@ Assembly Name="Microsoft.SharePoint, Version=14.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %> <%@ Assembly Name="FileUploadSP, Version=1.0.0.0, Culture=neutral, PublicKeyToken=7c8e2c3ef53023ee" %> <%@ WebHandler Language="C#" Class="FileUploadSP.UploadHandler" %> I cannot get the .ashx to work as I expected to. What am I missing? Thanks! A: Check your assembly is in the web.config safe list, and has been deployed to the GAC, with an iis reset. Ashx can be blocked (and unblocked in central admin), but I guess from your error this is not the case. A: For me, it was the blocked file types under central admin -> security. ASHX was on the no-no list.
Q: How to add nodes to FireMonkey's TreeView at runtime I can't find any sample in the online documentation, or in the demos included with Delphi XE2, for adding nodes to a FMX.TreeView.TTreeView control at runtime. So, how can I add, remove, and traverse nodes of a FireMonkey TreeView at runtime? A: With AddObject(FmxObject) you can add any Object (Button etc.) as well... A: I think we are all learning at this point... But from what I have seen, the TTreeView uses the principle that any control can parent another control. All you need to do is set the Parent property to get the item to show up as a child. var Item1 : TTreeViewItem; Item2 : TTreeViewItem; begin Item1 := TTreeViewItem.Create(Self); Item1.Text := 'My First Node'; Item1.Parent := TreeView1; Item2 := TTreeViewItem.Create(Self); Item2.Text := 'My Child Node'; Item2.Parent := Item1; end; Because of this you can do things never possible before, such as placing any control in the TreeView. For example this code will add a button to the area used by Item2, and the button won't be visible until Item2 is visible. Button := TButton.Create(self); Button.Text := 'A Button'; Button.Position.X := 100; Button.Parent := Item2; A: I have another idea. The first answer helped me get it. So add the following code Var TempItem:TTreeViewItem; Begin TempItem := TTreeViewItem.Create(Self); TempItem.Text := 'Enter Caption Here'; TempItem.Parent := TreeView; End Now the actual trick comes when you have to free the item so that it doesn't use unnecessary memory. 
So let's say you use it in a loop, like I did here: ADOTable.Connection := ADOConnection; ADOTable.TableName := 'MenuTree'; ADOTable.Open; ADOTable.First; ADOTable.Filter := '(CHFlag=''CURRENT'') AND (Parent=''Tree'')'; ADOTable.Filtered := True; While NOT ADOTable.Eof Do Begin TempItem := TTreeViewItem.Create(Self); TempItem.Text := ADOTable['ItemName']; TempItem.Parent := TreeView; // TempItem.Free; ADOTable.Next; End; TempItem.Free; ADOTable.Close; A: Your code isn't safe. If ADOTable is empty, TempItem is never created and the 'free' will generate an access violation. And even if the table is not empty, you will only free the last TempItem created.
Q: XtraGrid row index mismatch after deleting a row I am currently using an XtraGrid. I have bound the gridControl to a DataTable. When I delete a row from the DataTable, the XtraGrid shows the change. But when I start dealing with row indexes, I get odd behavior. This is (roughly) the code I use to delete the row. DataTable dtWorkItems; ... gridWorkItemList.DataSource = dtWorkItems; ... int currRowHandle = gridViewWorkItemList.FocusedRowHandle; int currRowIndex = gridViewWorkItemList.GetDataSourceRowIndex(currRowHandle); DataRow theRow = gridViewWorkItemList.GetDataRow(currRowHandle); theRow.Delete(); But this test fails afterwards: int rowHandle = gridViewWorkItemList.FocusedRowHandle; int rowIndex = gridViewWorkItemList.GetDataSourceRowIndex(rowHandle); DataRow dr1 = gridViewWorkItemList.GetDataRow(rowHandle); DataRow dr2 = dtWorkItems.Rows[rowIndex]; if (dr1 != dr2) ; // Failure In fact, dr2 has a state of "Deleted". If I do an AcceptChanges() on the dtWorkItems, then the test will pass. But I would rather not do that. Is there something I have to do to get the row indexes to start matching up again? A: You should delete it from the datasource (dtWorkItems), then have the grid refresh if it doesn't automatically. The grid is just a view of the dtWorkItems. Maybe you're already doing that, as AcceptChanges works; why don't you want to accept the changes you made?
Q: delphi xe2 tms components unavailable under x64 platform Installed TMS components 6.1.4.1 in Delphi XE2. Under the 32-bit platform they work OK, but under the x64 platform they are unavailable. Am I missing something? A: After some exchange with TMS Software, it seems that the problem comes from the changes in how components are implemented for the different platforms. So the solution is as you said in the previous comment: stay in 32 bits to place the components and write the code, and only at the end compile for 64 bits. For the compiler error, go to the Tools menu, Options, Delphi Options, Library. Select 32 Bits and copy the path that you have for the TMS components. Then select 64 Bits and paste the path that you copied from the 32 Bits setting. Compile and it works.
Q: LiveBindings - TList bound to TStringGrid I have the following example set of code, how can I bind the Data list elements to the TStringGrid using LiveBindings. I need bi-directional updates so that when the column in the grid is changed it can update the underlying TPerson. I have seen example of how to do this with a TDataset Based binding but I need to do this without a TDataset. unit Unit15; interface uses Winapi.Windows, Winapi.Messages, System.SysUtils, System.Variants, System.Classes, Vcl.Graphics, Vcl.Controls, Vcl.Forms, Vcl.Dialogs, Vcl.Grids, System.Generics.Collections; type TPerson = class(TObject) private FLastName: String; FFirstName: string; published property firstname : string read FFirstName write FFirstName; property Lastname : String read FLastName write FLastName; end; TForm15 = class(TForm) StringGrid1: TStringGrid; procedure FormCreate(Sender: TObject); private { Private declarations } public { Public declarations } Data : TList<TPerson>; end; var Form15: TForm15; implementation {$R *.dfm} procedure TForm15.FormCreate(Sender: TObject); var P : TPerson; begin Data := TList<TPerson>.Create; P := TPerson.Create; P.firstname := 'John'; P.Lastname := 'Doe'; Data.Add(P); P := TPerson.Create; P.firstname := 'Jane'; P.Lastname := 'Doe'; Data.Add(P); // What can I add here or in the designer to link this to the TStringGrid. end; end. A: Part of the solution: From TList to TStringGrid is: procedure TForm15.FormCreate(Sender: TObject); var P : TPerson; bgl: TBindGridList; bs: TBindScope; colexpr: TColumnFormatExpressionItem; cellexpr: TExpressionItem; begin Data := TList<TPerson>.Create; P := TPerson.Create; P.firstname := 'John'; P.Lastname := 'Doe'; Data.Add(P); P := TPerson.Create; P.firstname := 'Jane'; P.Lastname := 'Doe'; Data.Add(P); // What can I add here or in the designer to link this to the TStringGrid. 
while StringGrid1.ColumnCount<2 do StringGrid1.AddObject(TStringColumn.Create(self)); bs := TBindScope.Create(self); bgl := TBindGridList.Create(self); bgl.ControlComponent := StringGrid1; bgl.SourceComponent := bs; colexpr := bgl.ColumnExpressions.AddExpression; cellexpr := colexpr.FormatCellExpressions.AddExpression; cellexpr.ControlExpression := 'cells[0]'; cellexpr.SourceExpression := 'current.firstname'; colexpr := bgl.ColumnExpressions.AddExpression; cellexpr := colexpr.FormatCellExpressions.AddExpression; cellexpr.ControlExpression := 'cells[1]'; cellexpr.SourceExpression := 'current.lastname'; bs.DataObject := Data; end;
Q: Utility To Count Number Of Lines Of Code In Python Or Bash Is there a quick and dirty way in either python or bash script, that can recursively descend a directory and count the total number of lines of code? We would like to be able to exclude certain directories though. For example: start at: /apps/projects/reallycoolapp exclude: lib/, frameworks/ The excluded directories should be recursive as well. For example: /app/projects/reallycool/lib SHOULD BE EXCLUDED /app/projects/reallycool/modules/apple/frameworks SHOULD ALSO BE EXCLUDED This would be a really useful utility. A: Found an awesome utility CLOC. https://github.com/AlDanial/cloc Here is the command we ran: perl cloc.pl /apps/projects/reallycoolapp --exclude-dir=lib,frameworks And here is the output -------------------------------------------------------------------------------- Language files blank comment code -------------------------------------------------------------------------------- PHP 32 962 1352 2609 Javascript 5 176 225 920 Bourne Again Shell 4 45 70 182 Bourne Shell 12 52 113 178 HTML 1 0 0 25 -------------------------------------------------------------------------------- SUM: 54 1235 1760 3914 -------------------------------------------------------------------------------- A: find ./apps/projects/reallycool -type f | \ grep -v -e /app/projects/reallycool/lib \ -e /app/projects/reallycool/modules/apple/frameworks | \ xargs wc -l | \ cut -d '.' -f 1 | \ awk 'BEGIN{total=0} {total += $1} END{print total}' A few notes... * *the . after the find is important since that's how the cut command can separate the count from the file name *this is a multiline command, so make sure there aren't spaces after the escaping slashes *you might need to exclude other files like svn or something. 
Also this will give funny values for binary files, so you might want to use grep to whitelist the specific file types you are interested in, i.e.: grep -e .html$ -e .css$ A: The find and wc arguments alone can solve your problem. With find you can specify very complex logic like this: find /apps/projects/reallycoolapp -type f -iname '*.py' ! -path '*/lib/*' ! -path '*/frameworks/*' | xargs wc -l Here the ! inverts the condition, so this command will count the lines for each Python file not in 'lib/' or 'frameworks/' directories. Just don't forget the '*' or it will not match anything.
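For a pure-Python take on the question, without shelling out to cloc or find, a sketch along these lines works; the function name and defaults are made up for illustration. Pruning `dirnames` in place is what keeps `os.walk` from descending into excluded directories, mirroring cloc's `--exclude-dir`:

```python
import os

def count_lines(root, exclude_dirs=(), suffixes=(".py",)):
    """Count lines in matching files under root, skipping excluded dirs.

    exclude_dirs entries are matched by directory *name* at any depth,
    so "lib" excludes both lib/ and modules/apple/lib/.
    """
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune in place so os.walk never descends into excluded dirs.
        dirnames[:] = [d for d in dirnames if d not in exclude_dirs]
        for name in filenames:
            if name.endswith(tuple(suffixes)):
                path = os.path.join(dirpath, name)
                # errors="replace" keeps odd bytes from raising mid-count.
                with open(path, errors="replace") as fh:
                    total += sum(1 for _ in fh)
    return total

# Example call (paths from the question, purely illustrative):
# count_lines("/apps/projects/reallycoolapp",
#             exclude_dirs={"lib", "frameworks"})
```

Unlike the wc-based pipelines, this counts only the suffixes you ask for, so binary files never enter the total.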
Q: How to log API Errors on my server? In my Facebook App Insights I see a section called "API Errors Returned" which indicates that I have a few API errors every day. How can I log all the API errors my app generates so I can fix them, or at least know what's wrong? A: I am also getting API errors returned according to Insights. But the Most Common Errors list is empty. Is this a bug? I believe these API errors are related to a bug we have uncovered that has yet to be fixed by Facebook. The bug we have open with them has had its status changed and we cannot get them to re-open it. Our website app (FilmCrave.com) is behaving like it has a canvas URL when we do not have one set. In our Basic App settings page, we only have Website checked, App on Facebook is NOT checked (this is where you set your canvas URL). Thanks, Nick FilmCrave.com
{ "language": "en", "url": "https://stackoverflow.com/questions/7507854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: BizTalk set up in dual center environment I have a question about setting up clustered BizTalk servers across two data centers. We have a data center in Southern CA and one in Northern CA. We want BizTalk servers running at both data centers, possibly doing the same things. If there's a problem with one of the data centers, then the other server will pick up the work. My question is: should we set up one BizTalk cluster at each data center, or create one BizTalk cluster that includes the BizTalk servers from both South and North? Given our requirement to have both of them running and doing the same thing, which one makes more sense? Thanks in advance!! Angela A: To have the servers doing the same thing, they need to be part of the same BizTalk cluster, that is, BizTalk front-end servers pointing at the same database. Say you place the database in the North data center: if the South data center goes down, the North will continue to work. But if the North goes down, the South will stop, since there is no database. You then need some strategy to continually ship your database over and then switch to a copy. Getting the above working is a lot of work, so the option of a separate BizTalk cluster at each site makes more sense. But you then need to configure some routing such that what is added to one cluster is transferred to the other.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: mixing my jQuery click events with existing object's onclick attribute I'm using jQuery but dealing with markup produced from JSF pages. A lot of the elements have onclick attributes provided by the JSF code (which isn't my realm). Example: <div onclick="[jsf js to submit form and go to next page]">submit</div> I'm trying to add some client-side validation with jQuery. I need something like this pseudo code: $('div').click(function(e){ if(myValidation==true){ // do nothing and let the JS in the onclick attribute do its thing } else { $error.show(); // somehow stop the onclick attribute JS from firing } }) Is there a best practice for handling this? One thought I had was that on page load, grab the onclick attribute's value, delete the onclick attribute from the object, then...well, that's where I get lost. I could cache the JS as text in a data- attribute, but I'm not sure how to fire that off later. A: Just use eval to run the onclick attribute code in your jQuery click event if you want it. You need to remove the onclick attribute first: <div onclick="alert('hi');">submit</div> - $(document).ready(function() { var divClick = $('#theDiv').attr('onclick'); $('#theDiv').removeAttr('onclick'); $('#theDiv').bind('click', function(e) { if (myValidation == true) { // do nothing and let the JS in the onclick attribute do its thing eval(divClick); } else { $error.show(); // somehow stop the onclick attribute JS from firing e.preventDefault(); } }); }); Note that the saved attribute value and the click binding both live inside the ready handler, so divClick stays in scope. A: Either return false or use: e.stopPropagation() or e.preventDefault() Depending on your needs. A: EDIT You can save the original event: var originalEvent = $('div').attr("onclick"); $('div').attr("onclick", false); $('div').click(function(e) { if (false) { // do nothing and let the JS in the onclick attribute do its thing eval(originalEvent); } else { alert("error"); // somehow stop the onclick attribute JS from firing } }); take a look at this http://jsfiddle.net/j4jsU/ Change if(false) to if(true) to see what happens when the form is valid.
A: I like e.stopPropagation() and e.preventDefault(), but if you do not prefer that strategy, you could also manually remove the onclick attribute and manually call the function it was using upon successful validation. Different strokes.. A: Why can't you do something like: var div=document.getElementById('test'); var oldClick=div.onclick; var bol=false; div.onclick=function(){ if(bol){ oldClick(); } else { alert('YOU SHALL NOT RUN INLINE JAVASCRIPT'); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/7507861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Preferred caching strategy for data-driven applications on mobile devices What is the preferred strategy for creating a mobile application that relies on data to be moved to/from a cloud-based service (thus requiring connectivity)? What mechanisms are typically employed to ensure synchronization while connectivity can be less than stable? Are all potential write operations queued locally and recover gracefully from terminated uploads? Are most downloads/data queries just re-executed when applications are brought back in scope or regain connectivity after losing it for a while? Specific guidance as well as training materials/study resources are acceptable!
{ "language": "en", "url": "https://stackoverflow.com/questions/7507863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: WCF Data Service + Expiring Data (Timer?) I would like to expose a few endpoints via a WCF data service (singleton) which will maintain a collection of data used to respond to individual requests. Ideally I would like to be able to expire (delete) the data held in memory for a given request after a period of time. The stored data would be used to build (partially only, so out-of-the-box caching is not ok) a result set to return to the client. The data will be objects from an API and must be kept in memory, not persisted to storage. I'm looking for ways to trigger the 'purge' process to check for expired data. Kicking off a timer in the ctor seems like a bad idea. It could be run for every request (single concurrency is enabled) but this seems excessive, and would potentially leave data hanging around when there are no more requests. Any thoughts at all on the issue are appreciated. A: You need some sort of timer to run the cleanup process at a regular interval. You could trigger it on request, but that's not advisable: you certainly shouldn't block requests on what could be a long-running cleanup process, and long periods between requests could mean requests end up working off data that's beyond its lifetime. One option is to make the cleanup process non-critical by having the reads filter out data beyond its lifetime. For example, you could use an in-memory database like SQL Compact Edition or SQLite. The cached data could have a timestamp column on it, and reads from the cache could always filter by timestamp not older than X. That makes it non-critical for the cleanup to happen; the cleanup becomes an optimization for memory pressure that really should happen, rather than a correctness requirement. SQL just gives you easy mechanisms to filter by timestamp. You could do the same with your own in-memory data structures. As far as the cleanup process goes, you need some sort of timer or something to kick it off.
The process that starts the in-proc WCF service could also start a timer and call into the cache on a periodic basis to clean it up. If a cleanup call comes in while it's already cleaning up, it would just return. If you make the cleanup non-critical (as outlined above) and the cleanup process is ignored while running, then each request could potentially kick it off as well. A: Ended up redesigning and hosting the relevant service component in a Windows service with a system timer to purge the required data.
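The read-side-filtering idea in the answer is language-agnostic. Here is a minimal in-memory sketch in Python rather than .NET (the class and method names are invented for illustration): reads treat anything older than max_age as gone, so the timer-driven purge is only a memory optimization, exactly as described above.

```python
import time

class ExpiringCache:
    """In-memory cache where reads ignore entries older than max_age seconds."""

    def __init__(self, max_age, clock=time.monotonic):
        self.max_age = max_age
        self.clock = clock            # injectable so tests can fake time
        self._store = {}              # key -> (timestamp, value)

    def put(self, key, value):
        self._store[key] = (self.clock(), value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        ts, value = entry
        if self.clock() - ts > self.max_age:
            # Stale: a read never returns it, so a late or skipped purge
            # is never a correctness issue, only a memory one.
            del self._store[key]
            return None
        return value

    def purge(self):
        # What a periodic timer would invoke; safe to skip or run late.
        now = self.clock()
        for key in [k for k, (ts, _) in self._store.items() if now - ts > self.max_age]:
            del self._store[key]
```

A real timer (System.Timers.Timer in .NET, or threading.Timer here) would just call purge() on an interval.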
{ "language": "en", "url": "https://stackoverflow.com/questions/7507866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Silverstripe subsites with independent user registrations I'm using the subsites module to make a multi-site system. I'd like the users' logins to the sites to be independent. So if a user has registered on one site, they can't just go to another subsite and log in - they have to register for that site too. In other words - registrations on each subsite are completely independent. Is this possible? A: Technically it would be possible to write a DataObjectDecorator for the Member class, add a SubsiteID to each member, and then add a filter for that SubsiteID with augmentSQL(). And you need to modify the register form to consider the SubsiteID and hook into the authenticator. But it could very well be that there are a couple of other points you need to hook into to get this to work. So yes, it should be possible, but it is going to take a long time, and it will be a pain in the arse to get it working properly. You should carefully consider whether you really need it badly enough to go this way. It should be possible to just work around this by using groups, and setting group permissions. A: I know this is a pretty old thread, but in case someone stumbles upon it, this may be useful. There is another hack for this. /mysite/extensions/CustomLeftAndMain.php <?php class CustomLeftAndMain extends Extension { public function onAfterInit() { self::handleUser(); } public static function handleUser(){ $currentSubsiteID = Subsite::currentSubsiteID(); $member = Member::currentUser(); $memberBelongsToSubsite = $member->SubsiteID; if($memberBelongsToSubsite>0 && $currentSubsiteID!=$memberBelongsToSubsite){ Security::logout(false); Controller::curr()->redirect("/Security/login/?_c=1001"); } } } and in /mysite/_config.php add an extension LeftAndMain::add_extension('CustomLeftAndMain'); What the above code basically does is let the user log in no matter which subsite they belong to.
Then, whenever the application is initialised, it checks whether the logged-in user belongs to the current subsite (the handleUser method does this). If the user does not belong to the current site, they are logged out and then redirected to the login page. A: The description says (among other things): * *"The subsites module allows multiple websites to run from a single installation of SilverStripe, and share users, content, and assets between them." *"The branches can have separate users/admins, and information that is individual." If you don't have a common "headquarters", I'm not sure the module is right for you. Instead of hacking the module to do something it isn't intended to do, why not make separate installations?
{ "language": "en", "url": "https://stackoverflow.com/questions/7507871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Reassigning an ActiveRecord instance and corresponding foreign keys In Rails/ActiveRecord is there a way to replace one instance with another such that all the relations/foreign keys get resolved? I could imagine something like this: # setup customer1 = Customer.find(1) customer2 = Customer.find(2) # this would be cool customer1.replace_with(customer2) Supposing customer1 was badly configured and someone had gone and created customer2, not knowing about customer1, it would be nice to be able to quickly set everything to customer2. So this would also need to update any foreign keys: User belongs_to :customer Website belongs_to :customer Then any Users/Websites with a foreign key customer_id = 1 would automatically get set to 2 by this 'replace_with' method. Does such a thing exist? [I can imagine a hack involving Customer.reflect_on_all_associations(:has_many) etc.] Cheers, J A: Something like this could work, although there may be a more proper way: Updated: Corrected a few errors in the associations example. class MyModel < ActiveRecord::Base ... # if needed, force logout / expire session in controller beforehand. def replace_with(another_record) # handles attributes and belongs_to associations attribute_hash = another_record.attributes attribute_hash.delete('id') self.update_attributes!(attribute_hash) ### Begin association example, not complete. 
# generic way of finding model constants find_model_proc = Proc.new{ |x| x.to_s.singularize.camelize.constantize } model_constant = find_model_proc.call(self.class.name) # handle :has_one, :has_many associations have_ones = model_constant.reflect_on_all_associations(:has_one).find_all{|i| !i.options.include?(:through)} have_manys = model_constant.reflect_on_all_associations(:has_many).find_all{|i| !i.options.include?(:through)} update_assoc_proc = Proc.new do |assoc, associated_record, id| primary_key = assoc.primary_key_name.to_sym attribs = associated_record.attributes attribs[primary_key] = self.id associated_record.update_attributes!(attribs) end have_ones.each do |assoc| associated_record = self.send(assoc.name) unless associated_record.nil? update_assoc_proc.call(assoc, associated_record, self.id) end end have_manys.each do |assoc| associated_records = self.send(assoc.name) associated_records.each do |associated_record| update_assoc_proc.call(assoc, associated_record, self.id) end end ### End association example, not complete. # and if desired.. # do not call :destroy if you have any associations set with :dependents => :destroy another_record.destroy end ... end I've included an example for how you could handle some associations, but overall this can become tricky.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is a good standard practice to find distinct regex patterns from a single field in a table? I'm looking for a standard approach to generate all the unique patterns that can occur in a single field in an SQL 2000-2008R2 table. Is there a simple tool that will generate all the different static patterns? Also, is there a name for a distinct regular expression pattern? I'm trying to do this in T-SQL, but will also perform this in either C#, VB6, or even JavaScript. I've noticed that apostrophes can come into play, as well as : or other text. Is there a good way to detect when a user puts in various combinations? \d{1,1}\d{1,1}\d{1,1}[.]\d{1,1} would be the same pattern for 111.10 or 201.90. If I have other patterns such as "Refund" I want to see something like [A-z]{6,6}. Is there a command or tool for regular expressions that would generate these distinct but static patterns, so that when a new pattern crops up, I can date and time stamp when it occurs and have it be validated? When someone types 7 characters, I want the patterns that were caught under Refund to now also accept the pattern for "Balance". [A-z]{6,7} is now acceptable and won't cause the user to be alerted after validation has occurred by an admin. Thanks A: I don't think your question gets encountered enough to have a standard practice... While it would be possible to analyze two regular expressions and determine if they are equivalent, I don't think anyone has actually done so and made the results available...
{ "language": "en", "url": "https://stackoverflow.com/questions/7507879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Recommended development web server for Rails 3.1 and Ruby 1.9.2 I have been using Mongrel successfully with Rails 2.* and 3.0* development, with Ruby 1.8.7. I recently started working with Rails 3.1 and Ruby 1.9.2. I got my test app running with WEBrick. I don't like WEBrick. If I forget and simply close the WEBrick terminal window instead of going into the window and issuing a Control-C to WEBrick, the server port (3000) stays in use, and I can't run 'rails server' again until I log out everything and get WEBrick cleared out of the port table. Mongrel never had that problem. I do have a build problem with Mongrel and Ruby 1.9.2. I get multiple header files in the build, some referring to ruby-1.9.1 and some ruby-1.9.2. What a mess. What is the recommended development web server for my config, which is 32-bit Ubuntu Natty with Rails 3.1 and Ruby 1.9.2? A: WEBrick works well for me. The only problem I had is that it did not work well with HTTPS. The solution was to only run HTTPS on staging and production, not on the development machine. I use the dev machine only as the server, and develop on a Windows machine with Notepad++. I think it works well, after using a horrible Rails IDE. (I used to use Visual Studio and love it.) Access the web page through the local IP and port. It's a cheap, fast, easy solution for Windows users. I am running Ubuntu 11.04, Rails 3.0.7, Ruby 1.9.2 with RVM, and PostgreSQL. RVM is supposed to make life easy for Ubuntu users, because Ubuntu uses a different version of Ruby. To kill the server process running on port 3000: $ lsof -i :3000 $ kill -9 xxxx where xxxx is the PID returned by the first command. This could easily be combined into one line or an alias killserver or similar. A: Thanks for the various port-listener kill commands; I will construct something simple to clear WEBrick's irritating habit, and continue to use it. Chasing a development web server issue is low on my priority list; they should just work. 
You can see from my questions that my Linux skills don't go very deep into the kernel.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Ruby Selenium - How to check if text is present only once I am working on an application that reports status updates on certain services. I am using Ruby Selenium for testing the application. I want to test some updates that are just plain text - these updates should appear exactly once in the page. Thus, how can I test if a web page has some text only once? I am looking for something like assertTextPresentOnlyOnce. A: I believe you are looking for something like this: 20.times { break if (driver.find_element(:id, 'loginid').displayed? rescue false); sleep 1 } . . [rest of the code you want to execute] . . This also covers you for a page time-out, if the element you want is not present yet. As soon as Selenium finds the element you want, the loop will break and the code below will be executed.
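The waiting loop above does not address the "exactly once" part, and there is no built-in assertTextPresentOnlyOnce. One language-agnostic workaround is to count occurrences of the text in the page source yourself, sketched here in Python (with Ruby Selenium you would apply the same count to the page source string):

```python
def assert_text_present_once(page_source, text):
    """Fail unless `text` occurs exactly once (non-overlapping) in the page source."""
    count = page_source.count(text)
    assert count == 1, "expected %r exactly once, found it %d times" % (text, count)
```

With the Python selenium-webdriver bindings this would be called as, e.g., assert_text_present_once(driver.page_source, 'Service restored').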
{ "language": "en", "url": "https://stackoverflow.com/questions/7507885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: CUDA Runtime error when copying to element of jagged array On the host I have a jagged array implemented with a vector of vector of ints. To set up a jagged array on device, I started by allocating a pointer to a pointer of ints: int ** adjlist; // host pointer int ** d_adjlist; // device pointer Just to clarify some terminology, I am calling the array of pointers adjlist the "base", and the arrays that are pointed to adjlist[i] the "teeth". // this is the width of the base const int ens_size = 12; // allocate the base on the device cutilSafeCall( cudaMalloc( (void***)&d_adjlist, ens_size*sizeof(int*) ) ); // to store the contents of base on host (I can't cudaMalloc the teeth directly, as that would require dereferencing a pointer to device memory) adjlist = static_cast<int**>( malloc( ens_size*sizeof(int*) ) ); // copy the contents of base from the device to the host cutilSafeCall( cudaMemcpy( adjlist, d_adjlist, ens_size*sizeof(int*), cudaMemcpyDeviceToHost) ); This all works fine, now the base is done. The original vector of vectors I mentioned at the beginning is stored at nets[i]->adjlist. Now I allocate the teeth with the following loop: int N = 6; int numNets = 2; for(int i=0; i < numNets; ++i) { for(int j=0; j < N; ++j) { k = nets[i]->adjlist[j].size(); // allocate the "teeth" of the adjacency list cutilSafeCall( cudaMalloc( (void**)&(adjlist[N*i+j]), k ) ); } } My problem arises when I go to copy the teeth from the vector of vectors to the teeth on the device, here is the code: // this holds the tooth to be copied to the device int h_adjlist[Kmax]; // k <= Kmax for(int i=0; i < numNets; ++i) { for(int j=0; j < N; ++j) { k = nets[i]->adjlist[j].size(); // copy the adjacency list of the (Ni+j)-th node copy( nets[i]->adjlist[j].begin(), nets[i]->adjlist[j].end(), h_adjlist ); cutilSafeCall( cudaMemcpy( adjlist[N*i+j], h_adjlist, sizeof(int)*k, cudaMemcpyHostToDevice ) ); } } When I try to run the code, I get a Runtime API error: invalid argument. 
error on the line: cudaMemcpyHostToDevice ) ); At least that is the line where the cudaSafeCall function says the error occurs. Why is this being flagged as an invalid argument? Or if it is some other argument, which one?
{ "language": "en", "url": "https://stackoverflow.com/questions/7507887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: AS3: How to make this mouse position detection work properly? I'm new to AS3 so please bear with my basic questions. What I want to do is have a left arrow MC on the left side of the stage and a right arrow MC on the right side of the stage. When the mouse is over the left 1/3 of the stage, the left arrow appears; on the right 1/3 of the stage, the right arrow appears; but in the middle 1/3 the arrows fade out. I do NOT want to make large invisible MCs and detect the mouse movement that way. I just want it to be relative to the mouse position on the stage. I thought it would be very easy, but the eventListener fires every time the mouse moves, so the left and right arrow MC animation is constantly being triggered, and they look like they are "shaking", for lack of a better word. What I have so far is the following. Could someone please give me some help with this? var stagePos:int = stage.width/3; addEventListener(MouseEvent.MOUSE_MOVE, arrowDetectHandler); function arrowDetectHandler(e:MouseEvent) { var mouseArrow:int = mouseX; if (mouseArrow<stagePos) { arrowLeft_mc.gotoAndPlay("Show"); trace ("left arrow show"); } else if (mouseArrow>stagePos && mouseArrow<stagePos*2) { arrowLeft_mc.gotoAndPlay("Hide"); arrowRight_mc.gotoAndPlay("Hide"); trace ("nothing happens"); } else if (mouseArrow>stagePos*2) { arrowRight_mc.gotoAndPlay("Show"); trace ("right arrow show"); } } A: The if...else logic seems to be ok. The only thing which may cause the problem is mc.gotoAndPlay. 
Try to use the alpha property instead: var stagePos:int = stage.width/3; addEventListener(MouseEvent.MOUSE_MOVE, arrowDetectHandler); function arrowDetectHandler(e:MouseEvent) { var mouseArrow:int = mouseX; if (mouseArrow<stagePos) { arrowLeft_mc.alpha = 1; //alpha is 1, arrow is shown trace ("left arrow show"); } else if (mouseArrow>stagePos && mouseArrow<stagePos*2) { arrowLeft_mc.alpha = 0; //alpha is 0, arrow is hidden arrowRight_mc.alpha = 0; trace ("nothing happens"); } else if (mouseArrow>stagePos*2) { arrowRight_mc.alpha = 1; trace ("right arrow show"); } } A: The trouble is the speed with which your code is getting called repeatedly. If you are listening to MouseEvent.MOUSE_MOVE then it will happen way too fast for any 'gotoAndPlay' business to finish. Since you don't want to do the invisible MovieClips ( which gives you the very handy MouseEvent.ROLL_OVER and MouseEvent.ROLL_OUT events ) then you are left polling to evaluate coordinates like you have in your code. You need to remember the last 'answer' your code gave and then ignore the case that is already true next time. You'll have to bear with my preference for switch statements. var stagePos:int = stage.width/3; var _arrowShowing : int = 0; addEventListener(MouseEvent.MOUSE_MOVE, arrowDetectHandler); function arrowDetectHandler(e:MouseEvent) { var mouseArrow:int = mouseX; switch( true ) { case ( _arrowShowing != 1 && mouseArrow < stagePos ) : _arrowShowing = 1; arrowLeft_mc.gotoAndPlay("Show"); trace ("left arrow show"); break; case ( _arrowShowing != 0 && mouseArrow > stagePos && mouseArrow < stagePos * 2 ) : _arrowShowing = 0; arrowLeft_mc.gotoAndPlay("Hide"); arrowRight_mc.gotoAndPlay("Hide"); trace ("nothing happens"); break; case ( _arrowShowing != 2 && mouseArrow>stagePos*2 ) : _arrowShowing = 2; arrowRight_mc.gotoAndPlay("Show"); trace ("right arrow show"); break; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/7507895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Dynamically adding container to a dynamic container I have a loop that goes through data from a book and displays it. The book is not consistent in its layout, so I am trying to display it in two different ways. The first way (which works fine) is to load the text from that section into a panel and display it. The second way is to create a new panel (the panel creates fine) and then add collapsible panels (nested) to that panel. Here is the code from the else loop. else if (newPanel == false){ // simpleData is just for the title bar of the new panel // otherwise the panel has no content var simpleData:Section = new Section; simpleData.section_letter = item.section_letter; simpleData.letter_title = item.letter_title; simpleData.section_id = item.section_id; simpleData.title = item.title; simpleData.bookmark = item.bookmark; simpleData.read_section = item.read_section; var display2:readPanel02 = new readPanel02; //item is all the data for the new text display2.id = "panel"+item.section_id; //trace(display2.name);//trace works fine // set vars is how I pass in the data to the panel display2.setVars(simpleData); studyArea.addElement(display2); // displays fine newPanel = true; //this is where it fails var ssPanel:subSectionPanel = new subSectionPanel; //function to pass in the vars to the new panel ssPanel.setSSVars(item); //this.studyArea[newPanelName].addElement(ssPanel); this["panel"+item.section_id].addElement(ssPanel); The error I get is: ReferenceError: Error #1069: Property panel4.4 not found on components.readTest and there is no default value. I have tried setting the "name" property instead of the "id" property. Any help would be greatly appreciated. I am stumped. Thanks.
The readPanel creates the first sub panel, and for every subsequent need of a sub panel it references the other panel by name and calls the public function in the panel that creates the new sub panel. Here's the code to create the main panel: else if (newPanel == false){ var display2:readPanel02 = new readPanel02; studyArea.addElement(display2); display2.name = "panel"+item.section_id; display2.setVars(item); newPanel = true; } else{ var myPanel:readPanel02 = studyArea.getChildByName("panel"+item.section_id) as readPanel02; myPanel.addSubSection(item); } And here is the functions in the panel: public function setVars(data:Section):void { myS = data; textLetter = myS.section_letter; textLetterTitle = myS.letter_title; textSection = myS.section_id; textTitle = myS.title; myS.bookmark == 0 ? bookmarked = false : bookmarked = true; myS.read_section == 0 ? doneRead = false : doneRead = true; showTagIcon(); showReadIcon(); addSubSection(myS); } public function addSubSection(data:Section):void{ var ssPanel:subSectionPanel = new subSectionPanel; ssPanel.setSSVars(data); myContentGroup.addElementAt(ssPanel, i); i++; }
{ "language": "en", "url": "https://stackoverflow.com/questions/7507897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Usage of Factory / Abstract Factory Design Patterns in Unit Testing I am told that the Factory / Abstract Factory design patterns are very effective for writing unit test cases, but I haven't been able to find any tutorial which clearly demonstrates this. So it would be very helpful if somebody could point me to an existing tutorial or give me some pseudo code and an explanation here :)
You supply the factory with a special implementation of Dice through the test code: public class TestBackgammon { @Test public void shouldReturnDiceThrown() { SettableDice dice = new SettableDice(); Game game = new GameImpl(new CustomGameFactory(dice)); dice.setDice(new int[] {4, 5}); game.nextTurn(); assertArrayEquals(new int[] {4, 5}, game.diceThrown()); } } With this approach any concrete dependency can be injected for testing purposes. However, often the same can be achieved without an Abstract Factory, i.e. rather than injecting a factory, the dependency itself can be injected.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How can I find the co-occurrence of words in sentences with Perl? Is there anyone who can help me find the co-occurrence of words in a sentence? The words are listed in two different arrays; the idea is to find the co-occurrence of words from the two arrays in the sentences. example: #sentence my $string1 = "i'm going to find the occurrence of two words if possible"; my $string2 = "to find a solution to this problem"; my $string3 = "i will try my best for a way to this problem"; #arrays my @arr1 = qw(i'm going match possible solution); my @arr2 = qw(problem possible best); How can I write a program in Perl to search for the co-occurrence of two words (e.g. going and possible, since going is in @arr1 and possible is in @arr2 for $string1, meaning both words co-occurred in the first sentence)? The same holds for the second sentence, i.e. $string2 (since solution and problem co-occurred). But the third sentence is invalid, i.e. $string3 (since none of the words in the sentence occur in @arr1). Thank you A: #!/usr/bin/perl use warnings; use strict; my @strings = ( "i'm going to find the occurrence of two words if possible", "to find a solution to this problem", "i will try my best for a way to this problem" ); my @arr1 = qw(going match possible solution); my @arr2 = qw(problem possible best); my $pat1 = join '|', @arr1; my $pat2 = join '|', @arr2; foreach my $str (@strings) { if ($str =~ /$pat1/ and $str =~ /$pat2/) { print $str, "\n"; } } A: Take care of word boundaries so as to not match possible in impossible. 
#!/usr/bin/perl use Modern::Perl; my @strings = ( "i'm going to find the occurrence of two words if possible", "i'm going to find the occurrence of two words if impossible", "to find a solution to this problem", "i will try my best for a way to this problem" ); my @arr1 = qw(i'm going match possible solution); my @arr2 = qw(problem possible best); my $re1 = '\b'.join('\b|\b', @arr1).'\b'; my $re2 = '\b'.join('\b|\b', @arr2).'\b'; foreach my $str (@strings) { my @l1 = $str =~ /($re1)/g; my @l2 = $str =~ /($re2)/g; if (@l1 && @l2) { say "found : [@l1] [@l2] in : '$str'"; } else { say "not found in : '$str'"; } } output: found : [i'm going possible] [possible] in : 'i'm going to find the occurrence of two words if possible' not found in : 'i'm going to find the occurrence of two words if impossible' found : [solution] [problem] in : 'to find a solution to this problem' not found in : 'i will try my best for a way to this problem'
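The same word-boundary approach translates directly to other regex engines. Here is a Python sketch of the co-occurrence test from the answers above (the function name and list arguments are my own; the sample data comes from the question):

```python
import re

def co_occurs(sentence, words_a, words_b):
    """True when the sentence contains at least one whole word from each list."""
    def any_match(words):
        # \b stops 'possible' from matching inside 'impossible'
        pattern = r"\b(?:" + "|".join(re.escape(w) for w in words) + r")\b"
        return re.search(pattern, sentence) is not None
    return any_match(words_a) and any_match(words_b)
```

With the question's two arrays, the first two sentences co-occur and the third does not, matching the expected output of the Perl version.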
{ "language": "en", "url": "https://stackoverflow.com/questions/7507899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Can I use django as a front end for an already existing database? I have a database populated with data that I want to present as a website. It will be read-only and I was wondering if there was a standard way of presenting the data using django's forms and template syntax to make my job easier. I could code up a site with php but I was wondering if it was possible from an alternative language. I suppose this question could be extended to other web frameworks eg. ruby on rails. My background is with python so a django answer would be preferable. I am not concerned with administering the database as it is out of my hands (I only have read-only access anyway). Thanks A: "Integrating Django with a legacy database"
{ "language": "en", "url": "https://stackoverflow.com/questions/7507900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Resetting element CSS attributes on container resize event (onresize) I have a function in the onresize event of a wrapper div to resize an element. That function isn't called. Is the onresize event not available for divs? Here is the HTML. <div id="matting" onresize="resize_page();"> <!-- Begin page matting div --> <div id="page"> <!-- Begin page div --> </div> <!-- End page div --> </div> <!-- End page matting div --> <script type="text/javascript"> function resize_page() { alert ('resize_page'); $("#page").css('height','120%'); } </script> A: The onresize event occurs when a window or frame is resized, so it seems it isn't supported for divs: http://www.w3schools.com/jsref/event_onresize.asp You can use the jQuery resize plugin instead: http://benalman.com/projects/jquery-resize-plugin/ It starts an internal polling loop that periodically checks for element size changes and triggers the event when appropriate
{ "language": "en", "url": "https://stackoverflow.com/questions/7507902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Need to disable an array of ctrl keys for my site Especially Ctrl+I , which is "mail this page". I'm using wordpress self hosted. So far I've found this code, not sure how to implement it or if it's old. Please no plethora of reasons as to why you find this attempt pointless. A: Really, shouldn't answer, but: There's no reason for this, because there's always a very easy way around it. It'll probably take a lot more work than whatever you end up with's worth. If somebody has half of a computer literate mind, they probably can get past this without a problem at all. Summary: Don't bother A: You can't really do that. See this page for really good information on the portability of various key events in JavaScript across multiple browsers. You will see for one that each browser handles/responds to various key events in many different ways. Also, most of the default browser actions (e.g. Ctrl-F, Ctrl-S) cannot be canceled if you are capturing key events. You can still detect some of them and respond, but you can't actually stop the browser from displaying the search dialog or whatever specific action is to be performed by the key combination. Also, if someone really wants to take your page's HTML/JavaScript code or content, these methods won't stop them. The disable right click code from the link you referenced can prevent right click, but all someone has to do is disable javascript and it no longer works. A: Disabling hotkeys won't stop anyone from just selecting that option from the File menu. People will always find ways around these kinds of hacks. Turning off JavaScript, hacking the source with Firebug, Option+Click on a Mac, taking a screenshot, etc. They are completely ineffective against anyone even slightly determined to do what you don't want them to do.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-4" }
Q: same query, dramatic different performances on different data. MySQL So, I have this huge database with 40 Millions entries. the query is a simple (a_view is a view!) select * from a_view where id > x LIMIT 10000 this is the behavior I get: If x is a little number (int) the query is super fast. when x > 29 Millions the query starts to take minutes. if it is closer to 30 Millions it takes hours. and so on... why is that? what can I do to avoid this? I am using InnoDB as engine, tables have indexes. the value of the limit is a critical one, it affects performances. if it is small the query is always fast. but if x is close to 30Millions then I need to be very careful to set it not too big (less than 300 hundreds), and still it is quite slow, but doesn't take forever If you need more details, feel free to ask. EDIT: here is the explain +----+-------------+-------+--------+-----------------+---------+---------+---------------------+---------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+-----------------+---------+---------+---------------------+---------+-------------+ | 1 | SIMPLE | aH | index | PRIMARY | PRIMARY | 39 | NULL | 3028439 | Using index | | 1 | SIMPLE | a | eq_ref | PRIMARY | PRIMARY | 4 | odb.aH.albumID | 1 | Using where | | 1 | SIMPLE | aHT | ref | PRIMARY,albumID | albumID | 4 | odb.a.albumID | 4 | | | 1 | SIMPLE | t | eq_ref | PRIMARY | PRIMARY | 4 | odb.aHT.id | 1 | Using where | | 1 | SIMPLE | g | eq_ref | PRIMARY | PRIMARY | 4 | odb.t.genre | 1 | | | 1 | SIMPLE | ar | eq_ref | PRIMARY | PRIMARY | 4 | odb.t.artist | 1 | | +----+-------------+-------+--------+-----------------+---------+---------+---------------------+---------+-------------+ A: Here is a guess. Basically, your view is a select on some tables. The "id" could be a row number. 
The larger your "x" is, the more select rows need to be created (and discarded) before you can get whatever data you want. That is why your query slows down when your "x" increases. If this is true, one solution could be to create a table that contains the rownum and a primary key sorted by whatever "order by" you are using. Once you have that table, you can join it with the rest of your data and select your data window by a rownum range.
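The answer's "rownum plus primary key" table boils down to keyset (seek) pagination: instead of generating and discarding rows up to `x`, you seek past the last-seen key via an index. A runnable illustration of the seek query, using an in-memory sqlite3 table for portability (the question is about MySQL/InnoDB, but the principle is the same):

```python
import sqlite3

# Small in-memory stand-in for the 40M-row table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO log (id, payload) VALUES (?, ?)",
                 [(i, "row-%d" % i) for i in range(1, 1001)])

def page_after(last_id, size=100):
    """Keyset pagination: seek past last_id via the PK index, then read rows.

    Unlike skipping rows with a large offset, the work here does not grow
    with last_id, because the index jumps straight to the start of the page.
    """
    rows = conn.execute(
        "SELECT id FROM log WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, size)).fetchall()
    return [r[0] for r in rows]
```

Against a real view, the extra mapping table the answer describes is what makes `last_id` correspond to an indexed column rather than a computed row number.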
{ "language": "en", "url": "https://stackoverflow.com/questions/7507904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: XPath: Selecting a node based on another nodes value I am trying to use a single XPath expression to select a node that has a child node which matches another node in the document. A match would mean that ALL attributes of the node are the same. So if a node was being compared with several attributes doing individual attribute comparisons would be unmaintainable. As an example given the following: <Network> <Machines> <Machine Name = "MyMachine"> <Services> <ServiceDetails Description="MyService" Executable="c:\Myservice.exe" DisplayName="My Service" Version="5"/> </Services> </Machine> ... </Machines> <Services> <Service Name = "Service1"> <ServiceDetails Description="MyService" Executable="c:\Myservice.exe" DisplayName="My Service" Version="5"/> </Service> ... </Services> </Network> I want to get the service node from Services based on the ServiceDetails listed under MyMachine. I thought it would look something like: //Services/Service[ServiceDetails = //Machines/Machine[@Name='MyMachine']/ServiceDetails] but it doesn't seem to work. I suspect the '=' operator isn't handling the node comparison correctly. I think there are some XPath 2.0 Methods that might work but I am using .NET 4.0 (System.XML namespace) I do not know if I can use them. If XPath 2.0 methods would help here I would really appreciate an explanation on how to use them in .Net 4.0. 
Thanks A: Use: /*/Services/Service [ServiceDetails/@Description = /*/Machines/Machine[@Name = "MyMachine"] /Services/ServiceDetails/@Description ] A: Try this will validate all attribute values are equal in both the elements then it is true: /Network[(descendant::ServiceDetails/@Description = /Network//Machine[@Name = "MyMachine"]/Services/ServiceDetails/@Description) and (descendant::ServiceDetails/@Executable = /Network//Machine[@Name = "MyMachine"]/Services/ServiceDetails/@Executable) and (descendant::ServiceDetails/@DisplayName = /Network//Machine[@Name = "MyMachine"]/Services/ServiceDetails/@DisplayName) and (descendant::ServiceDetails/@Version = /Network//Machine[@Name = "MyMachine"]/Services/ServiceDetails/@Version)]
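Both XPath 1.0 answers above have to compare one attribute at a time, which is what the asker called unmaintainable. Outside XPath, the "ALL attributes equal" rule can be expressed as a single dict comparison. A sketch using Python's standard `xml.etree.ElementTree` (which supports the simple `[@Name='...']` predicate used here), with the XML taken from the question plus one hypothetical non-matching service added to show filtering:

```python
import xml.etree.ElementTree as ET

XML = """<Network>
  <Machines>
    <Machine Name="MyMachine">
      <Services>
        <ServiceDetails Description="MyService" Executable="c:\\Myservice.exe"
                        DisplayName="My Service" Version="5"/>
      </Services>
    </Machine>
  </Machines>
  <Services>
    <Service Name="Service1">
      <ServiceDetails Description="MyService" Executable="c:\\Myservice.exe"
                      DisplayName="My Service" Version="5"/>
    </Service>
    <Service Name="Service2">
      <ServiceDetails Description="Other" Executable="c:\\Other.exe"
                      DisplayName="Other" Version="1"/>
    </Service>
  </Services>
</Network>"""

def matching_services(root, machine_name):
    """Services whose ServiceDetails attributes all equal the machine's."""
    machine = root.find(".//Machine[@Name='%s']" % machine_name)
    wanted = machine.find(".//ServiceDetails").attrib
    # attrib is a dict, so == checks every attribute at once.
    return [svc for svc in root.findall("./Services/Service")
            if any(sd.attrib == wanted for sd in svc.findall("ServiceDetails"))]
```

Adding a new attribute to `ServiceDetails` then requires no change to the comparison, unlike the per-attribute XPath predicates.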
{ "language": "en", "url": "https://stackoverflow.com/questions/7507907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I see a `dependency:tree` for artifacts only used in non-default lifecycle steps? I have a Maven project with a number of dependencies. I can run mvn dependency:tree to get a dump of all the artifacts that I depend on, plus their transitive dependencies, etc, turtles all the way down. However, I can sometimes run a non-default lifecycle goal like rpm:rpm or javadoc:javadoc and it will complain about missing an artifact that wasn't listed in dependency:tree. Is there a way to tell Maven "calculate dependencies as if you were going to run goal X:Y, then give me a dependency tree for that"? Am I missing something? A: You're talking about running plugin goals, not lifecycle phases. Plugins have their own dependencies that are unrelated to the project dependencies. If you run Maven with verbose output (-X/--debug command line option), it will show you the dependency trees of all the plugins. This is the only way I've ever found to see a plugin's dependencies. The output is huge, and it will take you a while to orient yourself the first time through, but the trees are pretty obvious when you find them. Try searching for occurrences of the plugin's artifactId. That will get you where you want to be.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: See last comment or stream tag for each friend So right now this looks like: $mq = array( "usr"=>"SELECT uid2 FROM friend WHERE uid1 = me() LIMIT 70", "basics"=>"SELECT name, uid FROM user WHERE uid IN (SELECT uid2 FROM #usr)", "q1" =>"SELECT actor_id FROM stream_tag WHERE target_id = me() AND actor_id IN (SELECT uid2 FROM #usr)", "q2" =>"SELECT target_id FROM stream_tag WHERE target_id IN (SELECT uid2 FROM #usr) AND actor_id = me()" ); I'm trying to get interactions between each active users friends. Any better way to do this? A: I think you can achieve the same result with something more simple like: SELECT post_id, actor_id, target_id FROM stream_tag WHERE target_id=me() OR actor_id=me() hope this helps
{ "language": "en", "url": "https://stackoverflow.com/questions/7507909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Kinect Openni - I need to filter out random people from the active user I need major help!!! I am designing a game that will be at a tradeshow which means lots of people walking by. The problem it seems is that once kinect sees you "New user" even if you are just strolling by it seems to kill or mess up the ONISkeletonEvent.USER_TRACKING data when that user is "LOST". Please if you have any ideas I need them fast! This thing needs to ship end of week. I thought I could handle this by assigning an activeUserID and filtering based on that but it just doesn't care. When it "LOST USER" it is game over even if that was just someone watching and then moving away. PLEASE HELP!!! A: OpenNI identifies each user with a unique ID. Each event message comes with a user id which connects the message to a particular user. Once a user has its skeleton calibrated, you can skip further calibrations for other users, thus limiting the skeleton tracking to the first user who completed the skeleton calibration procedure.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Moving different projects tracked by git into one repository I have several closely related projects in a directory, and I want them all to be tracked with the same git repository. Right now I have two directories, thing1 and thing2, in a parent directory things. Each directory has its own .git. I want to have one .git in the things directory that includes all of the history from both thing1 and thing2. My question is essentially the same as this one, but with two (or in general any number of) directories instead of one. A: Use filter-branch to move all history in the repos to where you want them to be in the resultant repository. In one of the repos, add a remote that points to the other one. Do a fetch and you'll have both in the same one. You will have separate branches for the work in each. Do a merge and from then on, you will have changes to both tracked in a common branch - if that's what you want to do. Hope this helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Cocoa icon for file type? If I have a file, I can get the icon by doing something such as: NSImage *iconImage = [[NSWorkspace sharedWorkspace] iconForFile: @"myFile.png"]; But if I just wanted to get the icon for a specific file type (example the icon associated with png files, without having a "myFile.png" that already exists), i'm not sure how I can do that. Any suggestions are appreciated! A: You can first determine file type (UTI) and then pass it on to get icon: NSString *fileName = @"lemur.jpg"; // generic path to some file CFStringRef fileExtension = (__bridge CFStringRef)[fileName pathExtension]; CFStringRef fileUTI = UTTypeCreatePreferredIdentifierForTag(kUTTagClassFilenameExtension, fileExtension, NULL); NSImage *image = [[NSWorkspace sharedWorkspace]iconForFileType:(__bridge NSString *)fileUTI]; A: Underneath -[NSWorkspace iconForFile:] in the documentation is -[NSWorkspace iconForFileType:]. Have you tried that? A: Here is the Swift 5 version of Dave DeLong's answer: icon(forFile:) Returns an image containing the icon for the specified file. Declaration func icon(forFile fullPath: String) -> NSImage Parameters fullPath The full path to the file. icon(forFileType:) Returns an image containing the icon for files of the specified type. Declaration func icon(forFileType fileType: String) -> NSImage Parameters fileType The file type, which may be either a filename extension, an encoded HFS file type, or a universal type identifier (UTI). A: Here is the Swift 5 version of PetrV's answer: public extension NSWorkspace { /// Returns an image containing the icon for files of the same type as the file at the specified path. /// /// - Parameter filePath: The full path to the file. /// - Returns: The icon associated with files of the same type as the file at the given path. func icon(forFileTypeAtSamplePath filePath: String) -> NSImage? 
{ let fileExtension = URL(fileURLWithPath: filePath).pathExtension guard let unmanagedFileUti = UTTypeCreatePreferredIdentifierForTag(kUTTagClassFilenameExtension, fileExtension as CFString, nil), let fileUti = unmanagedFileUti.takeRetainedValue() as String? else { assertionFailure("Should've gotten a UTI for \(fileExtension)") return nil } return NSWorkspace.shared.icon(forFileType: fileUti) } }
{ "language": "en", "url": "https://stackoverflow.com/questions/7507917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: How to optimize this MySQL query? (moving window) I have a huge table (400k+ rows), where each row describes an event in the FX market. The table's primary key is an integer named 'pTime' - it is the time at which the event occurred in POSIX time. My database is queried repeatedly by my computer during a simulation that I constantly run. During this simulation, I pass an input pTime (I call it qTime) to a MySQL procedure. qTime is a query point from that same huge table. Using qTime, my procedure filters the table according to the following rule: Select only those rows whose pTime is a maximum 2 hours away from the input qTime on any day. ex. query point: `2001-01-01 07:00` lower limit: `ANY-ANY-ANY 05:00` upper limit: `ANY-ANY-ANY 09:00` After this query the query point will shift by 1 row (5 minutes), and a new query will be initiated: query point: `2001-01-01 07:05` lower limit: `ANY-ANY-ANY 05:05` upper limit: `ANY-ANY-ANY 09:05` This is the way I accomplish that: SELECT * FROM mergetbl WHERE TIME_TO_SEC(TIMEDIFF(FROM_UNIXTIME(pTime,"%H:%i"),FROM_UNIXTIME(qTime,"%H:%i")))/3600 BETWEEN -2 AND 2 Although I have an index on pTime, this piece of code significantly slows down my software. I would like to pre-process this statement for each value of pTime (which will later serve as an input qTime), but I cannot figure out a way to do this. A: Your query still needs to scan every value because you are testing the time-of-day against ranges that the index on pTime cannot span. You would need to separate the time of day into its own field and index that to gain the benefit of an index here. 
(note: answer was edited to fix my original misunderstanding of the question) A: If you rely only on time - I'd suggest you to add another column of time type with time fraction of pTime and perform queries over it A: DATETIME is the wrong type in this case because no system of DATETIME storage I know of will be able to use an index if you're examining only the TIME part of the value. The easy optimization is, as others have said, to store the time separately in a field of datatype TIME (or perhaps some kind of integer offset) and index that. If you really want the two pieces of information in the same column you'll have to roll your own data format, giving primacy to the time type. You could use a string type in the format HH:MM:SS YYYY-MM-DD or you could use a NUMERIC field in which the whole number part is a seconds-from-midnight offset and the decimal part a days-from-reference-date offset. Also, consider how much value the index will be. If your range is four hours, assuming equal distribution during the day, this index will return 17% of your database. While that will produce some benefit, if you're doing any other filtering I would try to work that into your index as well.
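As a reference for what a precomputed time-of-day column must capture, here is a Python sketch of the "within N hours on any day" predicate. One assumption to flag: this version wraps around midnight (23:30 and 00:30 count as one hour apart), which the question's `TIMEDIFF` of `%H:%i` strings does not do; timestamps are interpreted in UTC here.

```python
from datetime import datetime, timezone

DAY = 24 * 3600

def seconds_since_midnight(posix_ts):
    """Time-of-day component of a POSIX timestamp, in seconds (UTC)."""
    dt = datetime.fromtimestamp(posix_ts, tz=timezone.utc)
    return dt.hour * 3600 + dt.minute * 60 + dt.second

def within_window(p_time, q_time, hours=2):
    """True if the two times of day are within `hours` on the clock face."""
    diff = abs(seconds_since_midnight(p_time) - seconds_since_midnight(q_time))
    # Take the shorter way around the 24-hour circle.
    return min(diff, DAY - diff) <= hours * 3600
```

Storing `seconds_since_midnight(pTime)` as its own indexed column turns the per-row function call in the WHERE clause into an indexable range (or two ranges, when the window crosses midnight).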
{ "language": "en", "url": "https://stackoverflow.com/questions/7507918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: OpenGL point sprites with depth testing - a blending issue? I am rendering point sprites (using OpenGL ES 2.0 on iOS) as a user's drawing strokes. I am storing these points in vertex buffer objects such that I need to perform depth testing in order for the sprites to appear in the correct order when they're submitted for drawing. I'm seeing an odd effect when rendering these drawing strokes, as shown by the following screenshot: Note the background-coloured 'border' around the edge of the blue stroke, where it is drawn over the green. The user drew the blue stroke after the green stroke, but when the VBOs are redrawn the blue stroke gets drawn first. When it comes to draw the green stroke, depth testing kicks in and sees that it should be behind the blue stroke, and so does this, with some success. It appears to me to be some kind of blending issue, or to do with incorrectly calculating the colour in the fragment shader? The edges of all strokes should be transparent, however it appears that the fragment shader combines it with the background texture when processing those fragments. In my app I have created a depth renderbuffer and called glEnable(GL_DEPTH_TEST) using glDepthFunc(GL_LEQUAL). I have experimented with glDepthMask() to no avail. Blending is set to glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA), and the point sprite colour uses premultiplied alpha values. The drawing routine is very simple: * *Bind render-to-texture FBO. *Draw background texture. *Draw point sprites (from a number of VBOs). *Draw this FBO's texture to the main framebuffer. *Present the main framebuffer. EDIT Here is some code from the drawing routine. 
Setup state prior to drawing: glDisable(GL_DITHER); glEnable(GL_BLEND); glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); glEnable(GL_DEPTH_TEST); glDepthFunc(GL_LEQUAL); Drawing routine: [drawingView setFramebuffer:drawingView.scratchFramebuffer andClear:YES]; glUseProgram(programs[PROGRAM_TEXTURE]); [self drawTexture:[self textureForBackgroundType:self.backgroundType]]; glUseProgram(programs[PROGRAM_POINT_SPRITE]); // ... // Draw all VBOs containing point sprite data // ... [drawingView setFramebuffer:drawingView.defaultFramebuffer andClear:YES]; glUseProgram(programs[PROGRAM_TEXTURE]); [self drawTexture:drawingView.scratchTexture]; [drawingView presentFramebuffer:drawingView.defaultFramebuffer]; Thanks for any help. A: If you want to draw non opaque geometries you have to z-sort them from back to front. This has been the only way to get a proper blending for many years. These days there are some algorithms for order independent transparency like Dual Depth Peeling but they are not applicable to iOS.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: how to use monadic forms? I am implementing a "contact me" form that will send an email when it is submitted. I needed this form to emit custom HTML, so I ended up using monadic forms. The problem is that I do not know how to use a monadic form. the code is below. I have omitted the part that sends e-mail for brevity. the problem is that my form never validates correctly. the form result is never FormSuccess in my postContactR function. It seems that I do not initialize the form correctly when I call runFormPost inside postContactR. I always pass Nothing instead of the actual ContactData to contactForm and I do not know how to construct my ContactData from the request. Is my understanding of the problem correct? I am trying to work with poorly documented features. :) any help? EDIT: what looks strange is that validation errors do show up in the form if I submit an invalid form, so the request data does get read at some point. what does not work is that when there are no errors I do not get redirected to RootR {-# LANGUAGE OverloadedStrings #-} {-# LANGUAGE QuasiQuotes #-} {-# LANGUAGE TemplateHaskell #-} module Handler.Contact where import Control.Applicative ((<$>), (<*>)) import Data.Text (Text) import Foundation import Network.Mail.Mime data ContactData = ContactData { contactName :: Text , contactEmail :: Text , contactMessage :: Textarea } deriving Show contactForm d = \html -> do (r1, v1) <- mreq textField "Your name:" (contactName <$> d) (r2, v2) <- mreq emailField "Your e-mail:" (contactEmail <$> d) (r3, v3) <- mreq textareaField "Message:" (contactMessage <$> d) let views = [v1, v2, v3] return (ContactData <$> r1 <*> r2 <*> r3, $(widgetFile "contact-form")) getContactR :: Handler RepHtml getContactR = do ((_, form), _) <- runFormPost (contactForm Nothing) defaultLayout $ do setTitle "contact" addWidget $(widgetFile "contact") postContactR :: Handler RepHtml postContactR = do ((r, form), _) <- runFormPost (contactForm Nothing) case r of FormSuccess d -> 
do sendEmail d setMessage "Message sent" redirect RedirectTemporary RootR _ -> getContactR A: Are you including the html value in contact-form.hamlet? It's a nonce value. You'd get better debug information if you printed the value of r (in postContactR). I have on my writing TODO list to add a monadic form example, it should be up soon.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: python sort list short of renaming/fixing the logging module on the webservers... when i do a list.sort(), the list entries get placed in the following order: 2011-09-21 19:15:54,731 DEBUG __main__ 44: running www.site.com-110731.log.0.gz 2011-09-21 19:15:54,731 DEBUG __main__ 44: running www.site.com-110731.log.1.gz 2011-09-21 19:15:54,731 DEBUG __main__ 44: running www.site.com-110731.log.2.gz 2011-09-21 19:15:54,732 DEBUG __main__ 44: running www.site.com-110731.log.3.gz 2011-09-21 19:15:54,732 DEBUG __main__ 44: running www.site.com-110731.log.gz how would i sort a list, to get (ie the entry without a digit to be first): 2011-09-21 19:15:54,732 DEBUG __main__ 44: running www.site.com-110731.log.gz 2011-09-21 19:15:54,731 DEBUG __main__ 44: running www.site.com-110731.log.0.gz 2011-09-21 19:15:54,731 DEBUG __main__ 44: running www.site.com-110731.log.1.gz 2011-09-21 19:15:54,731 DEBUG __main__ 44: running www.site.com-110731.log.2.gz 2011-09-21 19:15:54,732 DEBUG __main__ 44: running www.site.com-110731.log.3.gz THANKS!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! A: You would probably want to write a custom comparator to pass to sort; in fact, you probably need to anyway, because you're likely getting a lexicographical sort order instead of the intended (I presume) numerical order. For instance, if you know that the filenames will only differ in those digits, you'd write a comparator that extracts those digits, converts them to int, and then compares based on that value. Taking your examples as canonical, your comparator might look something like this: import re def extract(s): r = re.compile(r'-(\d+)\.log\.((\d*)\.)?gz') m = r.search(s) file = int(m.group(1)) if not m.group(2): return (file, -1) index = int(m.group(3)) return (file, index) def comparator(s1, s2): return cmp(extract(s1), extract(s2))
Note that it takes advantage of the fact that using cmp on tuples works as we require.
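One caveat for modern readers: `cmp`-style comparators are Python 2 only (`cmp` was removed in Python 3). The same extraction idea works as a `key` function, which is also what `list.sort` prefers. A sketch assuming filenames shaped like the samples in the question (the regex anchors on the `-` before the date, an assumption based on those names):

```python
import re

_PAT = re.compile(r"-(\d+)\.log(?:\.(\d+))?\.gz$")

def log_key(entry):
    """(file number, rotation index); the bare .log.gz entry sorts first."""
    m = _PAT.search(entry)
    rotation = -1 if m.group(2) is None else int(m.group(2))
    return (int(m.group(1)), rotation)
```

Usage is `entries.sort(key=log_key)`; tuples still compare element-wise, so the file number dominates and the missing rotation index (mapped to -1) sorts ahead of `.0`.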
{ "language": "en", "url": "https://stackoverflow.com/questions/7507922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: facebook api - get email with I have for facebook: <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/> <title>head</title> </head> <body> <h1>head</h1> <p><fb:login-button autologoutlink="true"></fb:login-button></p> <p><fb:like></fb:like></p> <div id="fb-root"></div> <script> window.fbAsyncInit = function() { FB.init({appId: '111111111111', status: true, cookie: true, xfbml: true}); }; (function() { var e = document.createElement('script'); e.type = 'text/javascript'; e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js'; e.async = true; document.getElementById('fb-root').appendChild(e); }()); </script> </body> </html> what i must edit in this that get email address? i must use library facebook.php and login and logout with variables $loginUrl and $logoutUrl? I would like use <p><fb:login-button autologoutlink="true"></fb:login-button></p> A: There is no known way of getting email through xfbml. A: <fb:login-button autologoutlink="true" scope="email"></fb:login-button>
{ "language": "en", "url": "https://stackoverflow.com/questions/7507924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Using Manifesto gem with Sinatra I'm very new to Ruby and using Sinatra, mainly so that I can do some quick prototyping of web apps and some of the newer things available in HTML5. I am trying to use Manifesto to generate the application cache needed for an offline web app. I've followed the example listed on github, like so: require 'manifesto.rb' get '/manifest' do headers 'Content-Type' => 'text/cache-manifest' # Must be served with this MIME type Manifesto.cache end I am able to go to localhost:4567/manifest and I see what it generates just fine. What I am unclear on is what to do after that. My first attempt was to just view what was taking place in Web Inspector, but it doesn't appear that it recognizes any application cache at all. Next, I tried copying and pasting the info generated when I visited /manifest into an app.manifest file and referencing it in the <html> of my layout.erb. Still nothing. And, I figured that wasn't really what was intended, because the manifest wouldn't update as the gem implies. Can someone please help a newb understand what to do next? :) Thanks! Additionally, I am using the latest version of Rack which is supposed to support the mimetype for application cache. A: Let's see if I get it right here. You should be referencing the auto-generated /manifest page in your html tag instead of copying it to another file, right? <html manifest="/manifest"> And if you want it named something else, such as app.manifest it's as simple as changing that in your Sinatra code. get '/app.manifest' do I'm not entirely sure if this was what you asked however. Feel free to elaborate if needed. :)
{ "language": "en", "url": "https://stackoverflow.com/questions/7507926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: GWT UiHandler for g:Anchor is not firing I essentially have a menu where you can't click the headers, but when you hover over the menu item you can select from the drop down elements. Each of the elements are g:Anchor tags with ui fields associated to them. What I would expect to happen, is that when a user clicks on one of the drop down elements, my uiHandler for that field would fire. Even with debugging on and a break point it doesn't ever hit. Below you will find the main areas of concern in the code. I will only post my gwt imports in here, since it can be assumed the rest are correct due to zero errors. Here is an XML snippet to get an idea of what is going on <!DOCTYPE ui:UiBinder SYSTEM "http://dl.google.com/gwt/DTD/xhtml.ent"> <ui:UiBinder xmlns:ui='urn:ui:com.google.gwt.uibinder' xmlns:g='urn:import:com.google.gwt.user.client.ui' xmlns:cpcw='urn:import:org.collegeboard.pa.gwt.client.widget'> <cpcw:ExtendedHTMLPanel> <ul> <li ui:field="juniorHigh"><g:Anchor href="">Junior High</g:Anchor> <ul> <li><g:Anchor href="#" ui:field="juniorHighFall">Fall</g:Anchor></li> <li><g:Anchor href="#" ui:field="juniorHighSpring">Spring</g:Anchor></li> <li><g:Anchor href="#" ui:field="juniorHighSummer">Summer</g:Anchor></li> </ul> </li> </ul> </cpcw:ExtendedHTMLPanel> </ui:UiBinder> And then here are the pieces from my Java class imports import com.google.gwt.core.client.GWT; import com.google.gwt.dom.client.LIElement; import com.google.gwt.event.dom.client.ClickEvent; import com.google.gwt.uibinder.client.UiBinder; import com.google.gwt.uibinder.client.UiField; import com.google.gwt.uibinder.client.UiHandler; import com.google.gwt.user.client.ui.Anchor; Ui Fields @UiField protected LIElement juniorHigh; @UiField protected Anchor juniorHighFall; @UiField protected Anchor juniorHighSpring; @UiField protected Anchor juniorHighSummer; Ui Handler @UiHandler({"juniorHighFall","juniorHighSpring","juniorHighSummer"}) public void handleMenuClick(ClickEvent event) { 
DisplayUtil.displayAlertMessage(event.toString()); } Initialize @Override public void initializeBinder() { initWidget(ourUiBinder.createAndBindUi(this)); } Now, the UiHandler never actually gets hit. I have had the g:Anchor tags with and without the href, and have also tried them as g:Hyperlink and g:Button with no success. It is as if the UiHandler isn't even there. Any help would be greatly appreciated, and if you feel you need anything else for troubleshooting, please let me know!! Thanks :-) EDIT: Just to make sure that this was clear, the template ui.xml file that contained this ui.xml file was placing it in a div. When I replaced that with a SimplePanel, everything worked with the UiHandler. Thanks for the responses! A: This does and should work as you have it by default. I've noticed: * *Your code references "ourUiBinder" *Your controls are wrapped in an "ExtendedHTMLPanel" These seem like custom classes - perhaps something funky is going on in those implementations? Maybe you can try wrapping those anchors in a SimplePanel and using the default UiBinder and see if that works. If it does, chances are it's something to do with the custom controls you use. If it doesn't, then something strange is going on - maybe restart your IDE etc. A: The template xml file had each view contained in a div element. Filip-fku recommended I look into SimplePanels for the code I posted, which made me go look at the template file as well to see they were not GWT containers at all, they were divs. As soon as I made them SimplePanels my UiHandlers worked like a charm! Thank you Filip-fku! g:SimplePanel FTW!!!
{ "language": "en", "url": "https://stackoverflow.com/questions/7507928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How can I rotate a background image (custom bullet) 90 degrees What I am trying to do is make an accordion of sorts with a triangular bullet which rotates to point down when the <dd> slides down. Just to be clear, I want the bullet to animate, not just flip between two images. I know I can do this with css3 and a little class swapping if the bullets were their own html elements (<img>'s or whatever), but in the spirit of keeping content/style separation I would prefer to leave the background image. Another approach would probably be injecting some <img>'s into the dom with javascript, but I would rather have a more elegant solution if one exists. It would be even cooler if it could be done somehow with those cool css triangles I recently learned about. html <dl id="accordian"> <dt><a href="#" class="active"> Link to slide down dd </a></dt> <dd> Text to drop down </dd> <dt><a href="#"> Link to slide down dd </a></dt> <dd> Text to drop down </dd> <dt><a href="#"> Link to slide down dd </a></dt> <dd> Text to drop down </dd> </dl> css #accordian dt { background: url(../images/triangleBullet.png) left center no-repeat; padding-left: 20px; } jquery $(document).ready(function() { //Thermal Coatings page - Accordian $('#accordian dd').hide(); $('#accordian a.active').parent().next('dd').show(); $('#accordian a').click(function(){ $('#accordian a').removeClass('active'); $(this).addClass('active'); $('#accordian dd').slideUp(); $(this).parent().next('dd').slideDown(); }); }); A: Super modern browsers allowed? Say no more... check out this JSFiddle. Someone else already noted the technique -- here is a refined, cross-browser solution using your original HTML and JavaScript and only adding some CSS. It makes clever use of the ::before pseudo-element to place a piece of content to the left of the <dt> label. Furthermore, the "pointers" are generated with linear gradients (black 0%, black 50%, transparent 50%, transparent 100%) and the correct transform rotation.
Since they are half transparent, you should be able to get them to sit on top of any type background style for the <dt> (background-color:green in this example). The ::before element also has a transition applied to it for the open/close animation. As already mentioned the animation is not working in all browsers -- these browsers will adjust the rotation instantly. Want to get super-duper advanced? You could also make the pointers with an image-mask or you could even base64 encode a transparent pointer png image that will only sit in your CSS file, like so: -webkit-mask: 0 0 url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAB4AAABGCAYAAADb7SQ4AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAiNJREFUeNrEWb9LQlEUvj5BcHoQvMnVKXD1D3CLwqBJbHJsazQaWoSCxgbHJiMIAiNok6AhCDdXVycnJ8EQOgeOYaG+d39998KH+HyP753zzjnfd325xfdSgVeV8B6BScuEV0IRSbxHeCMk/AVFXCA8ScQKSXxPqK0fQBBfE5r/D+Y8VzUT9jb94DPimqRYIYkrhGcpKhhxIqTxrpNcExdlQJTTTnRJnCc8ykhUSOIOoZ71ZFfEZ4S2zgUu+rguxZRHEnPbfKRVsOtUl0RtYpOLTYljIS2Z3nVk2DY9SbNCEt8RDm0rUpe4La1jvXSqmtum72raZI24KuNQIYl/nSGSOJb0Jq61M0pxhjwK9304hUjHGSKILzc5Q5drUzttdYY+I97pDH1FzG0zNFUb04gTG4kzJS5kdYauiZtZnaFr4ooKsCIVaDHxKAQxt1NBnGIVHfGCcEQYh3jGU8KBfMKLiyM+lgzAq/qT0ArVTg+Ei1B9fEPoovV4fcfQd2HedScX39GprwGTNjJn0maTELN6IuSzECLB6T5x2eM66jQgnIeSxa60GnS3uL56tr7b1Ai0JPVwYi6yho2U2lgfKym19VxjMRHzEGbvS9K+RBPzetGVUpf29lZHSl2/DMnLvwh1ZMQrKW3Ic4fvJOZS6ZMQW5hpmpT63DvtlFLfm7bBNruM2C2yXb7y3U6ZpRS5P/4jpUjihRTbCJ3q1eL3GMMfAQYAJmB6SBO619IAAAAASUVORK5CYII=') no-repeat; That could give you more flexibility in the exact style of your pointer (since it is an image) without needing to actually host an image file. A: I don't think background animation is possible in terms of rotating and all, you can however play around with css3 transitions and the background-position property. but this would only reposition your arrow or bulletpoint image. rotating is something you could do when like you say, it is it's own element. 
example: <div class="element">I'm a block of text</div> css: .element{ background:url('http://lorempixum.com/50/200') no-repeat; background-position:-40px; border:1px solid black; display:block; padding-left: 60px; height:200px; width:240px; -webkit-transition:background-position .5s ease; } .element:hover{ background-position:-0px; } A: These folks lack imagination, don't let 'em bring you down! In CSS3 you can use the :before or :after pseudoelements to add decorative content like this without adding extra markup. Using your code I was able to come up with this fiddle. For whatever reason in Chrome and Safari the easing doesn't seem to happen, but it works in Firefox. But I'm sure with a little, er, fiddling you can fix it. This ought to get you started at least. Edit: Works with a background image too, but same easing issue. Edit 2: FYI this is a known issue in WebKit. You could fake it, though, by generating CSS to take care of the rotation. But if you're going that far, you could just use JS to put real elements into the DOM instead of pseudoelements. A: It looks like this is not possible yet, apparently. If anyone is interested, I made one of those css triangles rotate 90 degrees. I may use a variation of this concept... I'll put up the code when it's all sorted out. JSfiddle
{ "language": "en", "url": "https://stackoverflow.com/questions/7507929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: is it possible to use Validation Application Block with WCF REST service? I'm using WCF REST service with API key template and trying to enforce validation using Validation Application Block attribute validation. Here is my service: [ServiceContract] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)] [ValidationBehavior] public class Service1 { [FaultContract(typeof(ValidationFault))] [WebGet(UriTemplate = "ValidateStuff?text={text}")] public void ValidateStuff( [NotNullValidator] string text) { } and the global.asax from the template: public class Global : HttpApplication { void Application_Start(object sender, EventArgs e) { RegisterRoutes(); } private void RegisterRoutes() { // Edit the base address of Service1 by replacing the "Service1" string below RouteTable.Routes.Add(new ServiceRoute("Service1", new WebServiceHostFactory(), typeof(Service1))); } } Then I have a client sending a GET request: HttpWebRequest invokeRequest = WebRequest.Create(String.Concat(baseUrl, "/", uri, queryString)) as HttpWebRequest; invokeRequest.Method = Enum.GetName(typeof(Method), method); WebResponse response = invokeRequest.GetResponse(); Now the problem is that I get HTTP/1.1 500 Internal Server Error every time. If I remove the [ValidationBehavior] [FaultContract(typeof(ValidationFault))] and [NotNullValidator] attributes then everything works just fine. I checked the service trace and didn't see anything that could help me. A: The answer is that it is indeed possible! I found out what the problem was. I was missing references to: Microsoft.Practices.ServiceLocation.dll Microsoft.Practices.Unity.dll Microsoft.Practices.Unity.Interception.dll The strange thing is that I didn't see any indication of it in the trace log or in debug mode.
The only way I found out was by trying to do the validation manually in the operation implementation itself, like so: //at this point I got the exception saying that I'm missing the above references. var validationResult = Validation.Validate<T>(TInstance); Hope it helps somebody.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: PHP object oriented foreach loop I have a CodeIgniter query that brings back an array of objects. My query is querying multiple tables that all have the same field name. So I am using the select to give them aliases. Like this: SELECT t1.platform_brief b1, t2.platform_brief b2 where t1 and t2 are my two tables. My array when print_r'd returns objects like this: Array ( [0] => stdClass Object ( [b1] => Lorem ipsum [b2] => ) [1] => stdClass Object ( [b1] => [b2] => Sic dolor sit ) ) In my foreach, when I echo them, how do I do that? I tried something like this but it didn't work: <?php foreach ($lasers as $laser) { echo $laser->? ?> What do I put in place of the question mark? EDIT: Here is my CI query: $this->db->select('ils1.platform_brief b1, ils2.platform_brief b2'); $this->db->where('ils1.language', $lang); $this->db->or_where('ils2.language', $lang); $this->db->join('all_platform_ils975 ils2', 'ils2.laser_id = c.laser_id', 'left'); $this->db->join('all_platform_ils1275 ils1', 'ils1.laser_id = c.laser_id', 'left'); $this->db->join('all_lasers l', 'l.laser_id = c.laser_id', 'inner'); return $this->db->get($lang . '_configure_lasers c')->result(); A: Judging by your follow-up comment, it sounds like you want to output all the b1 fields for each result, then all the b2 fields. One way: foreach( array('b1', 'b2') as $field ) { foreach( $lasers as $laser ) { echo $laser->$field, '<br>'; } } A: You could iterate over each object: foreach ($lasers as $laser) { foreach($laser as $field) { if(!empty($field)) echo $field; } } However, instead you should change the database design / query design to better meet your needs.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Reading & printing binary data as it is from a binary file using C++ How can I read a binary file (I don't know its type or what is stored in it) and print out the 0s and 1s into a text file? #include <iostream> #include <fstream> #include <string.h> #include <iomanip> using namespace std; int main(int argc, char *argv[]) { char ch; bool x; int i = 0; if (argc < 3) { cout << endl << "Please put in all parameters" << endl; return 0; } char *c = new char[4]; ifstream fin(argv[1], ios_base::in | ios_base::binary); if (!fin.is_open()) { cout << "Error opening input file" << endl; return 0; } if (!strcmp(argv[2], "-file")) { ofstream fout(argv[3]); if (!fout.is_open()) { cout << "Error opening output file" << endl; return 0; } while (fin.read(c, sizeof ch)) { fout << c; } cout << "Contents written to file successfully" << endl; fout.close(); } else if (!strcmp(argv[2], "-screen")) { cout << endl << "Contents of the file: " << endl; while (fin.read((char *)&x,sizeof x)) { cout << x; } cout << endl; } else { cout << endl << "Please input correct option" << endl; return 0; } fin.close(); return 0; } A: Yes, just use fstreams and open the file with the binary flag, then you can handle the resource like a normal fstream and stream it into a text file. If you want to convert the 0 and 1 to chars it will get a bit more complicated. The easiest way for that will most likely be to buffer the bytes in unsigned chars like here and then try to manipulate those via sprintf. 
fstream API sprintf API A: For better or worse, I find this easiest done with printf: #include <fstream> #include <cstdio> static const std::size_t blocks = 256; char buf[blocks * 16]; std::ifstream infile(filename, std::ios::binary); do { infile.read(buf, blocks * 16); for (std::size_t i = 0; i * 16 < infile.gcount(); ++i) { for (std::size_t j = 0; j < 16 && 16 * i + j < infile.gcount(); ++j) { if (j != 0) std::printf(" "); std::printf("0x%02X", static_cast<unsigned char>(buf[16 * i + j])); } std::printf("\n"); } } while (infile); I chose an arbitrary line length of 16 bytes per line for this, and I'm reading 4kiB at a time -- this can be tuned for maximum efficiency. It's important to use gcount() to get the actual number of read bytes, since the last round of the loop may read less than 4kiB. Note that this is essentially equivalent to the hexdump utility. If you wanted actual binary output, you could just write a little helper routine for that in place of the printf. A: As I understand it, you don't need hexadecimal output but binary (0s and 1s), though I don't understand why. There's no io manipulator to output binary data. You need to do this yourself using bitwise operators. Something like this: char c; // contains byte read from input for (int i = 0; i != 8; ++i) { std::cout << ((c & 0x80) ? '1' : '0'); // grab the current most significant bit c <<= 1; // left shift by 1 bit } A: The easiest way is probably to create an std::bitset from the inputs, and print them out. Ignoring error checking and such, a simple version would come out something like this: #include <bitset> #include <fstream> #include <iostream> #include <ios> #include <iomanip> int main(int argc, char **argv) { std::ifstream infile(argv[1], std::ios::binary); char ch; unsigned long count = 0; while (infile.read(&ch, 1)) { if (count++ % 4 == 0) std::cout << "\n" << std::setw(6) << std::hex << count; std::cout << std::setw(10) << std::bitset<8>(ch); } return 0; }
I'm only displaying four bytes per line above (8 would fit in 80 columns only by omitting the offset at the beginning of the line, which would be a serious loss, at least in my experience), so an entire screen will typically be only ~200 bytes or so. In hexadecimal, you can (and usually do) display 16 bytes per line, in both hex and (for printable characters) as themselves as well. I should add, however, that I've been using hex dumps for decades now, so my opinion could be based at least partly on bias rather than real facts.
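The mask-and-shift idea from the answers above can be checked in isolation. Here is a minimal sketch in C++ (the `byte_to_bits` helper name is my own, not from any answer) that renders one byte as eight '0'/'1' characters, most significant bit first:

```cpp
#include <string>

// Hypothetical helper (my own name, not from the answers): render one byte
// as eight '0'/'1' characters, most significant bit first, using the same
// mask-and-shift approach discussed above.
std::string byte_to_bits(unsigned char c) {
    std::string bits;
    for (int i = 0; i != 8; ++i) {
        bits += (c & 0x80) ? '1' : '0';          // test the current MSB
        c = static_cast<unsigned char>(c << 1);  // move the next bit into MSB position
    }
    return bits;
}
```

Feeding each byte read from the file through a helper like this produces exactly the 0/1 text output the question asks for; for instance `byte_to_bits(0xA5)` yields `"10100101"`.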
{ "language": "en", "url": "https://stackoverflow.com/questions/7507935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Datepicker events handler I have got this addon to my datepicker but I don't want it to give me an alert box. I want it to show a div under the datepicker with the event on that day. I don't know how to change the if (event) to a <div> instead of an alert. $(function(){ var events = [ { Title: "Five K for charity", Date: new Date("09/13/2011") }, { Title: "Dinner", Date: new Date("09/25/2011") }, { Title: "Meeting with manager", Date: new Date("09/01/2011") } ]; $("#datepicker").datepicker({ beforeShowDay: function(date) { var result = [true, '', null]; var matching = $.grep(events, function(event) { return event.Date.valueOf() === date.valueOf(); }); if (matching.length) { result = [true, 'highlight', null]; } return result; }, onSelect: function(dateText) { var date, selectedDate = new Date(dateText), i = 0, event = null; while (i < events.length && !event) { date = events[i].Date; if (selectedDate.valueOf() === date.valueOf()) { event = events[i]; } i++; } if (event) { alert(event.Title); } } }); }); A: Well, I don't know your markup (HTML), but here is how you can do it then... instead of alert(event.Title); try this: var eventContainer = ($('#eventContainer').length) ? $('#eventContainer').empty() : $('<div id="eventContainer"></div>'); var eventItem = $('<div/>'); eventItem.text(event.Title); eventContainer.append(eventItem); $("#datepicker").after(eventContainer); Edit: I added the code to a jsFiddle so you can test around with it yourself http://jsfiddle.net/ambiguous/TgZQJ/11/
{ "language": "en", "url": "https://stackoverflow.com/questions/7507936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: mkbundle produces non-functional console exe I can successfully build a bundled exe from my MonoDevelop C# project, but when I run the bundle, it doesn't do anything; execution is terminated immediately and silently. What am I doing wrong? I'm using Mono 2.10.5 on 64-bit Windows 7 with cygwin generally following these instructions, but with these modifications. The exact steps I follow are: * *Create new C# console project in MonoDevelop (contains only Console.WriteLine ("Hello World!");) *Change target to Release *Build all *In cygwin: mkbundle -c -o host.c -oo bundle.o --deps BundleTest.exe *Edit host.c, add #undef _WIN32 after #endif after #include <windows.h> *In cygwin: gcc -mno-cygwin -o test.exe -Wall host.c `pkg-config --cflags --libs mono-2|dos2unix` bundle.o *In command prompt: test.exe *In command prompt: BundleTest.exe In step 7, the text "Hello World!" is printed in the command prompt as expected. In step 8, nothing is printed in the command prompt; the exact same response can be elicited by typing rem and pressing enter. EDIT: Someone else edited this question to switch steps 7 and 8, which substantively changes the description of the observed behavior. I don't know why they felt justified in doing this since they were not the ones making the observations, but it is so far removed from the time I was thinking about this problem that I don't want to just switch them back the way they were. So, note that the last paragraph before this edit probably doesn't accurately reflect my original observations any more. A: For building a console application you should remove the -mwindows flag from /lib/pkgconfig/mono-2.pc
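The fix works because -mwindows links the executable against the Windows GUI subsystem, where stdout is not attached to the console. Removing the flag from the pkg-config file can be scripted; this is only a sketch operating on a scratch file with stand-in contents, since the real path and contents of mono-2.pc vary between Mono installs:

```shell
# Sketch only: on a real install, point PC_FILE at your Mono pkg-config
# file (the /lib/pkgconfig/mono-2.pc path varies between setups).
PC_FILE=./mono-2.pc

# Stand-in contents so the edit can be demonstrated on a scratch file.
printf 'Libs: -L${libdir} -lmono-2.0 -mwindows -lws2_32\n' > "$PC_FILE"

cp "$PC_FILE" "$PC_FILE.bak"        # keep a backup of the original
sed -i 's/ -mwindows//g' "$PC_FILE" # drop the GUI-subsystem flag
cat "$PC_FILE"
```

After the edit, the `pkg-config --libs mono-2` output used in the gcc step no longer carries -mwindows, so the bundled exe is linked as a console program.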
{ "language": "en", "url": "https://stackoverflow.com/questions/7507944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Where is a good place to find Ruby Selenium tutorials and/or documentation that explains what to use for what element? Watir has a great place to find this type of information such as: * *Watir Wiki *Watir Documentation Anything that is similar to these websites for Selenium would be much appreciated. Thanks! A: From the horse's mouth: * *http://seleniumhq.org/docs/05_selenium_rc.html *http://release.seleniumhq.org/selenium-core/1.0/reference.html Also... if you use the Firefox plugin you'll get inline documentation under a tab called "Reference" (a sister tab of the "Log" tab)
{ "language": "en", "url": "https://stackoverflow.com/questions/7507947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: jquery Validation Plugin I’ve looked over the documentation for the validation plugin and I can’t figure out for the life of me what is wrong with my form/javascript. I load the plugin after I load jQuery, and to my knowledge I didn’t leave anything out. But the page bypasses the JavaScript and goes straight to the action page. I am using jQuery 1.6.2 Any ideas why? JavaScript: $("#regForm").validate({ rules: { pass: "required", passChk: { equalTo: "#pass" } }, submitHandler: function(form) { form.submit(); } }); HTML/CFML: <cfform type="actionForm" action="Action.cfm" id="regForm" method="post" data-ajax="false"> <label for="email">E-mail</label> <cfinput type="text" label="E-mail" name="email" id="email" class="required email"><br> <label for="pass">password</label> <cfinput type="password" name="pass" id="pass" class="required" ><br> <label for="passChk">enter password again</label> <cfinput type="password" name="passChk" id="passChk" class="required" > <br> <label for="fName">First Name</label> <cfinput type="text" name="fName" id="fName" class="required"><br> <label for="lName">Last Name</label> <cfinput type="text" name="lName" id="lName" class="required"><br> <cfinput type="submit" name="submit" value="register" data-inline="true"> </cfform> A: This is working fine for me. I wonder though. I did have a problem when I tried to use the jquery validate JS file from the CDN on the demo pages. http://dev.jquery.com/view/trunk/plugins/validate/jquery.validate.js When I tried to use this one I would sometimes get a 403. So sometimes the validation would work and sometimes not. When I switched to the proper CDN http://ajax.aspnetcdn.com/ajax/jquery.validate/1.8.1/jquery.validate.js It worked fine every time. I wonder if you are doing the same thing. Also, make sure you are not running this script until the DOM is ready.
<script> $(function(){ $("#regForm").validate({ rules: { pass: "required", passChk: { equalTo: "#pass" } }, submitHandler: function(form) { $(form).submit(); } }); }); </script>
{ "language": "en", "url": "https://stackoverflow.com/questions/7507949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you query by Date and Date Range in Mongo What would be the equivalent of this SQL statement? Select * from example WHERE date = '2011-09-21' The record is stored with a MongoDate field. I would also like to know the syntax of the between query. A: What would be the equivalent of this SQL statement? Select * from example WHERE date = '2011-09-21' db.example.find({date: dateobject}); In the case of MongoDB + PHP, you'll want to use the MongoDate class to represent those dates. Other language drivers typically just use the language's date construct. I would also like to know the syntax of the between query. MongoDB does not have a between clause. To use "Greater than" you will need to use one of the query operators. See here for details. Simple example: db.example.find({ date: { $gt: lowdate, $lt: highdate } });
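Because a stored date usually carries a time component, a "same calendar day" match in practice becomes a half-open range rather than strict equality. Here is a sketch in shell/driver-style JavaScript; the `dayRangeQuery` helper is my own illustration, not a driver API:

```javascript
// Hypothetical helper (not part of any driver): build a filter matching
// every timestamp on the given UTC calendar day, as the half-open range
// [midnight, next midnight).
function dayRangeQuery(dayStr) {
  var low = new Date(dayStr + "T00:00:00Z");
  var high = new Date(low.getTime() + 24 * 60 * 60 * 1000);
  return { date: { $gte: low, $lt: high } };
}

var byDay = dayRangeQuery("2011-09-21");
// byDay would then be passed to find(), e.g. db.example.find(byDay)
```

Using $gte for the lower bound and $lt for the upper bound keeps midnight of the target day in the result while excluding midnight of the next day.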
{ "language": "en", "url": "https://stackoverflow.com/questions/7507950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to create a common WPF base window style? Is there any recommended way with WPF to create a common window style to be used across an application? I have several dialogs that appear in my app, and I would like them all to be styled the same (same window border, ok/cancel button position, etc) and simply have different 'content' in each, depending on the situation. So, one dialog might have a list box in it, one might have a textbox, and so on. I understand how to make base .cs usercontrol files, but I can't for the life of me work out a good way to create a single window which can host different content when launched? Cheers, rJ A: To add to H.B.'s very helpful post, you may want to connect your event handlers in the loaded event as he's done but instead of using anonymous methods or lambda expressions, consider connecting them to protected virtual methods which can be overridden in the derived class should the functionality need to vary. In my case, I created a base data entry form which has buttons for saving and cancelling: public DataEntryBase() { Loaded += (_, __) => { var saveButton = (Button)Template.FindName("PART_SaveAndCloseButton", this); var cancelButton = (Button)Template.FindName("PART_CancelButton", this); saveButton.Click += SaveAndClose_Click; cancelButton.Click += Cancel_Click; }; } protected virtual void SaveAndClose_Click(object sender, RoutedEventArgs e) { DialogResult = true; } protected virtual void Cancel_Click(object sender, RoutedEventArgs e) { } The save functionality is then overridden in each derived class to save the specific entity: protected override void SaveAndClose_Click(object sender, RoutedEventArgs e) { if (Save()) { base.SaveAndClose_Click(sender, e); } } private bool Save() { Contact item = contactController.SaveAndReturnContact((Contact)DataContext); if (item!=null) { DataContext = item; return true; } else { MessageBox.Show("The contact was not saved, something bad happened :("); return false; } } A: One way to do it 
would be a new custom control, let's call it DialogShell: namespace Test.Dialogs { public class DialogShell : Window { static DialogShell() { DefaultStyleKeyProperty.OverrideMetadata(typeof(DialogShell), new FrameworkPropertyMetadata(typeof(DialogShell))); } } } This now needs a template which would normally be defined in Themes/Generic.xaml, there you can create the default structure and bind the Content: <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:Test.Dialogs"> <Style TargetType="{x:Type local:DialogShell}" BasedOn="{StaticResource {x:Type Window}}"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type local:DialogShell}"> <Grid Background="{TemplateBinding Background}"> <Grid.RowDefinitions> <RowDefinition /> <RowDefinition Height="Auto" /> </Grid.RowDefinitions> <!-- This ContentPresenter automatically binds to the Content of the Window --> <ContentPresenter /> <StackPanel Grid.Row="1" Orientation="Horizontal" Margin="5" HorizontalAlignment="Right"> <Button Width="100" Content="OK" IsDefault="True" /> <Button Width="100" Content="Cancel" IsCancel="True" /> </StackPanel> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> </ResourceDictionary> This is just an example, you probably want to hook up those buttons with custom events and properties you need to define in the cs-file. 
This shell then can be used like this: <diag:DialogShell x:Class="Test.Dialogs.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:diag="clr-namespace:Test.Dialogs" Title="Window1" Height="300" Width="300"> <Grid> <TextBlock Text="Lorem Ipsum" /> </Grid> </diag:DialogShell> namespace Test.Dialogs { public partial class Window1 : DialogShell { public Window1() { InitializeComponent(); } } } Event wiring example (not sure if this is the "correct" approach though) <Button Name="PART_OKButton" Width="100" Content="OK" IsDefault="True" /> <Button Name="PART_CancelButton" Width="100" Content="Cancel" IsCancel="True" /> namespace Test.Dialogs { [TemplatePart(Name = "PART_OKButton", Type = typeof(Button))] [TemplatePart(Name = "PART_CancelButton", Type = typeof(Button))] public class DialogShell : Window { //... public DialogShell() { Loaded += (_, __) => { var okButton = (Button)Template.FindName("PART_OKButton", this); var cancelButton = (Button)Template.FindName("PART_CancelButton", this); okButton.Click += (s, e) => DialogResult = true; cancelButton.Click += (s, e) => DialogResult = false; }; } } } A: You can define a style in App.Xaml that targets all windows. This is a sample of how your App.Xaml might look: <Application x:Class="ES.UX.App" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" StartupUri="Views/MainWindow.xaml"> <Application.Resources> <Style TargetType="Window"> <Setter Property="WindowStyle" Value="ToolWindow" /> </Style> </Application.Resources> </Application> Then for more advanced scenarios you may need to set the ControlTemplate for your Window. A: Create a custom object which is derived from the Window class: http://maffelu.net/wpf-window-inheritance-problems-and-problems/
1) create a wpf xaml and xaml.cs file that has all the desired components wanted for a new form added to your application. In my case I wanted the title and toolbar buttons. 2) test the new xaml files through the current system flow. 3) copy xaml / xaml.cs to temp location and rename both the filenames to something you want to be recognized as a good template name. a) Change first line within xaml file to -- Window x:Class="$rootnamespace$.$safeitemname$" b) Make 3 changes within xaml.cs file to ensure the new name will be copied when using the template - -- namespace $rootnamespace$ (//dynamic namespace name) -- public partial class $safeitemname$ (//dynamic class name) -- public $safeitemname$() (//dynamic constructor name) 4) Now create a vstemplate file: ie. MyTemplate.vstemplate with the following content: <VSTemplate Version="3.0.0" xmlns="http://schemas.microsoft.com/developer/vstemplate/2005" Type="Item"> <TemplateData> <DefaultName>WpfFormTemplate.xaml</DefaultName> <Name>WpfFormTemplate</Name> <Description>Wpf/Entities form</Description> <ProjectType>CSharp</ProjectType> <SortOrder>10</SortOrder> <Icon>Logo.ico</Icon> </TemplateData> <TemplateContent> <References> <Reference> <Assembly>System.Windows.Forms</Assembly> </Reference> <Reference> <Assembly>Workplace.Data.EntitiesModel</Assembly> </Reference> <Reference> <Assembly>Workplace.Forms.MainFormAssemb</Assembly> </Reference> </References> <ProjectItem SubType="Designer" TargetFileName="$fileinputname$.xaml" ReplaceParameters="true">WpfFormTemplate.xaml</ProjectItem> <ProjectItem SubType="Code" TargetFileName="$fileinputname$.xaml.cs" ReplaceParameters="true">WpfFormTemplate.xaml.cs</ProjectItem> </TemplateContent> </VSTemplate> 5) Once you have all these files, zip the files and place the zip file under the ....\Documents\Visual Studio 2012\Templates\ItemTemplates\WPF directory. Now you can go into VS2012 and use the ADD\New feature to see the template, select and rename as in the normal process. 
The template can be used in the same way for VS2010 by placing the zip file under the 2010 Templates Wpf directory. The Logo file should be included in the zip file as well or if you don't have a file then remove that line from the MyTemplate.vstemplate file.
{ "language": "en", "url": "https://stackoverflow.com/questions/7507953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: join mySQL query, two of the same column names, how to display data? I have created my first join query to display some Wordpress content on a home page of a website, so far so good, but there are two columns in the joined tables with name "ID". I lost the ability to use one of the column names as a variable later on (in a link). "echo $row['ID']" does not display the value anymore (used to, before tables were joined). "echo $row['wp_posts.ID']" does not seem to do anything either. What should I do? $query = "SELECT wp_posts.ID, wp_posts.post_title, wp_posts.post_excerpt, wp_posts.post_author, wp_users.ID, wp_users.display_name FROM wp_posts, wp_users WHERE wp_posts.post_status = 'publish' AND wp_users.ID = wp_posts.post_author ORDER BY wp_posts.post_date DESC LIMIT 2"; $result = mysql_query($query) or die(mysql_error()); ?> <?php while ($row = mysql_fetch_array($result)){ echo "<h2>" . $row['post_title']; ?></h2> <p class="post_author"> by <?php echo $row['display_name'];?></p> <?php $text = explode("***",wordwrap(strip_tags($row['post_excerpt']),150,"***",true)); echo " ".$text[0]." "; ?>... <a href="http://www.pihl.ca/kelownalawyers/?p=<?php echo $row['ID'];?>"> READ MORE</a><p /> <?php } ?> A: SELECT wp_posts.ID AS wpp_id, wp_users.ID AS wpu_id and then reference them with $row['wpp_id'] and $row['wpu_id'] in your loop A: If you want to select all but still resolve the naming conflict with certain fields, this will work too: SELECT *, wp_posts.ID AS wpp_id, wp_users.ID AS wpu_id FROM... This allows you to grab all columns, with the addition of 2 new columns which can then be grabbed from the assoc array. A: Give one of the ID fields an alias using the MySQL AS keyword (MySQL SELECT Docs)
{ "language": "en", "url": "https://stackoverflow.com/questions/7507956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: maintain state between custom tag calls? Is there a way to maintain state between tag calls? I need to store the last item passed to a tag that I have created. It appeared as if I could use the context to do this, but it doesn't seem to work. Here is my code:

@register.simple_tag(takes_context=True)
def date_divider(context, date):
    if 'last_date' not in context or context['last_date'] != date:
        # display new date header
        context['last_date'] = date
        return date_header

The problem is that a new date header is always created, even when the date passed in matches the date stored in the context. I'm guessing I'm using the context wrong here. Is there a way to store this last date in the context, or is there a better way to do this?

A: It seems likely that the context into which you are entering last_date no longer exists the second time you reach this tag (for instance, perhaps that context has been popped already?). A (sort of hackish) solution is to be sure that you insert last_date into the "highest" context:

if 'last_date' not in context.dicts[0] or context.dicts[0]['last_date'] != date:
    context.dicts[0]['last_date'] = date

This kind of approach is often needed when the tags that you are writing aren't "nested", I've found. Incidentally, I've also found that tags of this sort are themselves often a hack! (Not to say this particular case is, just that my cases have been.)
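Why writing into dicts[0] survives can be seen with a toy model of Django's context stack. This is a simplified sketch, not the real django.template.Context API — the FakeContext class and the header markup are invented for illustration:

```python
# A minimal stand-in for Django's template Context: a stack of dicts where
# block tags push and pop scopes. A value written into a scope that is later
# popped is lost; a value written into dicts[0] (the root scope) survives.
class FakeContext:
    def __init__(self):
        self.dicts = [{}]  # dicts[0] is the root scope

    def push(self):
        self.dicts.append({})

    def pop(self):
        self.dicts.pop()

def date_divider(context, date):
    root = context.dicts[0]
    if 'last_date' not in root or root['last_date'] != date:
        root['last_date'] = date
        return "<h3>%s</h3>" % date  # new date header
    return ""  # same date as last call: no header

ctx = FakeContext()
ctx.push()                              # e.g. entering a {% for %} loop scope
first = date_divider(ctx, "2011-09-21")
ctx.pop()                               # loop scope gone; dicts[0] kept the value
ctx.push()
second = date_divider(ctx, "2011-09-21")
print(first, repr(second))              # header printed once, then ''
```

Had date_divider written into the pushed scope instead of dicts[0], the second call would see no last_date and emit a duplicate header — which is exactly the symptom described in the question.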
{ "language": "en", "url": "https://stackoverflow.com/questions/7507958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: why does an error result from zend autoloader testing if a non-existent class exists? Say I've registered the extra namespace "Tracker_" in the config file for some classes I've written, using

autoloadernamespaces[] = "Tracker_"

Things with this namespace and the autoloader work as expected, except when I am testing error handling. When I test whether a non-existent class exists, using

class_exists("Tracker_DoesNotExist");

it throws an exception:

include_once(Tracker/DoesNotExist.php): failed to open stream: No such file or directory
/path/Zend/Loader.php:146
/path/Zend/Loader.php:146
/path/Zend/Loader.php:94
/path/Zend/Loader/Autoloader.php:479
/path/Zend/Loader/Autoloader.php:124
/other/path/TrackablesMapper.php:40  // line referenced above

Meanwhile, the same class_exists function works for every other case I've tested, i.e.

class_exists("Application_ExistingClass");    // returns true
class_exists("Application_NonExistingClass"); // returns false
class_exists("Tracker_ExistingClass");        // returns true

Am I doing something wrong?

A: When a Zend Framework application runs, it registers its autoloader using spl_autoload_register (http://php.net/spl_autoload_register). From then on, any call to class_exists will use Zend's autoloader (by default, class_exists tries to load the class). The reason you get the error when using class_exists with Tracker_ and not Application_ is that the Application namespace's autoloading is handled by Zend_Application_Module_Autoloader (Zend_Loader_Autoloader_Resource), which acts slightly differently from the Zend_Loader autoloader. Zend_Loader performs some basic security checks and then simply tries to include the file in question. The resource autoloader instead first checks whether the file to be autoloaded is readable, and if it is not, it does not try to include it.

So the reason you get the error with Tracker_ is that no error checking is performed when trying to autoload, whereas Application_ does have error checking. You can also suppress this by calling

Zend_Loader_Autoloader::getInstance()->suppressNotFoundWarnings(true);

Usually you don't want to turn this on, though, as it can create more confusion later. class_exists calls the autoloader because, if the file containing the class has not yet been included, the class does not exist, so PHP needs to attempt to load it first; if the autoloader fails to find the file, you get the include error from Zend Framework. Hope that cleared it up a bit for you.

A: You have told the Zend autoloader to require any class from a file within that namespace, and class_exists() (Docs) triggers the autoloader. If you would like to prevent that, add another parameter:

class_exists("Tracker_DoesNotExist", FALSE);

If you don't want to autoload classes from the Tracker_ namespace (class prefix), don't register it with the autoloader.
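The same "check without loading" distinction exists in other languages. A rough Python analogue (not PHP, and the module name is invented): importlib.util.find_spec asks whether a module can be found without executing it, much as class_exists($name, FALSE) asks whether a class is already known without invoking the autoloader:

```python
import importlib.util

# class_exists("X") in PHP triggers the autoloader, which may error out on a
# missing file. class_exists("X", FALSE) just checks what is already known.
# In Python, importlib.util.find_spec is the non-loading check: a missing
# top-level name yields None rather than raising an import-time error.
def module_exists(name):
    try:
        return importlib.util.find_spec(name) is not None
    except (ImportError, ValueError):
        return False

print(module_exists("json"))                    # True: stdlib module is findable
print(module_exists("tracker_does_not_exist"))  # False: no exception raised
```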
{ "language": "en", "url": "https://stackoverflow.com/questions/7507963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Instantiating immutable paired objects Is it possible to create a class with an immutable reference to a partner object, or does it have to be a var that I assign after creation? e.g.

class PairedObject (p: PairedObject, id: String) {
  val partner: PairedObject = p // but I need ref to this object to create p!
}

or similarly how could I instantiate the following pair?

class Chicken (e: Egg) {
  val offspring = e
}

class Egg (c: Chicken) {
  val mother = c
}

A: If your problem is circular references, you could use the solution posted in this SO question: scala: circular reference while creating object? This solves the chicken/egg problem.

A: Here is a complete solution to the Chicken/Egg problem:

class Chicken (e: => Egg) {
  lazy val offspring = e
}

class Egg (c: => Chicken) {
  lazy val mother = c
}

lazy val chicken: Chicken = new Chicken(egg)
lazy val egg: Egg = new Egg(chicken)

Note that you have to provide explicit types to the chicken and egg variables. And for PairedObject:

class PairedObject (p: => PairedObject, val id: String) {
  lazy val partner: PairedObject = p
}

lazy val p1: PairedObject = new PairedObject(p2, "P1")
lazy val p2: PairedObject = new PairedObject(p1, "P2")
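The by-name-parameter-plus-lazy-val trick is not Scala-specific. A sketch of the same idea in Python, where a zero-argument callable plays the role of the by-name parameter and a memoizing property plays the role of lazy val (class names mirror the question; the implementation is an illustrative analogue, not Scala semantics):

```python
# Each constructor takes a thunk (zero-argument callable) instead of the
# partner itself, and only calls it the first time the partner is requested.
# Both objects can therefore be constructed before either reference resolves.
class Chicken:
    def __init__(self, egg_thunk):
        self._egg_thunk = egg_thunk
        self._offspring = None

    @property
    def offspring(self):
        if self._offspring is None:
            self._offspring = self._egg_thunk()  # forced lazily, like lazy val
        return self._offspring

class Egg:
    def __init__(self, chicken_thunk):
        self._chicken_thunk = chicken_thunk
        self._mother = None

    @property
    def mother(self):
        if self._mother is None:
            self._mother = self._chicken_thunk()
        return self._mother

chicken = Chicken(lambda: egg)  # 'egg' is only looked up when forced
egg = Egg(lambda: chicken)

print(chicken.offspring is egg)  # → True
print(egg.mother is chicken)     # → True
```

As in the Scala version, the key point is that neither constructor evaluates its partner eagerly; the cycle is only closed on first access, after both objects exist.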
{ "language": "en", "url": "https://stackoverflow.com/questions/7507965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }