text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
Linux 2019-03-06 NAME socket - create an endpoint for communication SYNOPSIS #include <sys/types.h> /* See NOTES */ #include <sys/socket.h> #include <sys/socket.h> int socket(int domain, int type, int protocol); DESCRIPTION socket() On success, a file descriptor for the new socket is returned. On error, -1 is returned, and errno is set appropriately. ERRORS Other errors may be generated by the underlying protocol modules. CONFORMING TO POS POS. EXAMPLE An example of the use of socket() is shown in getaddrinfo(3). SEE ALSO accept) \(lqAn Introductory 4.3BSD Interprocess Communication Tutorial\(rq and \(lqBSD Interprocess Communication Tutorial\(rq, reprinted in UNIX Programmer’s Supplementary Documents Volume 1. COLOPHON This page is part of release 5.00 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. REFERENCED BY audit_open(3), sctp(7), accept(2), bind(2), bpf(2), connect(2), fcntl(2), getsockname(2), getsockopt(2), listen(2), mknod(2), open(2), recv(2), recvmmsg(2), send(2), sendfile(2), sendmmsg(2), shutdown(2), socketcall(2), socketpair(2), getaddrinfo(3), getifaddrs(3), getnameinfo(3), if_nameindex(3), if_nametoindex(3), ddp(7), ip(7), packet(7), raw(7), socket(7), tcp(7), unix(7), x25(7), socket-event(7), upstart-socket-bridge(8), ax25(4), netrom(4), rose(4), libdontdie(7), PMDA(3), pmdaConnect(3), pmsocks(1), socket(1), msocket(2viewos), biblesync(7), af_smc(7), vsock(7), acpid(8), pcap_set_protocol_linux(3pcap), ares_set_socket_functions(3), ktelnetd(8), rds(7), address_families(7)
https://reposcope.com/man/en/2/socket
CC-MAIN-2019-51
en
refinedweb
Final Keyword in Java – Learn to Implement with Methods & Classes Final Keyword in Java is used to finalize the value of a variable, class or method in the entire program. It stores the initialized value in the memory which will never change at any condition. There are much more to learn and implement the final keywords. So, let’s start with the introduction to Final Keywords. Stay updated with the latest technology trends while you're on the move - Join DataFlair's Telegram Channel What is Java Final Keyword? Final keyword in Java is used in many contexts like it can be used as a variable, it can be a class or can be a method. Inheritance enables us to reuse the existing code but sometimes we want to restrict the extensibility; the final keyword allows us to do so. It is a non-access modifier that means if you make a variable or class final then you are not allowed to change that variable or class and if you try then the compiler will throw compilation error. Example- final int number = 5; Brush-up your skills with Access Modifiers in Java Implementation of Final Keywords 1. Java Final Variable We should initialize a final variable in Java, otherwise, a compiler can throw a compile-time error. A Java final variable will only initialize once, either via an initializer or an assignment statement. There are 3 ways to initialize a Java final variable: - You can initialize a final variable when it’s declared. This approach is the most common. A final variable is named blank final variable if it’s not initialized whereas declaration. - A blank final variable is initialized within instance-initializer block or within a constructor. If you have got one constructor in your class then it should initialize in all of them, otherwise, a compile-time error is thrown. - A blank final static variable is initialized within a static block. 1.1 When to use a Final Variable in Java? The only difference between a normal variable and a final variable is that we will re-assign value to a normal variable. However, we cannot change the value of a final variable once assigned. Therefore, Java final variables must be used for the values that we would like to keep constant throughout the execution of a program. Let us see above different ways of initializing a Java final variable through an example. package com.dataflair.finalkeyword; public class FinalVariable { final int number=90;//final variable void run() { number=400; } public static void main(String args[]) { FinalVariable obj=new FinalVariable(); obj.run(); } } Output- Compile Time Error Once a variable is initialized as final we cannot redeclare it. 1.2 Reference Final Variable in Java When a Java final variable may refer to an object, then the final variable known is as reference final variable. Don’t forget to check StringBuffer in Java For example, a final StringBuffer variable seems like- Final StringBuffer stringBufferVariable; Example- package com.dataflair.finalkeyword; public class FinalVariable { public static void main(String[] args) { final StringBuilder stringbufferVariable = new StringBuilder("Data"); System.out.println(stringbufferVariable); stringbufferVariable.append("Flair"); System.out.println(stringbufferVariable); } } Output- 2. Java Final Class A class which is declared using the final keyword is a final class. A final class can’t be extended. There are many final classes available in Java one of them is Java String class. If we try to inherit from final class then it will generate a compile-time error. For example, All wrapper class in java like, float, integer etc. are Java final Class and we cannot extend them. Example package com.dataflair.finalkeyword; final class DemoClass { } class FinalClass extends DemoClass{ void demoMethod(){ System.out.println("My Method"); } public static void main(String args[]){ FinalClass obj= new FinalClass(); obj.demoMethod(); } } Output- Compile Time Error “The type FinalClass cannot subclass the final class DemoClass” The other use of final with classes is to make an immutable class like the predefined String class. You cannot create a class immutable while not creating it finally. It’s time to learn about Inner Classes in Java 3. Java Final Method When a method is declared in Java using the final keyword, then it’s called a final method. A final method can not be overridden. We should declare strategies with a final keyword in Java, that we tend to needed to follow constant implementation throughout all the derived classes. The subsequent fragment illustrates final keyword with a method: package com.dataflair.finalkeyword; class FinalMethodClass{ final void demo(){ System.out.println("FinalMethodClass Method"); } } class FinalMethod extends FinalMethodClass{ void demo(){ System.out.println("FinalMethod Method"); } public static void main(String args[]){ FinalMethod obj= new FinalMethod(); obj.demo(); } } The following program will throw an error, therefore we can use the parent class final method in subclass without any issues. Let’s see how we can overcome this problem. package com.dataflair.finalkeyword; class FinalMethodClass{ final void demo(){ System.out.println("FinalMethodClass Method"); } } class FinalMethod extends FinalMethodClass{ public static void main(String args[]){ FinalMethod obj= new FinalMethod(); obj.demo(); } } Summary We can use Final keyword in Java to restrict the user. It initializes the value once and can’t be changed during the action/entire process. Now, you practice this keyword with variables, methods, and classes. If you face any doubt, feel free to share with us our experts will definitely get back to you! Let’s discuss the different types of Exceptions in Java and how to deal with them How can i access final class in java
https://data-flair.training/blogs/final-keyword-in-java/comment-page-1/
CC-MAIN-2019-51
en
refinedweb
Creating a Calculator With wxPython Creating a Calculator With wxPython Learn how to create a calculator using the dreaded eval() fucntion in Python while learning how to keep it under control. Join the DZone community and get the full member experience.Join For Free A lot of beginner tutorials start with “Hello World” examples. There are plenty of websites that use a calculator application as a kind of “Hello World” for GUI beginners. Calculators are a good way to learn because they have a set of widgets that you need to lay out in an orderly fashion. They also require a certain amount of logic to make them work correctly. For this calculator, let’s focus on being able to do the following: - Addition - Subtraction - Multiplication - Division I think that supporting these four functions is a great starting place and also give you plenty of room for enhancing the application on your own. Figuring Out the Logic One of the first items that you will need to figure out is how to actually execute the equations that you build. For example, let’s say that you have the following equation: 1 + 2 * 5 What is the solution? If you read it left-to-right, the solution would seem to be 3 * 5 or 15. But multiplication has a higher precedence than addition, so it would actually be 10 + 1 or 11. How do you figure out precedence in code? You could spend a lot of time creating a string parser that groups numbers by the operand or you could use Python’s built-in `eval` function. The eval() function is short for evaluate and will evaluate a string as if it was Python code. A lot of Python programmers actually discourage the user of eval(). Let’s find out why. Is eval() Evil? The eval() function has been called “evil” in the past because it allows you to run strings as code, which can open up your application’s to nefarious evil-doers. You have probably read about SQL injection where some websites don’t properly escape strings and accidentally allowed dishonest people to edit their database tables by running SQL commands via strings. The same concept can happen in Python when using the eval() function. A common example of how eval could be used for evil is as follows: eval("__import__('os').remove('file')") This code will import Python’s os module and call its remove() function, which would allow your users to delete files that you might not want them to delete. There are a couple of approaches for avoiding this issue: - Don’t use eval() - Control what characters are allowed to go to eval() Since you will be creating the user interface for this application, you will also have complete control over how the user enters characters. This actually can protect you from eval’s insidiousness in a straight-forward manner. You will learn two methods of using wxPython to control what gets passed to eval(), and then you will learn how to create a custom eval() function at the end of the article. Designing the Calculator Let’s take a moment and try to design a calculator using the constraints mentioned at the beginning of the chapter. Here is the sketch I came up with: Note that you only care about basic arithmetic here. You won’t have to create a scientific calculator, although that might be a fun enhancement to challenge yourself with. Instead, you will create a nice, basic calculator. Let’s get started! Creating the Initial Calculator Whenever you create a new application, you have to consider where the code will go. Does it go in the wx.Frame class, the wx.Panel class, some other class or what? It is almost always a mix of different classes when it comes to wxPython. As is the case with most wxPython applications, you will want to start by coming up with a name for your application. For simplicity’s sake, let’s call it wxcalculator.py for now. The first step is to add some imports and subclass the Frame widget. Let’s take a look: import wx class CalcFrame(wx.Frame): def __init__(self): super().__init__( None, title="wxCalculator", size=(350, 375)) panel = CalcPanel(self) self.SetSizeHints(350, 375, 350, 375) self.Show() if __name__ == '__main__': app = wx.App(False) frame = CalcFrame() app.MainLoop() This code is very similar to what you have seen in the past. You subclass wx.Frame and give it a title and initial size. Then you instantiate the panel class, CalcPanel (not shown) and you call the SetSizeHints() method. This method takes the smallest (width, height) and the largest (width, height) that the frame is allowed to be. You may use this to control how much your frame can be resized or in this case, prevent any resizing. You can also modify the frame’s style flags in such a way that it cannot be resized too. Here’s how: class CalcFrame(wx.Frame): def __init__(self): no_resize = wx.DEFAULT_FRAME_STYLE & ~ (wx.RESIZE_BORDER | wx.MAXIMIZE_BOX) super().__init__( None, title="wxCalculator", size=(350, 375), style=no_resize) panel = CalcPanel(self) self.Show() Take a look at the no_resizevariable. It is creating a wx.DEFAULT_FRAME_STYLE and then using bitwise operators to remove the resizable border and the maximize button from the frame. Let’s move on and create the CalcPanel: class CalcPanel(wx.Panel): def __init__(self, parent): super().__init__(parent) self.last_button_pressed = None self.create_ui() I mentioned this in an earlier chapter, but I think it bears repeating here. You don’t need to put all your interfacer creation code in the init method. This is an example of that concept. Here you instantiate the class, set the last_button_pressed attribute to None and then call create_ui(). That is all you need to do here. Of course, that begs the question. What goes in the create_ui() method? Well, let’s find out! def create_ui(self): main_sizer = wx.BoxSizer(wx.VERTICAL) font = wx.Font(12, wx.MODERN, wx.NORMAL, wx.NORMAL) self.solution = wx.TextCtrl(self, style=wx.TE_RIGHT) self.solution.SetFont(font) self.solution.Disable() main_sizer.Add(self.solution, 0, wx.EXPAND|wx.ALL, 5) self.running_total = wx.StaticText(self) main_sizer.Add(self.running_total, 0, wx.ALIGN_RIGHT)) This is a decent chunk of code, so let’s break it down a bit: def create_ui(self): main_sizer = wx.BoxSizer(wx.VERTICAL) font = wx.Font(12, wx.MODERN, wx.NORMAL, wx.NORMAL) Here you create the sizer that you will need to help organize the user interface. You will also create a wx.Fontobject, which is used to modifying the default font of widgets like wx.TextCtrl or wx.StaticText. This is helpful when you want a larger font size or a different font face for your widget than what comes as the default. self.solution = wx.TextCtrl(self, style=wx.TE_RIGHT) self.solution.SetFont(font) self.solution.Disable() main_sizer.Add(self.solution, 0, wx.EXPAND|wx.ALL, 5) These next three lines create the wx.TextCtrl, set it to right-justified (wx.TE_RIGHT), set the font and Disable() the widget. The reason that you want to disable the widget is because you don’t want the user to be able to type any string of text into the control. As you may recall, you will be using eval() for evaluating the strings in that widget, so you can’t allow the user to abuse that. Instead, you want fine-grained control over what the user can enter into that widget. self.running_total = wx.StaticText(self) main_sizer.Add(self.running_total, 0, wx.ALIGN_RIGHT) Some calculator applications have a running total widget underneath the actual “display.” A simple way to add this widget is via the wx.StaticText widget. Now let’s add main buttons you will need to use a calculator effectively:) Here you create a list of lists. In this data structure, you have the primary buttons used by your calculator. You will note that there is a blank string in the last list that will be used to create a button that doesn’t do anything. This is to keep the layout correct. Theoretically, you could update this calculator down the road such that that button could be "percentage" or do some other function. The next step is to createthee buttons, which you can do by looping over the list. Each nested list represents a row of buttons. So for each row of buttons, you will create a horizontally oriented wx.BoxSizer and then loop over the row of widgets to add them to that sizer. Once every button is added to the row sizer, you will add that sizer to your main sizer. Note that each of these button’s is bound to the update_equation event handler as well. Now you need to add the equals button and the button that you may use to clear your calculator:) In this code snippet you create the “equals” button which you then bind to the on_totalevent handler method. You also create the “Clear” button, for clearing your calculator and starting over. The last line sets the panel’s sizer. Let’s move on and learn what most of the buttons in your calculator are bound to: def update_equation(self, event): operators = ['/', '*', '-', '+'] btn = event.GetEventObject() label = btn.GetLabel() current_equation = self.solution.GetValue() if label not in operators: if self.last_button_pressed in operators: self.solution.SetValue(current_equation + ' ' + label) else: self.solution.SetValue(current_equation + label) elif label in operators and current_equation is not '' \ and self.last_button_pressed not in operators: self.solution.SetValue(current_equation + ' ' + label) self.last_button_pressed = label for item in operators: if item in self.solution.GetValue(): self.update_solution() break This is an example of binding multiple widgets to the same event handler. To get information about which widget has called the event handler, you can call the "event" object’s GetEventObject() method. This will return whatever widget it was that called the event handler. In this case, you know you called it with a wx.Button instance, so you know that wx.Button has a GetLabel() method which will return the label on the button. Then you get the current value of the solution text control. Next, you want to check if the button’s label is an operator (i.e., /, *, -, +). If it is, you will change the text controls value to whatever is currently in it plus the label. On the other hand, if the label is not an operator, then you want to put a space between whatever is currently in the text box and the new label. This is for presentation purposes. You could technically skip the string formatting if you wanted to. The last step is to loop over the operands and check if any of them are currently in the equation string. If they are, then you will call the update_solution() method and break out of the loop. Now you need to write the update_solution() method: def update_solution(self): try: current_solution = str(eval(self.solution.GetValue())) self.running_total.SetLabel(current_solution) self.Layout() return current_solution except ZeroDivisionError: self.solution.SetValue('ZeroDivisionError') except: pass Here is where the “evil” eval() makes its appearance. You will extract the current value of the equation from the text control and pass that string to eval(). Then convert that result back to a string so you can set the text control to the newly calculated solution. You want to wrap the whole thing in a try/except statement to catch errors, such as the ZeroDivisionError. The last except statement is known as a bare except and should really be avoided in most cases. For simplicity, I left it in there, but feel free to delete those last two lines if they offend you. The next method you will want to take a look at is the on_clear() method: def on_clear(self, event): self.solution.Clear() self.running_total.SetLabel('') This code is pretty straight forward. All you need to do is call your solution text control’s Clear() method to empty it out. You will also want to clear the "running_total" widget, which is an instance of wx.StaticText. That widget does not have a Clear() method, so instead you will call SetLabel() and pass in an empty string. The last method you will need to create is the on_total() event handler, which will calculate the total and also clear out your running total widget: def on_total(self, event): solution = self.update_solution() if solution: self.running_total.SetLabel('') Here you can call the update_solution() method and get the result. Assuming that all went well, the solution will appear in the main text area and the running total will be emptied. Here is what the calculator looks like when I ran it on a Mac: And here is what the calculator looks like on Windows 10: Let’s move on and learn how you might allow the user to use their keyboard in addition to your widgets to enter an equation. Using Character Events Most calculators will allow the user to use the keyboard when entering values. In this section, I will show you how to get started adding this ability to your code. The simplest method to use to make this work is to bind the wx.TextCtrl to the wx.EVT_TEXT event. I will be using this method for this example. However, another way that you could do this would be to catch wx.EVT_KEY_DOWN and then analyze the key codes. That method is a bit more complex though. The first item that we need to change is our CalcPanel‘s constructor: # wxcalculator_key_events.py import wx class CalcPanel(wx.Panel): def __init__(self, parent): super().__init__(parent) self.last_button_pressed = None self.whitelist = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '-', '+', '/', '*', '.'] self.on_key_called = False self.empty = True self.create_ui() Here you add a whitelist attribute and a couple of simple flags, self.on_key_called and, self.empty. The white list is the only characters that you will allow the user to type in your text control. You will learn about the flags when we actually get to the code that uses them. But first, you will need to modify the create_ui() method of your panel class. For brevity, I will only reproduce the first few lines of this method: def create_ui(self): main_sizer = wx.BoxSizer(wx.VERTICAL) font = wx.Font(12, wx.MODERN, wx.NORMAL, wx.NORMAL) self.solution = wx.TextCtrl(self, style=wx.TE_RIGHT) self.solution.SetFont(font) self.solution.Bind(wx.EVT_TEXT, self.on_key) main_sizer.Add(self.solution, 0, wx.EXPAND|wx.ALL, 5) self.running_total = wx.StaticText(self) main_sizer.Add(self.running_total, 0, wx.ALIGN_RIGHT) Feel free to download the full source from Github or refer to the code in the previous section. The main differences here in regards to the text control is that you are no longer disabling it and you are binding it to an event: wx.EVT_TEXT. Let’s go ahead an write the on_key() method: def on_key(self, event): if self.on_key_called: self.on_key_called = False return key = event.GetString() self.on_key_called = True if key in self.whitelist: self.update_equation(key) Here you check to see whether the self.on_key_called flag is True. If it is, we set it back to False and "return" early. The reason for this is that when you use your mouse to click a button, it will cause EVT_TEXT to fire. The update_equation() method will get the contents of the text control which will be the key we just pressed and add the key back to itself, resulting in a double value. This is one way to work around that issue. You will also note that to get the key that was pressed, you can call the event object’s GetString() method. Then you will check to see if that key is in the white list. If it is, you will update the equation. The next method you will need to update is update_equation(): def update_equation(self, text): operators = ['/', '*', '-', '+'] current_equation = self.solution.GetValue() if text not in operators: if self.last_button_pressed in operators: self.solution.SetValue(current_equation + ' ' + text) elif self.empty and current_equation: # The solution is not empty self.empty = False else: self.solution.SetValue(current_equation + text) elif text in operators and current_equation is not '' \ and self.last_button_pressed not in operators: self.solution.SetValue(current_equation + ' ' + text) self.last_button_pressed = text self.solution.SetInsertionPoint(-1) for item in operators: if item in self.solution.GetValue(): self.update_solution() break Here you add a new elif that checks if the self.empty flag is set and if the current_equation has anything in it. In other words, if it is supposed to be empty and it’s not, then we set the flag to False because it’s not empty. This prevents a duplicate value when the keyboard key is pressed. So basically, you need two flags to deal with duplicate values that can be caused because you decided to allow users to use their keyboard. The other change to this method is to add a call to SetInsertionPoint() on your text control, which will put the insertion point at the end of the text control after each update. The last required change to the panel class happens in the on_clear() method: def on_clear(self, event): self.solution.Clear() self.running_total.SetLabel('') self.empty = True self.solution.SetFocus() This change was done by adding two new lines to the end of the method. The first is to reset self.empty back to True. The second is to call the text control’s SetFocus() method so that the focus is reset to the text control after it has been cleared. You could also add this SetFocus() call to the end of the on_calculate() and the on_total()methods. This should keep the text control in focus at all times. Feel free to play around with that on your own. Creating a Better eval() Now that you have looked at a couple of different methods of keeping the “evil” eval()under control, let’s take a few moments to learn how you can create a custom version of eval() on your own. Python comes with a couple of handy built-in modules called ast and operator. The ast module is an acronym that stands for “Abstract Syntax Trees” and is used “for processing trees of the Python abstract syntax grammar” according to the documentation. You can think of it as a data structure that is a representation of code. You can use the ast module to create a compiler in Python. The operator module is a set of functions that correspond to Python’s operators. A good example would be operator.add(x, y) which is equivalent to the expression x+y. You can use this module along with the ast module to create a limited version of eval(). Let’s find out how: import ast import operator allowed_operators = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul, ast.Div: operator.truediv} def noeval(expression): if isinstance(expression, ast.Num): return expression.n elif isinstance(expression, ast.BinOp): print('Operator: {}'.format(expression.op)) print('Left operand: {}'.format(expression.left)) print('Right operand: {}'.format(expression.right)) op = allowed_operators.get(type(expression.op)) if op: return op(noeval(expression.left), noeval(expression.right)) else: print('This statement will be ignored') if __name__ == '__main__': print(ast.parse('1+4', mode='eval').body) print(noeval(ast.parse('1+4', mode='eval').body)) print(noeval(ast.parse('1**4', mode='eval').body)) print(noeval(ast.parse("__import__('os').remove('path/to/file')", mode='eval').body)) Here you create a dictionary of allowed operators. You map ast.Add to operator.add, etc. Then you create a function called noeval that accepts an ast object. If the expression is just a number, you return it. - The left part of the expression - The operator - The right hand of the expression What this code does when it finds a BinOp object is that it then attempts to get the type of ast operation. If it is one that is in our allowed_operators dictionary, then you call the mapped function with the left and right parts of the expression and return the result. Finally, if the expression is not a number or one of the approved operators, then you just ignore it. Try playing around with this example a bit with various strings and expressions to see how it works. Once you are done playing with this example, let’s integrate it into your calculator code. For this version of the code, you can call the Python script wxcalculator_no_eval.py. The top part of your new file should look like this: # wxcalculator_no_eval.py import ast import operator import wx class CalcPanel(wx.Panel): def __init__(self, parent): super().__init__(parent) self.last_button_pressed = None self.create_ui() self.allowed_operators = { ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul, ast.Div: operator.truediv} The main differences here is that you now have a couple of new imports (i.e. ast and operator) and you will need to add a Python dictionary called self.allowed_operators. Next, you will want to create a new method called no: eval() def noeval(self, expression): if isinstance(expression, ast.Num): return expression.n elif isinstance(expression, ast.BinOp): return self.allowed_operators[ type(expression.op)](self.noeval(expression.left), self.noeval(expression.right)) return '' This method is pretty much exactly the same as the function you created in the other script. It has been modified slightly to call the correct class methods and attributes, however. The other change you will need to make is in the update_solution() method: def update_solution(self): try: expression = ast.parse(self.solution.GetValue(), mode='eval').body current_solution = str(self.noeval(expression)) self.running_total.SetLabel(current_solution) self.Layout() return current_solution except ZeroDivisionError: self.solution.SetValue('ZeroDivisionError') except: pass Now the calculator code will use your custom eval() method and keep you protected from the potential harmfulness of eval(). The code that is in Github has the added protection of only allowing the user to use the onscreen UI to modify the contents of the text control. However, you can easily change it to enable the text control and try out this code without worrying about eval() causing you any harm. Wrapping Up In this chapter you learned several different approaches to creating a calculator using wxPython. You also learned a little bit about the pros and cons of using Python’s built-in eval() function. Finally, you learned that you can use Python’s ast and operator modules to create a finely-grained version of eval() that is safe for you to use. Of course, since you are controlling all input into eval(), you can also control the real version quite easily though your UI that you generate with wxPython. Take some time and play around with the examples in this article. There are many enhancements that could be made to make this application even better. When you find bugs or missing features, challenge yourself to try to fix or add them. Download the Source The source code for this article can be found on Github. This article is based on one of the chapters from my book, Creating GUI Applications with wxPython. Published at DZone with permission of Mike Driscoll , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/creating-a-calculator-with-wxpython?fromrel=true
CC-MAIN-2019-51
en
refinedweb
Accelerated Bregman proximal gradient (ABPG) methods Project description # Accelerated Bregman Proximal Gradient Methods Accelerated first-order algorithms for solving relatively-smooth convex optimization problems of the form minimize { f(x) + P(x) | x in C } with a reference function h(x), where - h(x) is convex and essentially smooth on C - f(x) is convex and differentiable, and L-smooth relative to h(x), that is, f(x)-L*h(x) is convex - P(x) is convex and closed (lower semi-continuous) - C is a closed convex set ### Implemented algorithms in [HRX2018]() - BPG_LS (Bregman proximal gradient) method with line search - ABPG (Accelerated BPG) method - ABPG-expo (ABPG with exponent adaption) - ABPG-gain (ABPG with gain adaption) - ABDA (Accelerated Bregman dual averaging) method ## Installation Clone or fork from GitHub. Or install from PyPI: pip install accbpg ## Usage import accbpg # generate a random instance of D-optimal design problem f, h, L, x0 = accbpg.D_opt_design(80, 200) # solve the problem instance using BPG with line search x1, F1, G1 = accbpg.BPG_LS(f, h, L, x0, maxitrs=1000, verskip=100) # solve it again using ABPG_gain with gamma=2 x2, F2, G2, D2 = accbpg.ABPG_gain(f, h, L, 2, x0, maxitrs=1000, verbskip=100) compare the two methods by visualization import matplotlib.pyplot as plt Fmin = min(F1.min(), F2.min()) plt.semilogy(range(len(F1)), F1-Fmin, range(len(F2)), F2-Fmin) ## Examples in [HRX2018]() D-optimal experiment design import accbpg.ex_D_opt Nonnegative regression with KL-divergence import accbpg.ex_KL_regr Poisson linear inverse problems import accbpg.ex_PoissonL1 import accbpg.ex_PoissonL2 Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/accbpg/
CC-MAIN-2019-51
en
refinedweb
I was very pleased with the types of things that they advised, and I put together a sheet for next year's packets with some highlights. The file's linked here, but I thought that I'd also share the process for making these (with the rotating font sizes, types, and styles) yourself: - You'll need Python (script below) and LaTeX - Type the advice into a plain text document (UTF-8 encoding, so that the apostrophes don't get lost in the LaTeX), one nugget per line - I had it print only lines with * at the end, so that I could save all of the advice, but only use some of it. You can certainly modify the code to fit your needs - You'll need to make a few path and file mods (it's also calling Preview at the end to display, so if you're not on OSX you'll need to change that too) - Run and enjoy - let me know if you found it useful! The script (quick and dirty - surely can be improved. For example, most of the imports don't really need to be there, but I was modifying an old script, and they weren't hurting anyone): # Advice prepper - random fonts and sizes # prep: UTF-8 file advice.txt - * on end of each included line import sys import os import csv import string import subprocess import math from random import choice from itertools import * fontfams = [r'{\sffamily ', r'{\rmfamily '] fiter = cycle(fontfams) sizes = [r'\Large ', r'\small ', r'\large ', r'\normalsize ', r'\Large '] siter = cycle(sizes) types = [r'\scshape ', r'\itshape ', r'\bfseries ', ' '] titer = cycle(types) # open advice.txt reader = open('advice.txt', 'r') data = [] for lines in reader: data.append(lines) data = data[1:] reader.close() # open output file writer = open('advice.tex', 'w') # This is just my default LaTeX header - insert your own writer.write(r'\input{/Users/jgates/desktop/latex/headerforinput.tex}' +'\n') writer.write(r'\linespread{1.6}') writer.write(r'\begin{center}' + '\n' + r'{\huge \scshape Advice from former physics students}' + '\n' + r'\end{center}' + '\n' + r'\bigskip' + '\n') for lines in data: # strip trailing spaces if lines[-1]==' ': lines = lines[:-1] if lines[-1]==' ': lines = lines[:-1] if lines[-1]==' ': lines = lines[:-1] # choose starred lines if '*' in lines: lines = lines.translate(None,'*') #writer.write(choice(fontfams) + choice(types) + choice(sizes) + lines + '}' + '\n' ) writer.write(next(fiter) + next(titer) + next(siter) + lines + '}' + '\n' ) writer.write('\end{document}') writer.write('\n') writer.close() # compile PDF, clean up filename4exp = 'advice' # fix the paths! subprocess.check_output((r'/usr/local/texlive/2011/bin/universal-darwin/pdflatex', r'-aux-directory=/Users/jgates/desktop' , r'-output-directory=/Users/jgates/desktop' , filename4exp)) delfile = filename4exp + r'.log' os.remove(delfile) delfile = filename4exp + r'.tex' #os.remove(delfile) # Open document with Preview outname = r'open ' + filename4exp + r'.pdf' #print outname os.system(outname)
http://tatnallsbg.blogspot.com/2013/06/advice-to-future-physics-students.html
CC-MAIN-2017-34
en
refinedweb
We continue from TFS Integration Tools – Where does one start? … part 1 (documentation) and TFS Integration Tools – Where does one start? … part 2 (try a simple migration). In this post we look what happened after “gently” selecting START. Remember the following … - We configured the session using no customizations, such as field or value mapping … out of the box. - We have not worried about permissions. … let’s see what happened after a few minutes of analysis… Resolving the conflicts WIT Conflict The first “stop-the-bus” conflict was raised by the WIT session and had we read the configuration guide we would have noticed that by default the EnableBypassRuleDataSubmission rule is enabled, which requires the credential of the migration user to be a member of the Team Foundation Service Accounts group. Using the TFSSecurity tool we can add the credentials to the Team Foundation Service Accounts group, by running the following command as shown below: tfssecurity /g+ srv: @SERVER@\Administrator /server:@SERVER@, whereby we are using the Administrator credentials. After this fix, the 1000+ entries wandered across the copper cable … or is that WiFi copper-less network? VC Conflict The VC session was the next to pull the migration hand-brake with the following conflict: The conflict was thrown in our case, as the three template files exist in both the source project and the new target project, as created during the team project creation. Which is the right version? Well, the tool cannot make any assumptions and therefore stops the bus … or rather migration pipeline. As shown above, there is a “click here” hyperlink, which takes us to the relevant version control conflict definition and suggested resolution: In essence we have to look at both the team project we are migrating from to find the relevant changesets … … and the team project we are migrating to: What is worth highlighting is that this specific conflict dialog is probably VERY confusing, because the source changeset version refers to the local (to) team project and the target changeset to the other (from) team project. We have raised this anomaly (depending on which view of the system you have)with the product team and hope that we can rename the column headers and the field descriptors to something like “From” and “To” changeset information. Take a look at the migration log Scrolling through the 1.3MB log file would probably be very, very boring. for this blog post, but definitely worth a visit. It reports, in great details, the standard pipeline processing as documented in the architecture documentation: - Generate Context Information Tables - Generate Deltas on the originating point - Generate Link Deltas on the originating point - Generate Deltas on destination point - Generate Link Deltas on destination point - Generate Migration Instructions - Post-Process Delta Changes - Process Delta Changes - Analyze Link Delta - Process Link Changes and conflicts if any: [5/15/2011 10:51:32 AM] TfsMigrationShell.exe Information: 0 : VersionControl: Unresolved conflict: [5/15/2011 10:51:32 AM] Session: e8853f21-3ab0-425c-817b-917b3d93ca60 [5/15/2011 10:51:32 AM] Source: 172f78a5-b891-4f7b-8f0e-f2cf4b4610cd [5/15/2011 10:51:32 AM] Message: Cannot find applicable resolution rule. [5/15/2011 10:51:32 AM] Conflict Type: VC namespace conflict type [5/15/2011 10:51:32 AM] Conflict Type Reference Name: c1d23312-b0b1-456c-b6e4-af22c3531480 [5/15/2011 10:51:32 AM] Conflict Details: $/TiP_POC_Test_2/BuildProcessTemplates/DefaultTemplate.xaml [5/15/2011 10:51:32 AM] TfsMigrationShell.exe Information: 0 : VersionControl: Stopping current trip for session: e8853f21-3ab0-425c-817b-917b3d93ca60 Take a look at target team project The TFS Integration UI gives us a “thumbs up”, having moved 12 changesets and 1320 WIT items. At the bottom of the UI we will also see the graphical chart. If we hover over one of the red bars, we will recognise the event where the VC conflict as reported above was reported: Seeing is believing and therefore we should review the target team project and verify that everything has been migrated … which was the case. Looking at one of the VC changesets, we notice that additional information was added to the description, reporting which (1) tool and who (2) moved the changeset: What may not be apparent at a first glance, but is clearly documented as a limitation in the branching guidance, is the fact that the creation date is that date and time when the changeset was checked in by the migration tool, not the original check-in date you would find on the source (from) system. The migration guidance document defined this limitation as follows: .” Looking at one of the WIT items, we also notice additional information on who moved the item (2) and the same date/time (3) limitation as outlined above. The other limitation which you will noticed when doing a VC by VC changeset or WIT by WIT comparison is the following, as documented in the migration guidance document: .” Lastly it is important to remind ourselves, as outlined in the previous blog post, that we are moving from one domain to another, not using value mapping and therefore it comes as no surprise that the “Assigned To” defines the credentials from the source (from) team project, which may not and in our case are not valid credentials on the target (to) team project. In the next blog post we will demonstrate a simple field value mapping to change the “assigned to” field value to users that are known and valid on the target system. Also see the following blog posts for related information: See you next time. I just did a test run of only a Version Control migration and we do not see the CreationDate in the changset comment like in your screenshot. Is there a default setting that may have changed recently? Is there a way for us to easily add it back? Please use the social.msdn.microsoft.com/…/tfsintegration forum to raise a query. Remember to add your configuration and log file if possible, so that we can verify that your session and adapters matches the environment I used when doing the blog post. I my case it was a TFS to TFS migration, using the out-of-the-box WIT and VC adapters.
https://blogs.msdn.microsoft.com/willy-peter_schaub/2011/05/16/tfs-integration-tools-where-does-one-start-part-3-dust-has-settled-did-it-work/
CC-MAIN-2017-34
en
refinedweb
XI Landscape As such XI landscape is just another SAP landscape. But when we take closer look, we realize its far more complicated and diverse. As per standard, it has DEV, TEST and PRD environment. XI being middleware is a central point of communication for various SAP and non-SAP systems and as a central point of communication; we constantly need to keep an eye on all these systems. Just to make it further complex, XI itself contains various components some of which reside on ABAP stack while rest are on Java. Apart from Integration Engine, it involves Business Process Engine, Adapter Engine, tRFC / qRFC communication layer. Typically even a simple XI message passes through multiple Systems (Sender system, Integration Server, Receiver system) and even within a system it touches various components (Adapter Engine, Integration Engine, BPE ). And for effective monitoring and functioning of your XI landscape, you need to have proper visibility of all these communication components. But just imagine how it would be if you individually need to login to all these systems, execute different transactions for different components. Fairly complicated, isnt it ? And this is where XI CCMS Alert Monitoring comes into picture. CCMS : Central Monitoring Tool In such a complicated XI landscape, CCMS provide a central point of collecting all alerts from all components of all systems. Lets take a look at the different components involved in Integration Server. As described in above diagram with XI CCMS Alert Monitoring you can catch alerts from Integration Engine, qRFC queues, Business Process Engine and all Java components. Now if we take a system view of XI monitoring – Lets get on to all the sections of Alert Monitoring and their individual features. 1. Integration Engine of type Integration Server : This section displays system errors occurred in Integration Engine and the beauty of CCMS is these errors are classified and you view them in different categories. As you can guess receiving errors in categorised format leads to quicker analysis. These error categories provide detailed information on Error Frequency : Error frequency indicates number of error occurred per min for that particular category. Message alerts by Error Code Error Code 1 Error Code n Apart from system errors, one of the major feature provided here is Message Performance. Here you can determine average processing time for a particular interface specifying Sender service, Receiver service, Interface and its corresponding namespace. We can benchmark processing time for that interface and while message processing when the threshold is overshoot, youll red alert in CCMS. Ill detail this functionality in one of my feature blogs. 2. tRFC / qRFC queues On its journey from sender to reciver via Integration Server, a message passes through many queues. But fortunately with CCMS, you can monitor the status of all these queues from a central system. Just to appreciate this, lets list down the queues in each system. Integration Server – EO Inbound (XBTI*) EO Outbound (XBTO*) EO Acknowledgement (XBTB*) EOIO Inbound (XBQI*) EOIO Outbound (XBQO*) EOIO Acknowledgement (XBQB*) Application System- EO Receiver (XBTR*) EO Sender (XBTB*) EO Acknowledgement (XBTB*) EOIO Receiver (XBQR*) EOIO Sender (XBQB*) EOIO Acknowledgement (XBQB*) 3. Heartbeat Data for Java Components: Generic Request and Message Generator (GRMG) Infrastructure is part of CCMS and it provides the heartbeat data for Java components. The heartbeat data mainly focuses on availability monitoring. From technical perspective, theres a GRMG application (which is JSP / BSP) with well-defined interface, which is called by GRMG infrastructure. It responds to this call with GRMG response which itself is in well-defined XML format. In XI, we can monitor the availability of below Java Components. Integration Repository Integration Directory Runtime Workbench Adapter Engine System Landscape Directory 4. Business Process Engine (BPE): Under this node it reports problem both in BPE deployment and BPE Runtime. BPE deployment lists the problem with deployment and process activation. And runtime problem appear under BPE Runtime. As a standard feature, all these errors are categorised and error frequency data is provided for each category. Third Party Monitoring Tools : Im sure this question would have already popped up in your mind and so did it in mine, when I setup CCMS. I was keen to know if CCMS can be integrated with third party monitoring tools and how do we configure them. To my query, SAP consultant conceded that SAP indeed doesnt provide any documentation about this configuration. After browsing through servicemarketplace however I found some documentation on interfacing SNMP traps with CCMS. For details take a look at -> CCMS Agents : Features, Installation, and Operation -> Section 5.2 This feature enables SNMP (Simple Network Management Protocol) enabled monitoring tools to integrate with CCMS. Few vendors like BMC offer their own agents for the communication and using these agents we can integrate Patrol with CCMS. HP Open View as well offers some Plug-ins for SAP Netweaver platform. You may find the details at products/spi/spi_sap/ds/spi_sap_ds.pdf> In my XI system I am trying to monitor I don’t see these options like XICACHE and all . How DO i get it ? Thanks, Chitta Very good article. We want to include payload data for the failed message through alert emails, is it possible through CCMS? Thanks Sidra
https://blogs.sap.com/2005/12/06/xi-ccms-alert-monitoring-overview-and-features/
CC-MAIN-2017-34
en
refinedweb
Any help given would be much appreciated. I normally like to figure things out for myself, but this one has me stumped. Thanks, PS I only know basic coding commands, we haven't done extensive work on inbuilt functions and all, so my code will probably look a little, well basic, and that is because that is what I know. #include <iostream> #include <String> using namespace std; //Function prototype bool palindrome(string , int , int ); int main() { string word; int b , e; //call the function. if (palindrome(word,b,e) == true) cout<< word << " Is a palindrome." << endl << endl; else if (palindrome(word,b,e) ==false) cout << word << " Is not a palindrome." << endl << endl; system("PAUSE"); return 0; } bool palindrome(string test, int b , int e ) { b = 0; e = test.size(); if(b >=e) return true; if(test[b] != test[e]) return false; if((test[b])== (test[e])) { return palindrome(test,b+1,e-1); } return true; }
http://www.dreamincode.net/forums/topic/316223-check-if-a-word-is-a-palindrome-and-avoid-punctuation-and-spaces/
CC-MAIN-2017-34
en
refinedweb
I'm trying to compile this piece of code: import java.util.Collection; import java.util.function.BiConsumer; import de.hybris.platform.servicelayer.exceptions.ModelSavingException; import de.hybris.platform.servicelayer.model.ModelService; public class Foo { public static interface ModelService2 { public abstract void saveAll(Object[] paramArrayOfObject) throws ModelSavingException; public abstract void saveAll(Collection<? extends Object> paramCollection) throws ModelSavingException; public abstract void saveAll() throws ModelSavingException; } public void bar() { final BiConsumer<ModelService2, Collection<? extends Object>> consumer1 = ModelService2::saveAll; final BiConsumer<ModelService, Collection<? extends Object>> consumer2 = ModelService::saveAll; } } ModelService ModelService2 saveAll 1. ERROR in src\Foo.java (at line 17) final BiConsumer<ModelService, Collection<? extends Object>> consumer2 = ModelService::saveAll; ^^^^^^^^^^^^^^^^^^^^^ Cannot make a static reference to the non-static method saveAll(Object[]) from the type ModelService ModelService final Consumer<ModelService2> consumer3 = ModelService2::saveAll; final Consumer<ModelService> consumer4 = ModelService::saveAll; 1. ERROR in src\Foo.java (at line 19) final Consumer<ModelService> consumer4 = ModelService::saveAll; ^^^^^^^^^^^^^^^^^^^^^ Cannot make a static reference to the non-static method saveAll(Object[]) from the type ModelService '-noExit' '-classpath' '<classpath>' '-sourcepath' '<source path>' '-d' '<path>\classes' '-encoding' 'UTF8' The interaction between method overloading, varargs and type inference is perhaps the most complicated and hairy part of Java type checking. It's an area where bugs turn up regularly and where there are often differences between different compilers. My guess is the following: ModelService has a vararg saveAll. Because of this saveAll with two object argument is a valid method call to such an object. If that method would be static it would be valid to call it with one ModelService and one Collection, so a method reference expression would be valid for a BiConsumer<ModelService2, Collection<? extends Object>> type. Because of a compiler bug the compiler notes that, and notes that the method in not static, and thus infers that the method reference expression is not valid here. This generates the compilation error. ModelService2.saveAll is on the other hand is not a vararg and can not be called with one ModelService and one Collection. Because of this the compiler does not get stuck in this bug when it tries that possibility. When I tried this code with Eclipse 4.5.2 and javac 1.8.0_77 all of your examples compiled for me. I have no idea why you are getting different results.
https://codedump.io/share/mNJehB5wZnwE/1/why-is-the-type-inference-to-a-biconsumer-for-a-one-argument-instance-method-different-in-this-case
CC-MAIN-2017-34
en
refinedweb
Thread Synchronization (C# Programming Guide) The following sections describe features and classes that can be used to synchronize access to resources in multithreaded applications. One of the benefits of using multiple threads in an application is that each thread executes asynchronously. For Windows applications, this allows time-consuming tasks to be performed in the background while the application window and controls remain responsive. For server applications, multithreading provides the ability to handle each incoming request with a different thread. Otherwise, each new request would not get serviced until the previous request had been fully satisfied. However, the asynchronous nature of threads means that access to resources such as file handles, network connections, and memory must be coordinated. Otherwise, two or more threads could access the same resource at the same time, each unaware of the other's actions. The result is unpredictable data corruption. For simple operations on integral numeric data types, synchronizing threads can be accomplished with members of the Interlocked class. For all other data types and non thread-safe resources, multithreading can only be safely performed using the constructs in this topic. For background information on multithreaded programming, see: The, and followed by a code block that is to be executed by only one thread at a time. For example: The argument provided to the lock keyword must be an object based on a reference type, and is used to define the scope of the lock. In the example above, the lock scope is limited to this function because no references to the object exist outside the function. Strictly speaking, the object provided to lock is used solely to uniquely identify the resource being shared among multiple threads, so it can be an arbitrary class instance. In practice, however, this object usually represents the resource for which thread synchronization is necessary. For example, if a container object is to be used by multiple threads, then the container can be passed to lock, and the synchronized code block following the lock would access the container. As long as other threads locks on the same contain before accessing it, then access to the object is safely synchronized. Generally, it is best to avoid locking on a public type, or on object instances beyond the control of your application. For example, lock(this) can be problematic if the instance can be accessed publicly, because code beyond your control may lock on the object as well. This could create deadlock situations where two or more threads wait for the release of the same object. Locking on a public data type, as opposed to an object, can cause problems for the same reason. Locking on literal strings is especially risky because literal strings are interned by the common language runtime (CLR). This means that there is one instance of any given string literal for the entire program, the exact same object represents the literal in all running application domains, on all threads. As a result, a lock placed on a string with the same contents anywhere in the application process locks all instances of that string in the application. As a result, it is best to lock a private or protected member that is not interned. Some classes provide members specifically for locking. The Array type, for example, provides SyncRoot. Many collection types provide a SyncRoot member as well. For more information on the lock keyword, see::. For more information on monitors, see Monitor Synchronization Technology Sample. Using a lock or monitor is useful for preventing the simultaneous execution of thread-sensitive blocks of code, but these constructs do not allow one thread to communicate an event to another. This requires synchronization events, which are objects that have one of two states, signaled and un-signaled, that can be used to activate and suspend threads. Threads can be suspended by being made to wait on a synchronization event that is unsignaled, and can be activated by changing the event state to signaled. If a thread attempts to wait on an event that is already signaled, then the thread continues to execute without delay. There are two kinds of synchronization events: AutoResetEvent, and ManualResetEvent. They differ only in that AutoResetEvent changes from signaled to unsignaled automatically any time it activates a thread. Conversely, a ManualResetEvent allows any number of threads to be activated by its signaled state, and will only revert to an unsignaled state when its Reset method is called. Threads can be made to wait on events by calling one of the wait methods, such as WaitOne, WaitAny, or WaitAll. System.Threading.WaitHandle.WaitOne causes the thread to wait until a single event becomes signaled, System.Threading.WaitHandle.WaitAny blocks a thread until one or more indicated events become signaled, and System.Threading.WaitHandle.WaitAll blocks the thread until all of the indicated events become signaled. An event becomes signaled when its Set method is called. In the following example, a thread is created and started by the Main function. The new thread waits on an event using the WaitOne method. The thread is suspended until the event becomes signaled by the primary thread that is executing the Main function. Once the event becomes signaled, the auxiliary thread returns. In this case, because the event is only used for one thread activation, either the AutoResetEvent or ManualResetEvent classes could be used. using System; using System.Threading; class ThreadingExample { static AutoResetEvent autoEvent; static void DoWork() { Console.WriteLine(" worker thread started, now waiting on event..."); autoEvent.WaitOne(); Console.WriteLine(" worker thread reactivated, now exiting..."); } static void Main() { autoEvent = new AutoResetEvent(false); Console.WriteLine("main thread starting worker thread..."); Thread t = new Thread(DoWork); t.Start(); Console.WriteLine("main thrad sleeping for 1 second..."); Thread.Sleep(1000); Console.WriteLine("main thread signaling worker thread..."); autoEvent.Set(); } } For more examples of thread synchronization event usage, see: A mutex is similar to a monitor; it prevents the simultaneous execution of a block of code by more than one thread at a time. In fact, the name "mutex" is a shortened form of the term "mutually exclusive." Unlike monitors, however, a mutex can be used to synchronize threads across processes. A mutex is represented by the Mutex class. When used for inter-process synchronization, a mutex is called a named mutex because it is to be used in another application, and therefore it cannot be shared by means of a global or static variable. It must be given a name so that both applications can access the same mutex object. Although a mutex can be used for intra-process thread synchronization, using Monitor is generally preferred, because monitors were designed specifically for the .NET Framework and therefore make better use of resources. In contrast, the Mutex class is a wrapper to a Win32 construct. While it is more powerful than a monitor, a mutex requires interop transitions that are more computationally expensive than those required by the Monitor class. For an example of using a mutex, see Mutexes. How to: Create and Terminate Threads (C# Programming Guide) How to: Use a Thread Pool (C# Programming Guide) HOW TO: Synchronize Access to a Shared Resource in a Multithreading Environment by Using Visual C# .NET HOW TO: Create a Thread by Using Visual C# .NET HOW TO: Submit a Work Item to the Thread Pool by Using Visual C# .NET HOW TO: Synchronize Access to a Shared Resource in a Multithreading Environment by Using Visual C# .NET
https://msdn.microsoft.com/en-us/library/ms173179(v=VS.80).aspx
CC-MAIN-2017-34
en
refinedweb
When a Revit project (RVT) file contains a family instance and we select the instance and edit the family, it opens up the Family in family editor mode. If users click Save at this point from the Revit User Interface, Revit knows the location of the Family Document from where the Family was loaded initially. Can we extract this file path (location) programmatically? If you want to programmatically access this path of Family Document from the Family Instance, you can do that by traversing the path from Family Instance –> Family Symbol –> Family –> Family Document. But, the problem that came up with this approach was that after Family element, it only provides access to the Document property which only returns the active document in which the Family is loaded in (or where the Family Instance has been created in) – whereas what we need is to access the Family Document from which the Family was loaded. After some further playing around, one of the approach that seemed to work was to follow the workflow as followed using Revit UI – which was to call the EditFamily() on the Family element. This provided access to the Family Document and from this Document element, the PathName could be extracted. The Family.Document had taken us to the wrong direction. The following code snippet illustrates this approach: using System; using System.Collections.Generic; using System.Text; using Autodesk.Revit.Attributes; using Autodesk.Revit.DB; using Autodesk.Revit.UI; namespace Revit.SDK.Samples.HelloRevit.CS { [Transaction(TransactionMode.Manual)] public class Command : IExternalCommand { public Result Execute(ExternalCommandData commandData, ref string message, ElementSet elements) { Document doc = commandData.Application.ActiveUIDocument.Document; foreach (FamilyInstance famInst in commandData.Application.ActiveUIDocument.Selection.Elements) { Document famDoc = doc.EditFamily(famInst.Symbol.Family); TaskDialog.Show("Family PathName", famDoc.PathName); } return Result.Succeeded; } } } UPDATE: Thanks Dan for pointing out that we indeed need to close the Family document after the path name has been extracted - otherwise each of the family documents will throw the save dialog and consequently have to close them manually before Revit is closed.
http://adndevblog.typepad.com/aec/2012/09/accessing-the-path-a-revit-family-document-from-the-family-instance.html
CC-MAIN-2017-34
en
refinedweb
Java date and time program :- Java code to print or display current system date and time. This program prints current date and time. We are using GregorianCalendar class in our program. Java code to print date and time is given below :- Java programming code import java.util.*; class GetCurrentDateAndTime { public static void main(String args[]) { int day, month, year; int second, minute, hour; GregorianCalendar date = new GregorianCalendar(); day = date.get(Calendar.DAY_OF_MONTH); month = date.get(Calendar.MONTH); year = date.get(Calendar.YEAR); second = date.get(Calendar.SECOND); minute = date.get(Calendar.MINUTE); hour = date.get(Calendar.HOUR); System.out.println("Current date is "+day+"/"+(month+1)+"/"+year); System.out.println("Current time is "+hour+" : "+minute+" : "+second); } } Download Date and time program class file. Output of program: Don't use Date and Time class of java.util package as their methods are deprecated means they may not be supported in future versions of JDK. As an alternative of GregorianCalendar class you can use Calendar class.
http://www.programmingsimplified.com/java/source-code/java-program-display-date-time
CC-MAIN-2015-22
en
refinedweb
Agenda See also: IRC log <noah> As noted in my regrets, I have some conflicts with today's call, but I'll try to keep an occasional eye out for IRC, and dial in if something comes up for which I am needed. Thank you. <scribe> Agenda: SW: Other topics? DC: Assume that package URIs stuff would come up some time SW: Issue 61? We will add this SW: Minutes from 16 October: ... and 6 Nov. f2f: <DanC> +1 approve RESOLUTION: Approved as circulated <DanC> (noah, you're ok to scribe 20 Nov?) Meet next on 20 November, scribe duty to Noah, whom failing DanC Meeting of 27 November is cancelled NW: I've reviewed this, and as far as I understand it, I think they are using proxies in the way they are meant to be used SW: What about relation to Generic Resources? NW: Didn't see that explicitly, but any transformation gives a new representation DC: Are there multiple URIs? NW: I think not DC: TV, any thoughts on that? TVR: Not at the moment SW: Anything we need to push on? ... Last Call has actually expired NW: I see no need to do anything other than say "Fine" DC: Can you tell me a typical use case story? NW: There are proxies set up so that e.g. a rich web site goes through the proxy and is transformed to something viewable on your mobile -- I think sidekick exploits this DC: Any good recommendations NW: Well, yes, don't change request headers was one bit DC: Ah, perhaps the HTTP working party should look at this NW: Good idea SW: I will send a courtesy message saying we have nothing to say. . . DC: HST has the ball HST: I foresee progress in the new year ... So we could close the issue w/o completing the action (yet) <DanC> ACTION-23 due 2008-02-01 <trackbot> ACTION-23 track progress of #int bug 1974 in the XML Schema namespace document in the XML Schema WG due date now 2008-02-01 DC: The two are now linked, via the Issue being in state Pending Review SW: Some items already suggested: Self-describing Web, Uniform Access to Metadata, Versioning ... Wrt UAM, JR has an action to produce some words, but not due until next year JR: I will try to get something before us -- at least some slides <noah> I remain somewhat optimistic of having a new Self-Describing Web draft. Bad news: unlikely to be as far ahead of F2F as I would like; Good news: I would expect changes to be well-isolated and easy to review, given thorough discussion we had in Bristol. SW: DO, what about Versioning? DO: I hope to get to it next week or the week after JR: I believe I'm waiting for some input from DO DO: I believe I'm waiting for JR SW: Sounds like you should talk <DanC> action-181? <trackbot> ACTION-181 -- Jonathan Rees to update versioning formalism to align with terminology in versioning compatibility strategies -- due 2008-10-16 -- OPEN <trackbot> <DanC> action-182? <trackbot> ACTION-182 -- David Orchard to provide example for jar to work into the formalism -- due 2008-10-23 -- OPEN <trackbot> <DanC> action-183? <trackbot> ACTION-183 -- David Orchard to incorporate formalism into versioning compatibility strategies -- due 2008-10-23 -- OPEN <trackbot> <DanC> (indeed, the tracker state looks like... or is consistent with... deadlock) SW: JR, DO will talk offline SW: I have suggested giving each member a slot to motivate a topic, one they care about, either new, ongoing or forgotten ... HST, URNsAndRegistries? HST: Yes, I will have new prose in time for f2f <DanC> on tagSoup: DC: Mike Smith is working on a language spec. document for HTML 5 ... ref. TagSoupIntegration ... New W3C travel policy would mean I might get this trip and no others until TPAC SW: So you are asking if we should meet? DC: Yes HST: I had assumed we would meet, planning to buy tickets soon SW: NW and TVR will not be there, DO uncertain. NW and DO will join by 'phone HST: I believe we will have enough people to do useful work SW: We will meet, HST can buy tickets ... I would request more responses when I ask for agenda input <noah> I will be at the December meeting (which if course is convenient for me). SW: Let's look at the list of open actions, by issue: ... Is ACTION-24 a worthwhile thing for Tim to pursue? DC: Well, TBL does say when asked that we should keep this open ... I proposed to close on the basis of the XQuery spec. ... and there's the HTML5 spec's new input on this SW: So the topic title asks a question DC: That's overtaken for sure: W3C specs do support IRIs ... What's at the heart of WebArch, IRIs or URIs -- answer 'yes' <DanC> ACTION-188? <trackbot> ACTION-188 -- Dan Connolly to investigate the URL/IRI/Larry Masinter possible resolution of the URL/HTML5 issue. -- due 2008-10-31 -- OPEN <trackbot> SW: Anyone want to work on this? DC: Even if not, OK to have the issue there as a marker SW: ISSUE-30 / ACTION-176 -- NM, DO, any progress? <DanC> action-176? <trackbot> ACTION-176 -- Noah Mendelsohn to work with Dave to draft comments on exi w.r.t. evaluation and efficiency -- due 2008-09-30 -- OPEN <trackbot> DO: I think NM has made some progress, I request to be released from this, too much load elsewhere <DanC> (noah, are you OK to keep ACTION-176 open without Dave?) SW: ISSUE-34 / ACTION-113 HST: Yes, it will happen someday SW: ISSUE-35 / ACTION-130 XHTML/GRDDL DC: Namespace doc't has been updated SW: If you think it can be closed, please do so, leave a pointer to where the action is addressed DC: OK ... What about the issue? <DanC> action-130: rev 2008/10/14 22:08:29 <trackbot> ACTION-130 Consult with Dan and Ralph about the gap between the XHTML namespace and the GRDDL transformation for RDFa notes added SW: XHTML + RDFa has done it, right? <DanC> close action-130 <trackbot> ACTION-130 Consult with Dan and Ralph about the gap between the XHTML namespace and the GRDDL transformation for RDFa closed HST: As long as the issue is XHTML, we're good TVR: RFDa works fine with HTML HST: I dispute the 'fine' SW: and I wonder about the 'works' [TagSoup digression] SW: Propose to close ISSUE-35 TVR: By pointing to RDFa DC: And GRDDL <DanC> (indeed, -1 on the empty proposal to close; we need a technical decision.) RESOLUTION: Close ISSUE-35 on the basis the RDFa and GRDDL provide the desired solution HST: We need an action to explain the resolution to the public DC: I will take it trackbot, status? <DanC> ACTION: Dan announce decision on rdf-in-html-35 and invite feedback [recorded in] <trackbot> Created ACTION-191 - Announce decision on rdf-in-html-35 and invite feedback [on Dan Connolly - due 2008-11-20]. <scribe> ACTION: Dan to close ISSUE-35 with a public explanation [recorded in] <trackbot> Created ACTION-192 - Close ISSUE-35 with a public explanation [on Dan Connolly - due 2008-11-20]. <noah> Am I right that we instructed me to include in next draft of Self-describing Web a story on how you could follow your nose from HTML media types to RDFa? SW: ISSUE-41 / outstanding actions ... Assuming there will be progress by the F2F JR: Yes <DanC> close action-192 <trackbot> ACTION-192 Close ISSUE-35 with a public explanation closed SW: ISSUE-50 / ACTION-33 <DanC> action-192: dup of 191 <trackbot> ACTION-192 Close ISSUE-35 with a public explanation notes added trackbot, close ACTION-189 <trackbot> ACTION-189 S. Send public comment to www-tag about the XRI proposal and the establishment of base URI. closed HST: Others are indeed open SW: ISSUE-52 / ACTION-150 ... Finding published ... and announced <DanC> action-150: done. see <trackbot> ACTION-150 Finish refs etc on passwords in the clear finding [inc post Sept 2008 F2F updates] notes added <DanC> issue-52: finding: <trackbot> ISSUE-52 Sending passwords in the clear notes added trackbot, close ACTION-150 <trackbot> ACTION-150 Finish refs etc on passwords in the clear finding [inc post Sept 2008 F2F updates] closed DC: Did we hear back from anyone? Is there anyone we should be waiting on? SW: We could ask Ed Rice? DO: I will do so DC: Do we have any recent input from Security Context? SW: Not from the group, no DO: We did our best to address several individual comments SW: Any response to the publication announcement? DO: Not that I'm aware of <DanC> issue-52? <trackbot> ISSUE-52 -- Sending passwords in the clear -- RAISED <trackbot> <DanC> close issue-52 SW: Close the issue now? Wait for Ed? <DanC> issue-52? <trackbot> ISSUE-52 -- Sending passwords in the clear -- CLOSED <trackbot> TVR: Not necessary, close it and notify him as a courtesy SW: ISSUE-54 / three actions wrt TagSoup DC: Recent progress on validator, some of it public. . . <DanC> action-7: <trackbot> ACTION-7 draft a position regarding extensibility of HTML and the role of the validator for consideration by the TAG notes added DC: Blog posting by Olivier Théreaux, which has attracted favourable comment SW: Waiting for Tim on the other two <DanC> ACTION-188 due 20 Nov 2008 <trackbot> ACTION-188 Investigate the URL/IRI/Larry Masinter possible resolution of the URL/HTML5 issue. due date now 20 Nov 2008 HST: Wrt ACTION-145, I still hope TBL will produce a publication from the positive parts of his paper and his TPAC slides DC: I'm about to get going on ACTION-188 <DanC> (I'd like us to keep due dates in the future; if the chair expects tbl to continue work on 116, let's give it a due date in the future... e.g. the ftf agenda timeframe...) SW: ISSUE-57 / three actions <DanC> action-116 due 1 Dec 2008 <trackbot> ACTION-116 Align the tabulator internal vocabulary with the vocabulary in the rules, getting changes to either as needed. due date now 1 Dec 2008 JR: ACTION-184 is about to be done SW: We're expecting something on ACTION-178 for the F2F ... ISSUE-58 / ACTION-163 NW: I still hope to work with Ted Guild on this, it is important SW: ISSUE-60 / three actions NW: I have sent TVR a review SW: I will do ACTION-143 at some point <DanC> (possible ftf fodder: the iphone urls thread: ) SW: ACTION-106 NW: No progress, but I will try to get that ready for the f2f SW: I have done ACTION-190 trackbot, close ACTION-190 <trackbot> ACTION-190 Make the above resolution visible on www-tag closed <Stuart> close action-190 <trackbot> ACTION-190 Make the above resolution visible on www-tag closed ACTION-106: NW sent comments to TVR privately <trackbot> ACTION-106 Make a pass over the WebArch 2.0 doc't which adds a paragraph, and connects up to issues list notes added trackbot, close ACTION-106 <trackbot> ACTION-106 Make a pass over the WebArch 2.0 doc't which adds a paragraph, and connects up to issues list closed TVR: I'm not sure about how to take this forward ... I don't plan to pick it up, except to possibly add new uses ... I don't see how to get it to the right audience. . . SW, DC: Is there a blog article in there? <DanC> found it... TVR: Perhaps. . . DC: Maybe I'll try to adapt it TVR: I will help <scribe> ACTION: Dan to try to draft a blog posting adapted from, with help from TVR [recorded in] <trackbot> Created ACTION-193 - Try to draft a blog posting adapted from, with help from TVR [on Dan Connolly - due 2008-11-20]. <Stuart> issue-61? <trackbot> ISSUE-61 -- URI Based Access to Packaged Items -- OPEN <trackbot> SW: We will discuss that next week <DanC> DC: HTML5 and URLs, reread Doug Crockford's safe JavaScript ... He's added a mode to JSLINT which verifies this ... He's very critical of the work on cross-site access controls ... He has an alternative, namely JSON-request ... What could be the improvement, by using JSON instead of XML? ... We could study that space, perhaps TVR: I tried to find the answer to that question, but didn't see it SW: DC, could you assemble a reading list? ... If we scheduled this at the right time, could you join us by phone, TV? TVR: No, sorry, I will be travelling on the 11, and preparing on the day before -- Tuesday might be possible. SW: Sounds like a good idea in any case, DC, reading list please <DanC> iphone urls thread: DC: There there's the iphone: URL thread: ... I can't get this started yet ... MNot says [tongue in cheek?] "We need an Arch Group for this sort of thing" ... I like tel: . . . blog entry: ??? SW: We can talk about this on a call -- let's find a slot on one of the next two calls <DanC> (blog entry that celebrates tel: support ) TVR: We should maybe write down URI schemes we know about <DanC> (I try to garden somewhat actively) TVR: [lists some] ... It does help to look at these <HST> HST does review the registered and unregistered schemes lists with some regularity <DanC> DC: p2p ones are not lookup + hierarchy ... I am bored by proposals which suggest replacing DNS ... but these don't do that TVR: There are 4 parts to a URI: protocol, host, path and port ... But consider ??? -- doesn't change the host/DNS part, but changes the handler ... Or ado:, as a protocol identifier for local work [missed some] <DanC> (ado isn't among the list in . hmm.) <Norm> Yes, the fact that protocol handlers are easy to register is the interesting angle to me <Stuart> kind of browser architecture stuff... maybe html5 should say something about plugin handlers... DC: SchemeProtocols is a good area to wander around periodically, not necessarly to try to draw hard conclusions SW: ADJOURNED
http://www.w3.org/2008/11/13-tagmem-minutes
CC-MAIN-2015-22
en
refinedweb
Opened 3 years ago Last modified 2 months ago #7325 new bug threadDelay mistreats minBound and maxBound in some configurations Description threadDelay currently treats minBound and maxBound incorrectly in some cases. This breaks the following idiom (as seen in the async package): forever (threadDelay maxBound) On Linux (Ubuntu 10.04 64-bit) without -threaded, threadDelay maxBound returns immediately. For lower numbers on the same order of magnitude, it behaves non-deterministically. For example, given this program: import Control.Concurrent import Control.Monad main = forM_ [6244222868950683224..] $ \i -> do print i threadDelay i threadDelay returns immediately in some cases but not in others. If I compile and run it in bash like this: ghc-7.6.1 -fforce-recomp threadDelay-maxBound.hs ; ./threadDelay-maxBound The bug usually appears, but if I run it like this: ghc-7.6.1 -fforce-recomp threadDelay-maxBound.hs ./threadDelay-maxBound The bug does not appear (threadDelay blocks like it should). Thus, the program is affected by a very subtle difference in how it is invoked. Perhaps it is sensitive to file descriptor numbers. On Windows without -threaded, threadDelay maxBound seems to work, but threadDelay minBound blocks rather than returning immediately. Change History (8) comment:1 Changed 3 years ago by tibbe - Cc johan.tibell@… added comment:2 Changed 3 years ago by simonmar - difficulty set to Unknown - Milestone set to 7.6.2 - Priority changed from normal to high comment:3 Changed 10 months ago by thoughtpolice - Milestone changed from 7.6.2 to 7.10.1 Moving to 7.10.1. comment:4 Changed 10 months ago by kim - Cc simonmar added On OSX 10.9 and ghc 7.8.x I'm seeing the following behaviour: - threadDelay minBound returns immediately (with or without -threaded ) - threadDelay maxBound without -threaded , the program terminates with: maxbound: select: Invalid Argument(similar to #6019) - threadDelay maxBound with -threaded prints on stderr: maxbound: c_poll: invalid argument (Invalid argument) maxbound: ioManagerWakeup: write: Bad file descriptorand then appears to hang. When running it on a different thread, which is then killed from the main thread after some time, the following line is printed in after the above: maxbound: ioManagerWakeup: write: Bad file descriptorThe program makes no progress after the thread got killed. Curiously, on another machine with same OS and ghc versions but more cores, the program crashes instead of hangs. - as described by the reporter, for some values < maxBound but in the same order of magnitude, the errors disappear, but for others they don't. Please do let me know if providing dtruss output would be helpful. comment:5 Changed 10 months ago by AndreasVoellmy - Cc AndreasVoellmy added comment:6 Changed 7 months ago by Feuerbach Another report of this bug: comment:7 Changed 4 months ago by thoughtpolice - Milestone changed from 7.10.1 to 7.12.1 Moving to 7.12.1 See #6019 for the maxBound problem. I haven't investigated the minBound problem yet.
https://ghc.haskell.org/trac/ghc/ticket/7325
CC-MAIN-2015-22
en
refinedweb
- NAME - SYNOPSIS - DESCRIPTION - FEEDBACK - AUTHOR - Hilmar Lapp - APPENDIX - Internal methods NAME Bio::DB::BioSQL::Cluster APPENDIX The rest of the documentation details each of the object methods. Internal methods are usually preceded with a _::ClusterI references a namespace with authority, and possibly a species.. ClusterIs have a BioNamespace as foreign key, and possibly a species.. Bio::ClusterI has annotations as children. Example : Returns : TRUE on success, and FALSE otherwise Args : The Bio::DB::PersistentObjectI implementing object for which the child objects shall be made persistent. remove_children Title : remove_children Usage : Function: This method is to cascade deletes in maintained objects. We need to undefine the primary keys of all contained annotation objects here. Example : Returns : TRUE on success and FALSE otherwise Args : The persistent object that was just removed from the database. Additional (named) parameter, as passed to remove(). remove_members Title : remove_members Usage : Function: Dissociates all cluster members from this cluster. Note that this method does not delete the members themselves, it only removes the association between them and this cluster. Example : Returns : TRUE on success and FALSE otherwise Args : The persistent object for which to remove the members. For Bio::ClusterIs, we need to get the annotation objects.::ObjectFactoryI compliant object to be used for creating the object. populate_from_row Title : populate_from_row Usage : Function: Populates an object with values from columns of the row. Example : Returns : The object populated,. Internal methods) _cluster_factory Title : _cluster_factory Usage : $obj->_cluster_factory($newval) Function: Get/set the Bio::Factory::ObjectFactoryI to use Example : Returns : value of _cluster_factory (a scalar) Args : on set, new value (a scalar or undef, optional) _object_slot Title : _object_slot Usage : $term = $obj->_object_slot($slot, $value); Function: Obtain the persistent L<Bio::Annotation::SimpleValue> representation of certain slots that map to ontology term associations (e.g. size). This is an internal method. Example : Returns : A persistent L<Bio::Annotation::SimpleValue> object Args : The slot for which to obtain the SimpleValue object. The value of the slot. _ontology_term Title : _ontology_term Usage : $term = $obj->_ontology_term($name,$ontology) Function: Obtain the persistent ontology term with the given name and ontology. This is an internal method. Example : Returns : A persistent Bio::Ontology::TermI object Args : The name for the term. The ontology name for the term. Whether or not to find the term.
https://metacpan.org/pod/Bio::DB::BioSQL::ClusterAdaptor
CC-MAIN-2015-22
en
refinedweb
Created on 2010-08-03 22:12 by ideasman42, last changed 2020-11-02 05:17 by ideasman42. Some parts of the python api expect __main__ module dictionary to be the namespace when executing a script, this is true when running a python script from the python binary but NOT true when running a compiled script from the C/API which can lead to bugs which are not easy to solve unless the C/API author knows this. Can somebody review this small patch please. This patch is still relevant, mentioning this since the patch is from a while ago.
https://bugs.python.org/issue9499
CC-MAIN-2021-49
en
refinedweb
I know that this topic has already appeared here and in other places. I tried to follow couple paths and can not make JupyterLab to display dash apps. Probably there is some switch I need to turn on but can’t find out what it is and have no idea where else to look. Need some help, please. Here is how I installed test environment conda create --name env4 python=3.8 scipy pandas jupyterlab numpy "ipywidgets=7.5" matplotlib dash conda activate env4 conda install -c conda-forge -c plotly jupyter-dash ipython kernel install --user --name=env4 jupyter labextension install @jupyter-widgets/jupyterlab-manager plotlywidget@4.12.0 jupyter labextension install jupyterlab-plotly@4.12.0 conda install -c plotly plotly=4.12.0 (last command is there because v4.11 is loaded natively by conda). Here is my JupyterLab extension list: @jupyter-widgets/jupyterlab-manager v2.0.0 enabled OK jupyterlab-dash v0.3.0 enabled OK jupyterlab-plotly v4.12.0 enabled OK plotlywidget v4.12.0 enabled OK Here is some code from Dash website as an example of what I can’t display in JupyterLab. import plotly.graph_objects as go # or plotly.express as px fig = go.Figure() # or any Plotly Express function e.g. px.bar(...) # fig.add_trace( ... ) # fig.update_layout( ... ) import dash import dash_core_components as dcc import dash_html_components as html from jupyter_dash import JupyterDash app = JupyterDash(__name__) # app = dash.Dash() app.layout = html.Div([ dcc.Graph(figure=fig) ]) app.run_server(mode='jupyterlab', port = 8090, dev_tools_ui=True, #debug=True, dev_tools_hot_reload =True, threaded=True) it opens new tab in JupyterLab but tab is blank. The same when I use mode = inline but mode=external produces new tab in browser, as it should. Sorry for so many words but I’m bit frustrated that I can’t make it work. At the same time Plotly graphs are displayed properly. As said, I’ve tried different combinations and none seems to work. Need some ideas what to do. pls.
https://community.plotly.com/t/no-dash-dislay-in-jupyterlab/46652
CC-MAIN-2021-49
en
refinedweb
HashMap basic principle and underlying source code analysis 1. Storage structure of HashMap: HashMap is composed of array, chain structure (linked list) and red black tree. The structure of red black tree is added in JDK 1.8. (the storage structure will change dynamically according to the amount of stored data). Source code implementation: /** * Basic hash box node for most entries. (yes) * For information about the subclass of TreeNode, see below; for information about the subclass of EntryEntry, see LinkedHashMap.) * Node: Data node */ static class Node<K,V> implements Map.Entry<K,V> { /** * hash Value the value obtained by hashing the hashcode value of the key is stored in the Entry to avoid repeated calculation * */ final int hash; /** * key Indexes * */ final K key; /** * data Data domain * */ V value; /** * Next node node * */ Node<K,V> next; /** * Constructor */ Node(int hash, K key, V value, Node<K,V> next) { this.hash = hash; this.key = key; this.value = value; this.next = next; } /** * Get key value * */ public final K getKey() { return key; } /** * Get value * */ public final V getValue() { return value; } /** * key = value * */ public final String toString() { return key + "=" + value; } /** * hashCode hashCode is used to determine the storage address of an object in the hash storage structure; * Note: the same hashCode of two objects does not necessarily mean that two objects are the same * * 1.hashcode For example, there is such a location in memory * 0 1 2 3 4 5 6 7 * And I have a class. This class has a field called ID. I want to store this class in one of the above 8 locations. If it is stored arbitrarily without hashcode, when searching * You need to go to these eight positions one by one, or use algorithms such as dichotomy. * But if hashcode is used, it will improve the efficiency a lot. * There is a field called ID in our class, so we define our hashcode as ID% 8, and then store our class in the location where we get the remainder. than * If our ID is 9 and the remainder of 9 divided by 8 is 1, then we will put the class in the position of 1. If the ID is 13 and the remainder is 5, then we will put the class * Put it in 5 this position. In this way, when looking for this class in the future, you can find the storage location directly by dividing the ID by 8. * * 2.But what if two classes have the same hashcode (we assume that the ID of the above class is not unique), for example, if the remainder * of 9 divided by 8 and 17 divided by 8 is 1, is this legal? The answer is: Yes. So how to judge? At this time, you need to define equals. * In other words, we first judge whether the two classes are stored in a bucket through hashcode, but there may be many classes in this bucket, so we need to find the class we want in this bucket through * equals. * So. Why rewrite hashCode() when equals() is overridden? * Think about it. If you want to find something in a bucket, you must first find the bucket. You don't find the bucket by rewriting hashcode(). What's the use of rewriting equals() * */ public final int hashCode() { return Objects.hashCode(key) ^ Objects.hashCode(value); } /** * Setting a new value will return the old data * */ public final V setValue(V newValue) { V oldValue = value; value = newValue; return oldValue; } /** * Judge whether objects are equal * */ public final boolean equals(Object o) { if (o == this) { return true; } if (o instanceof Map.Entry) { Map.Entry<?,?> e = (Map.Entry<?,?>)o; if (Objects.equals(key, e.getKey()) && Objects.equals(value, e.getValue())) { return true; } } return false; } } Some basic parameters used: /** * Default initial capacity - must be a power of 2. */ static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16 /** * Maximum capacity, which is used if both constructors implicitly specify a higher value using parameters. * Max 1073741824 */ static final int MAXIMUM_CAPACITY = 1 << 30; /** * The load factor to use when not specified in the constructor. * The default loading factor is 0.75 */ static final float DEFAULT_LOAD_FACTOR = 0.75f; /** * Use a tree instead of a list to list bin count thresholds for bin. * When an element is added to a bin with at least so many nodes, the bin is converted to a tree. * The value must be greater than 2 and at least 8 to be related to the assumption of deleting the tree, that is, converting back to the original category box when shrinking. * When the number of elements in the bucket exceeds this value, you need to replace the linked list node with a red black tree node to match the optimization speed * * That is, when the length of the linked list reaches 8, it is transformed into a tree structure * */ static final int TREEIFY_THRESHOLD = 8; /** * Box count threshold used to de tree (split) boxes during sizing operations. * Should be less than TREEIFY_THRESHOLD and up to 6 to engage with the shrinkage detection under removal. * When the capacity is expanded, if the number of elements in the bucket is less than this value, the tree bucket elements will be restored (segmented) into a linked list structure * From tree structure to chain structure */ static final int UNTREEIFY_THRESHOLD = 6; /** * It can be classified as the minimum capacity of the tree. * (Otherwise, if there are too many nodes in the bin, the table will be resized.) Should be at least 4 TREEIFY_THRESHOLD to avoid conflicts between resizing and treelization thresholds. * When the capacity in the hash table is greater than this value, the bucket in the table can be tree shaped * Otherwise, if there are too many elements in the bucket, the capacity will be expanded rather than tree shaped * In order to avoid the conflict between capacity expansion and tree selection, this value cannot be less than 4 * tree_ THRESHOLD (256) * */ static final int MIN_TREEIFY_CAPACITY = 64; Definition of basic structural parameters: /** * The table is initialized on first use and resized as needed. After allocation, the length is always a power of 2. * (In some operations, we also allow zero length to allow the use of boot mechanisms that are not currently needed.) * Main function: save the array structure of Node nodes. */ transient Node<K,V>[] table; /** * Save the cached entrySet(). * Note that the AbstractMap field is used for keySet () and values (). * Main function: Set data structure composed of Node nodes */ transient Set<Map.Entry<K,V>> entrySet; /** * The number of key value mappings contained in this mapping. */ transient int size; /** * The number of structural modifications to the HashMap * Structural modification: * A modification that changes the number of mappings in a HashMap or otherwise modifies its internal structure (for example, re hashing). * This field is used to make the iterator on the collection view of HashMap fail quickly. * (See concurrent modificationexception). */ transient int modCount; /** * The next size value to resize (capacity load factor). * threshold Indicates that the resize operation will be performed when the size of the HashMap is greater than the threshold. * * Usually: threshold = loadFactor * capacity * @serial */ // (after serialization, javadoc is described as true. //In addition, if a table array has not been allocated, //This field will retain the initial array capacity, //Or zero, indicating default_initial_capability.) int threshold; /** * Load factor of the hash table. * * @serial */ final float loadFactor; 2. Initialize HashMap Four initializing constructors are given by default During initialization, you can specify the initial capacity and load factor of hashMap. jdk1.7 calculates when calling the constructor for initialization, but 1.8 initializes when the first put operation is performed. The resize() method undertakes the tasks of initialization and capacity expansion to a certain extent. (initialization is also equivalent to capacity expansion.) /** * Construct an empty Map with a specified initial capacity and load factor * * @param initialCapacity the initial capacity Initialization space * @param loadFactor the load factor Load factor * @throws IllegalArgumentException if the initial capacity is negative or the load factor is nonpositive */ public HashMap(int initialCapacity, float loadFactor) { /* If the initial capacity is less than 0: exception: the initial capacity is illegal */ if (initialCapacity < 0) { throw new IllegalArgumentException("Illegal initial capacity: " + initialCapacity); } //If the initialized capacity is greater than the maximum capacity: 1 < < 30, the initialized capacity becomes the maximum capacity if (initialCapacity > MAXIMUM_CAPACITY) { initialCapacity = MAXIMUM_CAPACITY; } //Load factor is less than or equal to 0. Or no load factor was passed in. Exception thrown: load factor error if (loadFactor <= 0 || Float.isNaN(loadFactor)) { throw new IllegalArgumentException("Illegal load factor: " + loadFactor); } this.loadFactor = loadFactor; /*Initialization parameter threshold */ this.threshold = tableSizeFor(initialCapacity); } - Threshold capacity threshold: the threshold that needs to be resized next time. This calculation method is very interesting: If the given capacity is 3, the closest value is 22 = 4. If the given capacity is 5, the closest value is 23 = 8. If the given capacity is 13, the closest value is 24 = 16. From this, we can draw a law: the function of the algorithm is to change all the values after the highest 1 into 1, and finally add the calculated result + 1. /** * For a given target capacity, the transmitted parameter is transformed into a value to the nth power of 2 * cap The current capacity returns the n-th power of 2 of a cap binary bit through operation. * MAXIMUM_CAPACITY Is the maximum upper limit * Calculation principle: * 5: 0000 0000 0000 0101 * 7: 0000 0000 0000 0111 step 1: Shift a binary number to the right in turn, and then take or with the original value. For the binary of a number, start with the first bit that is not 0 and set all subsequent bits to 1. * 8: 0000 0000 0000 1000 step 2: 7 + 1 -> 8 Get the power of the first 2 greater than 0000 0101 * * However, the above operation is not suitable for 2, 4 and 8, which are originally the n-th power of 2, so the cap - 1 operation is implemented, so the minimum power of 2 (itself) will be obtained * */ static final int tableSizeFor(int cap) { int n = cap - 1; n |= n >>> 1; n |= n >>> 2; n |= n >>> 4; n |= n >>> 8; n |= n >>> 16; return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1; } Other construction methods: /** * Construct an empty HashMap with the specified initial capacity and default load factor (0.75). * If we specify the capacity value, we will generally use the power greater than the first 2 of the value as the initialization capacity * @param initialCapacity the initial capacity. * @throws IllegalArgumentException if the initial capacity is negative. */ public HashMap(int initialCapacity) { this(initialCapacity, DEFAULT_LOAD_FACTOR); } /** * Construct an empty < TT > HashMap < TT >, * It has a default initial capacity (16) and a default load factor (0.75). * If the initialization size is not specified, the default size is 16 and the load factor is 0.75 */ public HashMap() { this.loadFactor = DEFAULT_LOAD_FACTOR; //All other fields are default } /** * Construct a new HashMap, * Its mapping is the same as the specified Map. * HashMap It is created using the default load factor (0.75) and an initial capacity sufficient to save the Map in the specified Map. * * @param m The map whose map you want to place in its map * @throws NullPointerException if the specified map is null */ public HashMap(Map<? extends K, ? extends V> m) { /*The loading factor is 0.75 by default*/ this.loadFactor = DEFAULT_LOAD_FACTOR; putMapEntries(m, false); } 3. put method of HashMap: - If the table is empty, call the resize() method for the first capacity expansion, that is, initialize the HashMap. Allocate initialization capacity for HashMap. - There are no nodes in the bucket. Create a new node, node < K, V > - There are the same nodes p.hash == hash and (k = p.key) == key, which indicates that a hash conflict has occurred, and the new node is the same as the old node. Update old nodes. - Zipper method: loop through the linked list, find the address corresponding to the node, and judge whether to update the data node. Or create a node to judge whether the length of the linked list is 8 (head node + other 7 nodes). Determine whether tree structure is required. - Capacity expansion mechanism: if the length after adding elements is greater than the critical value, call the resize method /** * Associates the specified value with the specified key in the mapping. If the mapping previously contained a mapping for the key, the old value is replaced. * * @param key Specifies the key with which the value will be associated * @param value The value) { return putVal(hash(key), key, value, false, true); } /** * Implements Map.put and related methods. * * @param hash hash for key key hash value of * @param key the key Index key * @param value the value to put value * @param onlyIfAbsent If true, do not change the existing value * @param evict If false, the table is in create mode. * @return If the original value exists, return the previous value; null if none */ final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) { /*Bucket node array*/ Node<K,V>[] tab; /*New node bucket*/ Node<K,V> p; int n, i; //Initialize if the table of the storage element is empty if ((tab = table) == null || (n = tab.length) == 0) { //The initialization length here is 16: resize() expansion. n = (tab = resize()).length; } // (n - 1) & hash: & divide hash method to perform hash calculation. According to the hash value, the node is empty, and a new data node is initialized if ((p = tab[i = (n - 1) & hash]) == null) { //Initialize data node tab[i] = newNode(hash, key, value, null); } //Calculate and find the p node according to the hash value else { //New node Node<K,V> e; // Indexes K k; //p. Hash = = hash: the hash value of the P node is equal to the hash of the new data, and the (k = p.key) = = key index is the same //Or the key is not empty and equal. In short, e and p have the same hash and the same key. Directly use e to overwrite the original p node if (p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k)))) { // e node = = p node (same address) e = p; } /*Indicates that the hash values are the same, but the key s are not the same*/ // If it is a tree node, insert it. Red black tree else if (p instanceof TreeNode) { e = ((TreeNode<K,V>) p).putTreeVal(this, tab, hash, key, value); } // If it is not a tree structure, it belongs to a chain structure. Create a new chain node else { //The length of nodes in the statistical chain, greater than 8, is transformed into a tree for (int binCount = 0; ; ++binCount) { // e = p.next indicates the next node to which the P node points. Each time, the next node is assigned to e node, which is equivalent to traversing the node if ((e = p.next) == null) { p.next = newNode(hash, key, value, null); // The first node is - 1. Plus the head node, the length of the linked list needs to be less than the threshold value of 8. When there are more than 8 nodes, the chain structure will be transformed into a tree structure if (binCount >= TREEIFY_THRESHOLD - 1) { //Chain to tree treeifyBin(tab, hash); } break; } //In the process of node traversal, if the hash value is the same and the key value is the same, exit the loop directly and assign the value to the found node directly if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k)))) { break; } //Update the node and point to the next node every time p = e; } } // Mapping of existing keys. If there is a mapping relationship, replace the original value if (e != null) { V oldValue = e.value; // Judge whether overwrite is allowed and whether value is empty if (!onlyIfAbsent || oldValue == null) { e.value = value; } // Callback to allow LinkedHashMap post operation afterNodeAccess(e); return oldValue; } } ++modCount; //After the size of the hash table has checked the capacity expansion threshold, perform the capacity expansion operation if (++size > threshold) { resize(); } afterNodeInsertion(evict); return null; } Core mechanism: 1. Resize /** * Initialize or increase the table size. * If it is blank, it is allocated according to the initial capacity target maintained in the field threshold. * Otherwise, because we use a power of 2, the elements in each bin must maintain the same index or be offset by a power of 2 in the new table. * * The first method: initialize HashMap using the default construction method. From the above, we can know that HashMap will return an empty table at the beginning of initialization, and thershold is 0. Therefore, the capacity of the first expansion is default_ INITIAL_ Capability is 16. At the same time, threshold = DEFAULT_INITIAL_CAPACITY * DEFAULT_LOAD_FACTOR = 12. * The second method is to initialize HashMap by specifying the construction method of initial capacity. From the following source code, we can see that the initial capacity will be equal to threshold, and then threshold = current capacity (threshold) * DEFAULT_LOAD_FACTOR. * Third: HashMap is not the first expansion. If the HashMap has been expanded, the capacity and threshold of each table will be twice as large as the original. * @return the table */ final Node<K,V>[] resize() { //Save the current table to oldTable Node<K,V>[] oldTab = table; //Length of old table int oldCap = (oldTab == null) ? 0 : oldTab.length; //Threshold of old table int oldThr = threshold; int newCap, newThr = 0; //1. The old table has been initialized if (oldCap > 0) { //If the old capacity is greater than the maximum capacity, to reach the maximum capacity if (oldCap >= MAXIMUM_CAPACITY) { //The threshold is equal to the maximum value of Int type 2 ^ (30) - 1 threshold = Integer.MAX_VALUE; //Unable to expand, return to old table return oldTab; } //1. Expand the capacity of the old value (use the only left digit (old capacity multiplied by 2)) //2. If the capacity after capacity expansion is less than the maximum capacity and the old capacity value is greater than or less than the default capacity (16), double the old threshold (these two conditions must be met) else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY && oldCap >= DEFAULT_INITIAL_CAPACITY) { //The new threshold is twice the old threshold newThr = oldThr << 1; } } // Initial capacity set to threshold //If initialization has not occurred and initialCapacity is specified through the constructor during use, the size of the table is threshold, that is, an integer power greater than the minimum 2 of the specified initialCapacity (which can be obtained through the constructor) else if (oldThr > 0) { newCap = oldThr; } else { //If initialization has not been experienced and initialCapacity is not specified through the constructor, the default value is given (the array size is 16 and the load factor is 0.75) newCap = DEFAULT_INITIAL_CAPACITY; //threshold = loadFactor * capacity newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY); } //After the above method (capacity expansion or initialization) is completed, the capacity operation is completed, but the threshold value is not specified (initialCapacity is specified during normal capacity expansion or initialization), and the threshold value (final capacity * loading factor) is calculated if (newThr == 0) { float ft = (float)newCap * loadFactor; //If the last calculated threshold is less than the maximum capacity and the last determined capacity is less than the maximum capacity, the calculated threshold can be used. If either of the above two conditions is not met, the threshold is the Integer maximum newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ? (int)ft : Integer.MAX_VALUE); } threshold = newThr; @SuppressWarnings({"rawtypes","unchecked"}) Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap]; //Initialize a new table and redefine an array according to the new capacity //Assign the newly created array to the HashMap member variable table = newTab; //The previous table has data if (oldTab != null) { //Because HashMap is array + linked list or array + red black tree (find the corresponding linked list or red black tree according to the array subscript), traverse the original array and find the corresponding linked list or red black tree for operation for (int j = 0; j < oldCap; ++j) { //Temporary node Node<K,V> e; //Array [j] has data for subsequent operations //Assign the head node (root node) of the linked list or red black tree represented under array [j] to e if ((e = oldTab[j]) != null) { //Dispose of old array [j] as empty //I think the operation of these two steps is to move the data at the subscript of the original array from its original position, and then operate the linked list oldTab[j] = null; if (e.next == null) { //If there is only one head node (root node), the corresponding position in the new table can be calculated and inserted directly according to the calculation method of E. hash & (newcap - 1) //It is consistent with the operation in the put method newTab[e.hash & (newCap - 1)] = e; } else if (e instanceof TreeNode) { //If it is a red black tree node ((TreeNode<K,V>)e).split(this, newTab, j, oldCap); } else { //First define five variables (Tail means Tail), so we can understand it this way //loHead lo head loTail lo tail //hiHead hi head hiTail hi tail // The low order above refers to 0 to oldCap-1 of the new array, and the high order specifies oldCap to newCap - 1 Node<K,V> loHead = null, loTail = null; Node<K,V> hiHead = null, hiTail = null; Node<K,V> next; do { next = e.next; // The length of the array must be the nth power of 2 (for example, 16). If the hash value and the length are combined, the effective binary bits of the hash value that can participate in the calculation are the last few bits equivalent to the length binary. If the result is 0, it means that the highest bit of the binary bit of the hash value participating in the calculation must be 0 //Because the binary effective highest bit of the array length is 1 (for example, the binary corresponding to 16 is 10000), only when *.. 0 * * * * and 10000 are combined, the result is 00000 (*. Represents multiple binary bits of uncertainty). In addition, because the modulo operation when positioning the subscript is the sum operation based on the hash value and length minus 1, the subscript = (*. 0 * * * * & 1111) is also = (*. 0 * * * * & 11111). 1111 is a binary of 15 and 11111 is a two-level system of 16 * 2-1, that is, 31 (double capacity expansion). // Therefore, if the hash value is touched with the length of the new array, the mod value will not change. That is, the position of the element in the new array is the same as that in the old array, so the element can be placed in the low-order linked list. if ((e.hash & oldCap) == 0) { //This part is very similar to the operations required to insert a node into the linked list (Figure 1 below shows the final state of the following code when there is only one data on the right, and the final state of multiple data on the left) if (loTail == null) { // If there is no tail, the linked list is empty loHead = e; // When the linked list is empty, the header node points to the element } else { loTail.next = e; // If there is a tail, the linked list is not empty. Hang the element to the end of the linked list. } loTail = e; // Set the tail node as the current element } // If the result of the and operation is not 0, the hash value is greater than the length of the old array (for example, the hash value is 17) // At this point, the element should be placed in the high position of the new array // For example: if the old array has a length of 16, the new array with a length of 32 and a hash of 17 should be placed at the 17th position of the array, that is, if the subscript is 16, then the subscript of 16 already belongs to the high order, the low order is [0-15] and the high order is [16-31] else { if (hiTail == null) { hiHead = e; } else { hiTail.next = e; } hiTail = e; } } while ((e = next) != null); // The linked list composed of low-order elements is still placed in the original position if (loTail != null) { loTail.next = null; newTab[j] = loHead; } // The position of the linked list composed of high-order elements is only offset by the length of the old array. if (hiTail != null) { hiTail.next = null; newTab[j + oldCap] = hiHead; } } } } } return newTab; } The most important thing in resizing is the rehash operation of linked list and red black tree: Usually, when we expand the capacity, we usually expand the length to twice the original. Therefore, the position of the element is either in the original position or moved to the power of 2 in the original position. When expanding the HashMap, we only need to see whether the new bit of the original hash value is 1 or 0. If it is 0, the index does not change. If it is 1, the index becomes "original index + oldCap". Since the new 1 bit is 0 or 1, it can be considered random, so the resize process evenly disperses the previously conflicting nodes into new slots Core mechanism: 2: split tree /** * Split the nodes in the number shape into higher and lower trees, or cancel the tree if the tree is now too small. Call only from resize; * That is, cut the fraction to avoid excessive number * @param map the map * @param tab the table for recording bin heads * @param index the index of the table being split * @param bit the bit of hash to split on */ final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) { TreeNode<K,V> b = this; // Relink to the lo and hi lists, keeping the order TreeNode<K,V> loHead = null, loTail = null; TreeNode<K,V> hiHead = null, hiTail = null; int lc = 0, hc = 0; //Loop through the tree. Because there is a double ended linked list relationship between TreeNode nodes, the linked list relationship can be used for rehash for (TreeNode<K,V> e = b, next; e != null; e = next) { next = (TreeNode<K,V>)e.next; e.next = null; if ((e.hash & bit) == 0) { if ((e.prev = loTail) == null) { loHead = e; } else { loTail.next = e; } loTail = e; ++lc; } else { if ((e.prev = hiTail) == null) { hiHead = e; } else { hiTail.next = e; } hiTail = e; ++hc; } } //After the rehash operation, pay attention to the untreeify or treeify operation according to the length of the linked list if (loHead != null) { if (lc <= UNTREEIFY_THRESHOLD) { tab[index] = loHead.untreeify(map); } else { tab[index] = loHead; //Otherwise it's already treelized if (hiHead != null) { loHead.treeify(tab); } } } if (hiHead != null) { if (hc <= UNTREEIFY_THRESHOLD) { tab[index + bit] = hiHead.untreeify(map); } else { tab[index + bit] = hiHead; if (loHead != null) { hiHead.treeify(tab); } } } } Author: coffee rabbit Link:
https://programmer.help/blogs/basic-principle-and-underlying-analysis-of-hashmap.html
CC-MAIN-2021-49
en
refinedweb
977. Square of ordered array simple single \color{#00AF9B} {simple} simple Give you an integer array nums sorted in non decreasing order, and return a new array composed of the square of each number. It is also required to sort in non decreasing order. Example 1: Input: nums = [-4,-1,0,3,10] Output:[0,1,9,16,100] Explanation: after squaring, the array becomes [16,1,0,9,100] After sorting, the array becomes [0,1,9,16,100] Example 2: Input: nums = [-7,-3,2,3,11] Output:[4,9,9,49,121] Tips: - 1 <= nums.length <= 104 - -104 <= nums[i] <= 104 - nums is sorted in non decreasing order Advanced: Please design an algorithm with time complexity O(n) to solve this problem Method 1: direct sort The idea of this method is very simple. After each item in the array is squared, the whole array is sorted. #include <vector> #include <algorithm> using namespace std; class Solution { public: vector<int> sortedSquares(vector<int> &nums) { vector<int> ans; ans.reserve(nums.size()); for (int num : nums) { ans.emplace_back(num * num); } sort(ans.begin(), ans.end()); return ans; } }; Complexity analysis Time complexity: O(n logn) Spatial complexity: O(logn). The array of answers is not included. We need O(logn) stack space to sort. Reference results Accepted 137/137 cases passed (32 ms) Your runtime beats 46.63 % of cpp submissions Your memory usage beats 56.4 % of cpp submissions (25.3 MB) Method 2: double pointer We can set two pointers left and right to point to the beginning and end of the array respectively. In each loop, compare the sizes of num [left] ^ 2 and num [right] ^ 2, put the largest value at the end of the answer array in reverse order, and then change the corresponding pointer. Process demonstration: #include <vector> using namespace std; class Solution { public: vector<int> sortedSquares(vector<int> &nums) { int n = nums.size(); vector<int> ans(n); for (int left = 0, right = n - 1, k = n - 1; left <= right; k--) { int a = nums[left] * nums[left], b = nums[right] * nums[right]; if (a > b) { ans[k] = a; left++; } else { ans[k] = b; right--; } } return ans; } }; Complexity analysis Time complexity: O(n) Space complexity: O(1). The array of answers is not included. We only need a constant space to store several variables. Reference results Accepted 137/137 cases passed (24 ms) Your runtime beats 85.42 % of cpp submissions Your memory usage beats 78.88 % of cpp submissions (25.2 MB) 189. Rotation array in etc. \color{#FFB800} {medium} secondary Give you an array, rotate the elements in the array K positions to the right, where k is a non negative number. Example 1: input: nums = [1,2,3,4,5,6,7], k = 3 output: [5,6,7,1,2,3,4] explain: Rotate 1 step to the right: [7,1,2,3,4,5,6] Rotate right for 2 steps: [6,7,1,2,3,4,5] Rotate right for 3 steps: [5,6,7,1,2,3,4] Example 2: Input: nums = [-1,-100,3,99], k = 2 Output:[3,99,-1,-100] explain: Rotate 1 step to the right: [99,-1,-100,3] Rotate right for 2 steps: [3,99,-1,-100] Tips: - 1 <= nums.length <= 105 - -231 <= nums[i] <= 231 - 1 - 0 <= k <= 105 Advanced: - Think of as many solutions as possible. There are at least three different ways to solve this problem. - Can you use the in-situ algorithm with spatial complexity O(1) to solve this problem? be careful - The title requires modification on the original array, rather than returning a new array. - Move the whole array nums by k bits to the right - The value of k may exceed the length of the array. That is, move the whole array k/n times, return to the original position, and then move k%n times to get a new array. Therefore, when calculating the subscript, we must not forget that the final result should take the remainder of n. Method 1: use additional arrays This may be the simplest idea of the topic. We can define an array dummy with the same length as the original array nums, and put the results into the new array dummy in order. Finally, replace all items in nums with data in dummy. According to the meaning of the title, in the original array with length n, the item i is moved k bits to the right and stored in the new array, then the subscript of item i in the new array can be obtained from the following mapping relationship: i → ( i + k ) % n i \rightarrow (i+k)\%n i→(i+k)%n Namely: d u m m y [ ( i + k ) % n ] = n u m s [ i ] dummy[(i + k) \% n] = nums[i] dummy[(i+k)%n]=nums[i] #include <vector> using namespace std; typedef unsigned int ui; class Solution { public: void rotate(vector<int> &nums, int k) { int n = nums.size(); vector<int> dummy(n); for (int i = 0; i < n; i++) { dummy[(i + k) % n] = nums[i]; } nums.assign(dummy.begin(), dummy.end()); } }; Complexity analysis Time complexity: O(n) Space complexity: O(n) Reference results Accepted 38/38 cases passed (28 ms) Your runtime beats 49.29 % of cpp submissions Your memory usage beats 27.54 % of cpp submissions (24.9 MB) Method 2: three flips Let's take the array [1,2,3,4,5,6,7] and k=3 as examples to observe the original array and the results: [ 1 , 2 , 3 , 4 , 5 , 6 , 7 ] [ 5 , 6 , 7 , 1 , 2 , 3 , 4 ] [1,2,3,4,\color{red}{5} \color{#000000}{},\color{red}{6} \color{#000000}{},\color{red}{7} \color{#000000}{}] \\ [\color{red}{5} \color{#000000}{},\color{red}{6} \color{#000000}{},\color{red}{7} \color{#000000}{},1,2,3,4] [1,2,3,4,5,6,7][5,6,7,1,2,3,4] Let's flip the K items on the left and the n-k items on the right in the results respectively: [ 7 , 6 , 5 , 4 , 3 , 2 , 1 ] [\color{red}{7} \color{#000000}{},\color{red}{6} \color{#000000}{},\color{red}{5} \color{#000000}{},4,3,2,1] [7,6,5,4,3,2,1] You can find that it is the global flip of the original array. In other words, the process from the original array to the result array is as follows: - Global flip - Left k-term flip - Right n-k term flip Namely: [ 1 , 2 , 3 , 4 , 5 , 6 , 7 ] ↓ 1 [ 7 , 6 , 5 , 4 , 3 , 2 , 1 ] ↓ 2 [ 5 , 6 , 7 , 4 , 3 , 2 , 1 ] ↓ 3 [ 5 , 6 , 7 , 1 , 2 , 3 , 4 ] [1,2,3,4,\color{red}{5} \color{#000000}{},\color{red}{6} \color{#000000}{},\color{red}{7} \color{#000000}{}] \\ {\downarrow}_{1} \\ [\color{red}{7} \color{#000000}{},\color{red}{6} \color{#000000}{},\color{red}{5} \color{#000000}{},4,3,2,1] \\ {\downarrow}_{2} \\ [\color{red}{5} \color{#000000}{},\color{red}{6} \color{#000000}{},\color{red}{7} \color{#000000}{},4,3,2,1] \\ {\downarrow}_{3} \\ [\color{red}{5} \color{#000000}{},\color{red}{6} \color{#000000}{},\color{red}{7} \color{#000000}{},1,2,3,4] [1,2,3,4,5,6,7]↓1[7,6,5,4,3,2,1]↓2[5,6,7,4,3,2,1]↓3[5,6,7,1,2,3,4] These three steps are the same process, that is, flip the array from a start item to an end item. Therefore, these three steps can be regarded as three function calls. We can define this function as reverse(). The flipping process can be completed by double pointers. We can set the two pointers left and right to point to the start item and the end item respectively, exchange the values pointed to by the two pointers each time, and then advance one unit to the middle until they point to the same element (an element does not need to be exchanged with itself) or left runs to the right of right, and the loop ends. Thus, the effective condition of the cycle can be determined as left < right. #include <vector> using namespace std; class Solution { public: void rotate(vector<int> &nums, int k) { int n = nums.size(); k %= n; reverse(nums, 0, n - 1); reverse(nums, 0, k - 1); reverse(nums, k, n - 1); } void reverse(vector<int> &nums, int left, int right) { while (left < right) { swap(nums[left], nums[right]); left++; right--; } } }; Please note that: - The array subscript starts from 0, so the K items on the left of the array are flipped from item 0 to item k-1. The same is true for the right side of the array. - The k value may be greater than the array length N, that is, the whole array is moved k/n times, and then moved k%n times to get the answer. So here we want to set k as k%n. Complexity analysis Time complexity: O(n). In addition to the global flip and the subsequent two respective flips, each element in the array is flipped twice, so the time complexity is O(2n)=O(n). Space complexity: O(1), we only need constant space to store several variables. Reference results Accepted 38/38 cases passed (28 ms) Your runtime beats 49.29 % of cpp submissions Your memory usage beats 91.94 % of cpp submissions (24.2 MB) Method 3: in situ solution Method 1 uses additional arrays to make the space complexity reach O(n); Method 2 traverses each element in the array twice, and the time complexity is O(2n)=O(n). Is there any way to reduce the space complexity to constant space and only need to traverse the array elements once? The answer is yes. We can optimize from method 1. In method 1, our operation on each element is to "put the element directly in its final position". There is no problem with this idea, but the problem lies in our method of traversing elements, which is the fundamental reason why we use additional arrays. In method 1, when the i-th element is processed, that is, placed at the position (i+k)%n, the selection strategy of the next element to be processed is sequential selection, that is, the i+1 element. Under this strategy, the elements originally placed at (i+k)%n are not only overwritten, but also directly discarded. That's why we need the original array to record what was in each position, and then write the results to a new array. Therefore, to optimize method 1, we need to change the selection strategy of the next element to be processed. Method 3 begins to be clear. We put the i-th element at position (i+k)%n and cover the element at that position. Therefore, we should take the (i+k)%n position as the element to be processed, save it, put it at the (i+2*k)%n position, and so on. Taking nums=[1,2,3,4,5,6,7], k=3 as an example, the transformation process of the array is as follows: It can be found that one round of traversal can complete the flip of the array. The end condition of the loop is also very simple, that is, the first item of the array is accessed. But let's focus on another example: num = [- 1, - 100,3,99], k=2. It can be found that a round of traversal can only exchange - 1 and 3. From this, we can conclude that: When the array length n cannot be divided by k, a round of traversal must be able to exchange all data; When n can be divided by k, the number of round s exchanged is the greatest common divisor (gcd) of N and k. So far, the number of cycles of the first layer can be determined as the number of rounds of exchange round. Next, we need to determine the number of exchanges in a round (count). In the case of non divisible (count = = 1), we need to exchange n times per round, that is, one round of exchange is completed; For the case of integer division (count! = 1), we can only exchange n/round times in a round. Namely: c o u n t = { n + 1 r o u n d = 1 n / r o u n d r o u n d ≠ 1 count= \left\{\begin{matrix} n+1 & round=1 \\ n/round & round \neq 1 \end{matrix}\right. count={n+1n/roundround=1round=1 Here are some process demonstrations of other examples: nums=[-1,-100,3,99],k=2: nums=[1,2,3,4,5,6],k=2: nums=[1,2,3,4,5,6],k=3: #include <vector> #include <numeric> using namespace std; class Solution { public: void rotate(vector<int> &nums, int k) { int n = nums.size(); k %= n; if (k == 0) return; int round = gcd(n, k); int count = round == 1 ? n + 1 : n / round; for (int i = 0; i < round; i++) { int last = nums[i]; for (int j = 1; j <= count; j++) { swap(last, nums[(i + j * k) % n]); } } } }; Complexity analysis Time complexity: O(n). N is the length of the array, and each element will be traversed only once. Space complexity: O(1), we only need constant space to store several variables. Reference results Accepted 38/38 cases passed (20 ms) Your runtime beats 92.01 % of cpp submissions Your memory usage beats 98.33 % of cpp submissions (24.2 MB) Animation powered by ManimCommunity/manim
https://programmer.help/blogs/leetcode-learning-plan-algorithm-introduction-c-day-2-double-pointer.html
CC-MAIN-2021-49
en
refinedweb
This site uses strictly necessary cookies. More Information How hard it is? Are any code changes necessary? Since TMP became free I've considered replacing current stuff that uses UI text with it, but since some of the code that uses UI Text was written by someone else (Fungus team, to be precise - even if they use TMP in newest versions I can't update because I had to do few customizations necessary to make it do what I want and as such I'm stuck on old version) and the old coding adage saying "there are two types of code: one that you understand and one written by someone else" I really need to know how hard it is to replace it. TL;DR: How hard it is to replace UI Text with Text Mesh Pro and what steps are necessary to do so? Answer by IgorAherne · Mar 17, 2017 at 11:35 PM I did just that a week ago. Took me one full day to change the whole Main menu + ingame-menu with several pages of content. Just make sure to include using TMPro; and instead of Textuse TextMeshProUGUI on a 2D text. Anything else is same, including variable names using TMPro; Text TextMeshProUGUI Make sure you've spent two hours and watched tutorials about them to generate atlas, familiar with material presets and you are good to go You would not believe how hard I've had to look to find that answer. Thank you!! Answer by m0guz · Mar 18, 2017 at 12:15 AM As @IgorAherne said, most of the thing is same. Only addition to process is to generate atlas which is very easy (TextMesh Pro - Font Asset Creator (in depth)). But I would wait Unity release for native implementation. Yeah, can't really wait for that, demo is almost ready and probably it would break preexisting text components anyway/be like when Shuriken "replaced" legacy particles (both systems coexisting) and I'd have to do what @IgorAherne said. Text Mesh Pro: highlight all words in a link 0 Answers Predict Text Width of String 0 Answers How to setup TextMesh Pro Font Asset to include characters from all languages? 1 Answer TextMesh Pro Thicker Wider Outlines 2 Answers UI components all disappear on Play mode, then stay invisible 2 Answers EnterpriseSocial Q&A
https://answers.unity.com/questions/1326120/how-hard-is-to-replace-ui-text-with-text-mesh-pro.html
CC-MAIN-2021-49
en
refinedweb
Python wrapper for Domino API¶ The Python binding for the Domino API is installable via pip: pip install git+ If you would like to use the Python binding within a Domino workbook session simply include the following line to your project’s requirements.txt file. This will make the Python binding available for each new workbook session (or batch run) started within the project: -e git+ Tip Full documentation for this API is forthcoming. For more information now, see Domino’s public Python bindings project here. Once installed, you can instantiate library either with your API Key or Auth Token File. Token File: To use Token File to instantiate Python domino, you need to pass the path to this token file, either via class constructor (domino_token_file=<Path to Token file>) or via environment variable. export DOMINO_TOKEN_FILE=PATH_TO_DOMINO_TOKEN_FILE If you using Python package in code running in that is already running in Domino, The DOMINO_TOKEN_FILE will be set automatically to be the token file for the user who started the run. API Key: you’ll need to get your API key from your account page. To get your API Key, log into Domino and click on your name on the right hand side of the top menu. Select Account Settings and select the API Key option from the left hand menu. Copy the API key to your clipboard. The Python library will read this key from environment variables, so set it as follows in your shell: export DOMINO_USER_API_KEY=YOUR_API_KEY If you are using the Python package in code that is already running in Domino, the DOMINO_API_USER_KEY variable will be set automatically to be the key for the user who started the run. Note: 1. In case both API Key and Token file are present, default preference will be given to token file. To use API Key instead, unset the (DOMINO_TOKEN_FILE) environment variable 2. Documentation for the Domino REST API can be accessed here . Here is an example of usage: from domino import Domino # By and large your commands will run against a single project, # so you must specify the full project name domino = Domino("chris/canon") # List all runs in the project, most-recently queued first all_runs = domino.runs_list()['data'] latest_100_runs = all_runs[0:100] print(latest_100_runs) # all runs have a commitId (the snapshot of the project when the # run starts) and, if the run completed, an "outputCommitId" # (the snapshot of the project after the run completed) most_recent_run = all_runs[0] commitId = most_recent_run['outputCommitId'] # list all the files in the output commit ID -- only showing the # entries under the results directory. If not provided, will # list all files in the project. Or you can say path=“/“ to # list all files files = domino.files_list(commitId, path='results/')['data'] for file in files: print file['path'], '->', file['url'] print(files) # Get the content (i.e. blob) for the file you're interested in. # blobs_get returns a connection rather than the content, because # the content can get quite large and it’s up to you how you want # to handle it print(domino.blobs_get(files[0]['key']).read()) # Start a run of file main.py using the latest copy of that file domino.runs_start(["main.py", "arg1", "arg2"]) # Start a "direct" command domino.runs_start(["echo 'Hello, World!'"], isDirect=True) # Start a run of a specific commit domino.runs_start(["main.py"], commitId="aabbccddee")
https://docs.dominodatalab.com/en/4.6.2/api/Python_wrapper_for_Domino_API.html
CC-MAIN-2021-49
en
refinedweb
Timer with UI Hello, I want to create something like a timer with the UI. For testing there should be a button which triggers a label to display a count down etc. This is my code. Unfortunately, it only displays the last value (0) of the count down. Why is it like this? Is there anything I can do to directly display the other values (here 3, 2, 1) as well? import ui, time class Main(ui.View): def __init__(self): self.view_main = ui.load_view() self.lbl = self.view_main["lbl_time"] self.view_main.present("sheet") self.time = 4 def did_load(self): self["btn_start"].action = self.change_label_time def change_label_time(self, sender): while self.time > 0: self.time -= 1 time.sleep(1) self.view_main["lbl_time"].text = str(self.time) Main() - Webmaster4o time.sleepisn't really compatible with ui. Instead, use ui.delaylike so: # coding: utf-8 import ui, time class Main(ui.View): def __init__(self): self.view_main = ui.load_view() self.lbl = self.view_main["lbl_time"] self.view_main.present("sheet") self.time = 4 self.view_main["btn_start"].action = self.change_label_time def change_label_time(self, sender): ui.delay(self.decrement, 1) def decrement(self): if self.time > 0: self.time -= 1 self.lbl.text = str(self.time) ui.delay(self.decrement, 1) Main() You're also abusing a custom view class. There's no reason to use a custom view class here, it's effectively being used as a function ( __init__called). It works just as well to do import ui, time view_main = ui.load_view() lbl = view_main["lbl_time"] view_main.present("sheet") time = 4 #the action is this because it takes a function, what this does is calls it with a delay. view_main["btn_start"].action = lambda sender: ui.delay(decrement, 1) #This is still here because ui.delay takes a function. def decrement(): global time if time > 0: time -= 1 lbl.text = str(time) ui.delay(decrement, 1) There were other problems with your code, I don't know why did_load was called in your code or how it worked, the contents of did_load contained an error, it should've been self.view_main["btn_start"].action = self.change_label_time instead of self["btn_start"].action = self.change_label_time I don't know how your button action got set since did_load contained an error and was never called anyway. It'd've be great if you'd included the pyui file on GitHub. His did_load looks okay to me given he is using load_view. time.sleep can work, but it needs to be with code that has ui.in_background wrapped around it. Basically, for things on the ui thread, you will not see the ui update until the function exits. Ui.delay, or a Timer or Thread are recommended though, rather than in_background for something like this, if you care at all about precision. in_backgrounded coded gets run in a single queue, which can lead to surprising results if you think it behaves like an asynchronous thread. Thanks a lot for the quick answers. @Webmaster4o a) Is there a way to export the pyui file somehow and then to upload it to GitHub? Is it maybe done "manually" with the ftp server Ole posted some time ago? Or with any of the shells created for pythonista? b) do you know why there are compatibility issues between time and ui? I am quite curious about such things... @JonB Right now I am not familiar with asynchronous programming. But I think the take away is to skip the decorator in my case. - Webmaster4o In console, try print open('timer.pyui').read()and you can paste that in the forum, at the very least. There is no incompatibility with timr and ui. # coding: utf-8 import ui, time v=ui.View() b1=ui.Button(title='backgrounded',frame=(10,50,100,50)) b2=ui.Button(title='not backgrounded',frame=(10,150,100,50)) @ui.in_background def b1action(sender): for i in xrange(6): sender.title=str(i) time.sleep(0.5) def b2action(sender): # the ui does not update until this method exits! for i in xrange(6): sender.title=str(i) time.sleep(0.5) b1.action=b1action b2.action=b2action v.add_subview(b1) v.add_subview(b2) v.present() the key is understanding that the ui cannot update until your callback exits. So, in the case if the non backgrounded case, if it never exits the ui will appear to hang up. In my example above, it eventually exits, and prints the final number. Use of the in_backgrounddecorator allows the ui to update while the function runs in a background queue. However, note what happens if you use b1action for both buttons, then press both close together.... the first button has to finish before the next one starts, because ui.in_background shares the same queue. That problem can be solved using a Thread, or something like this Tanks. There is only one process at a time on each thread. In this case it is the process triggered by either of the buttons. pressing b2 halts the process of b1. Or in other words uses/ reserves the threat. But not vice versa. Pressing b1 still allows for b2 which then again blocks the thread. Eventually, b2 decorated with your run_async allows for parallel use. Funnily, if I put a @ui.in_background infront of b2action b1 behaves as b2. I have adjusted the code a little bit. The button now calls run_countdown. Now the delay function does not work any more. Why is it like this? How can I make the sport time work? def run_countdown(self): while self.round <= self.rounds: self.time = self.seconds_active self.countdown() self.time = self.seconds_break self.countdown() self.round += 1 def countdown(self): if self.time > 0: self.time -= 1 view_main["lbl_time"].text = str(self.time) ui.delay(self.countdown, 1) The ui will not update until the callback function exits. Look back at your code above, and think through when your callback function exits..... Decorating your first fcn with ui.in_background would fix that problem, but you would also spawn a very large number of countdown() functions (since countdown returns instantly), not whatnyou want, i think. I would recommend something like this: - your button action (not backgrounded) calls a start_game function - start_game would sets the self.time, then calls (via ui.delay) the countdown () function - countdown does mostly what it does now, except you would add an else: which calls game_over. That method can log the final results, decide if there are remaining rounds, and then call start_game again. 4). It is a good idea within any self-calling function or thread to check for the view being on_screen, and then exiting if not. That way, when you close the view, the timer will gracefully exit. You also have to solve the issue of someone pressing the button twice, either start_game exits if a game is already running, or you would set a cancel flag and/or call ui.cancel_all_delays. Thank you Jon. I thought the problem might be the same kind as before. So I will have to think through properly. Hope there are no further questions coming up. I have solved it with the following code. I'd be happy to share the whole script if anyone is interested. def run_countdown(self): self.setup_view_main() self.countdown() def countdown(self): if self.time > 0: if self.time <= 3: speech.say(str(self.time), "en-US", 0) view_main["lbl_time"].text = str(self.time) self.time -= 1 ui.delay(self.countdown, 1) elif self.round < self.data["rounds"]: self.switch_states() self.run_countdown() - list item
https://forum.omz-software.com/topic/2285/timer-with-ui/?
CC-MAIN-2021-49
en
refinedweb
Beginners in the field of data science who are not familiar with programming often have a hard time figuring out where they should start. With hundreds of questions about how to get started with Python for DS on various forums, this post (and video series) is my attempt to settle all those questions. I'm a Python evangelist that started off as a Full Stack Python Developer before moving on to data engineering and then data science. My prior experience with Python and a decent grasp of math helped make the switch to data science more comfortable for me. So, here are the fundamentals to help you with programming in Python. Before we take a deep dive into the essentials, make sure that you have set up your Python environment and know how to use a Jupyter Notebook (optional). A basic Python curriculum can be broken down into 4 essential topics that include: - Data types (int, float, strings) - Compound data structures (lists, tuples, and dictionaries) - Conditionals, loops, and functions - Object-oriented programming and using external libraries Let's go over each one and see what are the fundamentals you should learn. 1. Data Types and Structures The very first step is to understand how Python interprets data. Starting with widely used data types, you should be familiar with integers (int), floats (float), strings (str), and booleans (bool). Here's what you should practice. Type, typecasting, and I/O functions: - Learning the type of data using the type()method. type('Harshit') # output: str - Storing values into variables and input-output functions ( a = 5.67) - Typecasting — converting a particular type of variable/data into another type if possible. For example, converting a string of integers into an integer: astring = "55" print(type(astring)) # output: <class 'str'> astring = int(astring) print(type(astring)) # output: <class 'int64'> But if you try to convert an alphanumeric or alphabet string into an integer, it will throw an error: Once you are familiar with the basic data types and their usage, you should learn about arithmetic operators and expression evaluations (DMAS) and how you can store the result in a variable for further use. answer = 43 + 56 / 14 - 9 * 2 print(answer) # output: 29.0 Strings: Knowing how to deal with textual data and their operators comes in handy when dealing with the string data type. Practice these concepts: - Concatenating strings using + - Splitting and joining the string using the split()and join()method - Changing the case of the string using lower()and upper()methods - Working with substrings of a string Here’s the Notebook that covers all the points discussed. 2. Compound data structures (lists, tuples, and dictionaries) Lists and tuples (compound data types): One of the most commonly used and important data structures in Python are lists. A list is a collection of elements, and the collection can be of the same or varied data types. Understanding lists will eventually pave the way for computing algebraic equations and statistical models on your array of data. Here are the concepts you should be familiar with: - How multiple data types can be stored in a Python list. - Indexing and slicing to access a specific element or sub-list of the list. - Helper methods for sorting, reversing, deleting elements, copying, and appending. - Nested lists — lists containing lists. For example, [1,2,3, [10,11]]. - Addition in a list. alist + alist # output: ['harshit', 2, 5.5, 10, [1, 2, 3], 'harshit', 2, 5.5, 10, [1, 2, 3]] Multiplying the list with a scalar: alist * 2 # output: ['harshit', 2, 5.5, 10, [1, 2, 3], 'harshit', 2, 5.5, 10, [1, 2, 3]] Tuples are an immutable ordered sequence of items. They are similar to lists, but the key difference is that tuples are immutable whereas lists are mutable. Concepts to focus on: - Indexing and slicing (similar to lists). - Nested tuples. - Adding tuples and helper methods like count()and index(). Dictionaries These are another type of collection in Python. While lists are integer indexed, dictionaries are more like addresses. Dictionaries have key-value pairs, and keys are analogous to indexes in lists. To access an element, you need to pass the key in squared brackets. Concepts to focus on: - Iterating through a dictionary (also covered in loops). - Using helper methods like get(), pop(), items(), keys(), update(), and so on. Notebook for the above topics can be found here. 3. Conditionals, Loops, and Functions Conditions and Branching Python uses these boolean variables to assess conditions. Whenever there is a comparison or evaluation, boolean values are the resulting solution. x = True ptint(type(x)) # output: <class bool> print(1 == 2) # output: False The comparison in the image needs to be observed carefully as people confuse the assignment operator ( =) with the comparison operator ( ==). Boolean operators (or, and, not) These are used to evaluate complex assertions together. or— One of the many comparisons should be true for the entire condition to be true. and— All of the comparisons should be true for the entire condition to be true. not— Checks for the opposite of the comparison specified. score = 76 percentile = 83 if score > 75 or percentile > 90: print("Admission successful!") else: print("Try again next year") # output: Try again next year Concepts to learn: if, else, and elifstatements to construct your condition. - Making complex comparisons in one condition. - Keeping indentation in mind while writing nested if/ elsestatements. - Using boolean, in, is, and notoperators. Loops Often you'll need to do a repetitive task, and loops will be your best friend to eliminate the overhead of code redundancy. You’ll often need to iterate through each element of a list or dictionary, and loops come in handy for that. while and for are two types of loops. Focus on: - The range()function and iterating through a sequence using forloops. whileloops age = [12,43,45,10] i = 0 while i < len(age): if age[i] >= 18: print("Adult") else: print("Juvenile") i += 1 # output: # Juvenile # Adult # Adult # Juvenile - Iterating through lists and appending (or any other task with list items) elements in a particular order cubes = [] for i in range(1,10): cubes.append(i ** 3) print(cubes) #output: [1, 8, 27, 64, 125, 216, 343, 512, 729] - Using break, pass, and continuekeywords. List Comprehension A sophisticated and succinct way of creating a list using and iterable followed by a for clause. For example, you can create a list of 9 cubes as shown in the example above using list comprehension. # list comprehension cubes = [n** 3 for n in range(1,10)] print(cubes) # output: [1, 8, 27, 64, 125, 216, 343, 512, 729] Functions While working on a big project, maintaining code becomes a real chore. If your code performs similar tasks many times, a convenient way to manage your code is by using functions. A function is a block of code that performs some operations on input data and gives you the desired output. Using functions makes the code more readable, reduces redundancy, makes the code reusable, and saves time. Python uses indentation to create blocks of code. This is an example of a function: def add_two_numbers(a, b): sum = a + b return sum We define a function using the def keyword followed by the name of the function and arguments (input) within the parentheses, followed by a colon. The body of the function is the indented code block, and the output is returned with the return keyword. You call a function by specifying the name and passing the arguments within the parentheses as per the definition. More examples and details here. 4. Object-Oriented programming and using external libraries We have been using the helper methods for lists, dictionaries, and other data types, but where are these coming from? When we say list or dict, we are actually interacting with a list class object or a dict class object. Printing the type of a dictionary object will show you that it is a class dict object. These are all pre-defined classes in the Python language, and they make our tasks very easy and convenient. Objects are instance of a class and are defined as an encapsulation of variables (data) and functions into a single entity. They have access to the variables (attributes) and methods (functions) from classes. Now the question is, can we create our own custom classes and objects? The answer is YES. Here is how you define a class and an object of it: class Rectangle: def __init__(self, height, width): self.height = height self.width = width def area(self): area = self.height * self.width return area rect1 = Rectangle(12, 10) print(type(rect1)) # output: <class '__main__.Rectangle'> You can then access the attributes and methods using the dot(.) operator. Using External Libraries/Modules One of the main reasons to use Python for data science is the amazing community that develops high-quality packages for different domains and problems. Using external libraries and modules is an integral part of working on projects in Python. These libraries and modules have defined classes, attributes, and methods that we can use to accomplish our tasks. For example, the math library contains many mathematical functions that we can use to carry out our calculations. The libraries are .py files. You should learn to: - Import libraries in your workspace - Using the helpfunction to learn about a library or function - Importing the required function directly. - How to read the documentation of the well-known packages like pandas, numpy, and sklearn and use them in your projects Wrap up That should cover the fundamentals of Python and get you started with data science. There are a few other features, functionalities, and data types that you’ll become familiar with over time as you work on more and more projects. You can go through these concepts in GitHub repo where you’ll find the exercise notebooks as well: Here is 3-part video series based on this post for you to follow along with: Data Science with Harshit You can connect with me on LinkedIn, Twitter, Instagram, and check out my YouTube channel for more in-depth tutorials and interviews. If this tutorial was helpful, you should check out my data science and machine learning courses on Wiplane Academy. They are comprehensive yet compact and helps you build a solid foundation of work to showcase.
https://www.freecodecamp.org/news/python-fundamentals-for-data-science/
CC-MAIN-2021-49
en
refinedweb
Dear openHAB community, The one or the other might have read my mail on the Eclipse SmartHome (ESH) mailing list that I am retiring as a project lead of ESH. I can imagine that this announcement will bring up many questions in the openHAB community. That’s why I would like to give you some background on this decision and share with you the plan about what this means for openHAB. As a bit of history: Eclipse SmartHome had been initiated 5 years ago by taking the openHAB core code base and run it as a separate project. The idea behind this was to allow other (also commercial) solutions to build on this code base and especially to collaborate on it for the benefit of everyone. This worked pretty well over the years and I think many features of openHAB would not have been possible without this collaborative setup. If you have closely followed the activities at ESH in the recent months, you will have noticed though that the number of active maintainers has decreased and that the majority of contributions meanwhile originates from the openHAB community. This effectively means, that although there are commercial companies using ESH, those are rather passive users of the code, who might do a contribution here and there, but who are not actively engaging themselves in driving the project forward. Looking at this situation from an openHAB perspective, this does not seem beneficial for us anymore. Running ESH as a separate project means a lot of effort wrt project management, infrastructure management like builds and releases, etc., which is “unproductive” overhead and time that could be spent more wisely. Being non-commercial, the openHAB community has no interest in taking on this additional effort, since ESH is mainly meant to be used for other commercial solutions. What’s even worse than the additional efforts is that the split of ESH & openHAB creates some “unnatural” boundaries, which keep people from crossing it (like not fixing bugs in ESH while being users of openHAB), just like it is a permanent source of confusion about where to discuss/report/fix things. We should therefore use the situation as an opportunity to simplify the openHAB project organisation by reintegrating the ESH code into the openHAB project itself. The idea is to move all “core” ESH bundles to openhab-core and all ESH add-ons to openhab2-addons and continue their maintenance there. We would like to keep openhab-core in a “framework-style” similar to ESH, so that it is possibly to package it in different ways and that it is not only usable by the openHAB distro. The good news is that this will allow us to have @maggu2810 (as the only left ESH committer) join the openHAB community as a maintainer - his knowledge in OSGi and Karaf is a huge asset. What are the next steps? Starting today, we work on integrating the ESH code into openHAB. For this reason, we would like to ask you to NOT CREATE ANY FURTHER PRs OR ISSUES FOR ECLIPSE SMARTHOME. We will inform you here, once the code has successfully moved to openhab-core and openhab2-addons - the plan is to finish this latest by end of January. Wrt all currently open ESH PRs, we will individually discuss with the authors, how to proceed with those. The ESH issues will stay open and we can keep referencing them when working on fixes. New issues should all go to openHAB repos, though. Some details on what needs to be done: - openHAB repos should switch from EPLv1 to EPLv2 (which I did for openhab-core already) to be compatible with ESH code - Code from will be merged into, which includes a refactoring of its namespace from org.eclipse.smarthometo org.openhab. - Code from will be merged with. While bundles are renamed to org.openhab, the core packages will be kept at org.eclipse.smarthomefor compatibility reasons (so that existing add-ons still nicely work without any changes). The namespace of the automation component will be refactored, though, as hardly anyone depends on it (yet). - We use the opportunity and change the openhab-core build system from Tycho to pure Maven with bnd. This is more modern, brings us many additional features and is highly favoured by many community members as it is easier to get into and understand for non-OSGi developers. The openhab2-addons repo won’t be changed yet, but it is planned in the future as well. - Instead of the ESH IDE setup, a new “openHAB Core” IDE setup (using bndtools) will be provided for developing the core code. Pure add-ons development can continue with the existing “openHAB IDE setup”. - Instead of the Eclipse Forum, development related discussions will be done in the openHAB forum - a new “Development” category has been created today for this purpose. Please note that this is for general discussions/questions. If there are concrete bugs & issues, this should preferable directly go to the according issue tracker on Github. - We will need to update our documentation and remove any references to ESH - any help is highly appreciated here! The Road Ahead Having the framework code under our control, I think it is a good moment to start planning how openHAB should evolve in future. There were many discussions in the past about the bad UX especially for newbies, with too many different UIs and options and the tricky balance between textual and GUI-driven configuration. Imho we should not make any fundamental changes to openHAB 2, but we should start talking about an openHAB 3, which can bring simplifications by reducing complexity (like retiring UIs, removing Xtext from the core, etc, tbd.) with the cost of breaking backward compatibility (while keeping a migration path for everyone). Before going into any discussions & decisions on this, I would like to establish some more formal governance process for openHAB in general - similar to what exists at the Eclipse Foundation with the “Eclipse Development Process”, which clarifies, which people are allowed to actually take decisions and how such a process looks like. I definitely want to distribute the responsibilities, so that the project can better scale - all based on the principle of meritocracy. I’ll come up with some suggestions on such a process document soon and will keep you posted. Summary You see that quite some changes are lying ahead of us, but I am convinced that they will help to better evolve openHAB and serve its growing community. And please be re-assure, that although I step down as a project lead of ESH, I stay fully committed to openHAB and its community! Stay tuned for updates on the process here. Cheers, Kai
https://community.openhab.org/t/the-road-ahead-reintegrating-esh/64670/1
CC-MAIN-2021-49
en
refinedweb
NAME asn1_find_structure_from_oid - Locate structure defined by a specific OID. SYNOPSIS #include <libtasn1.h> const char * asn1_find_structure_from_oid(ASN1_TYPE definitions, const char * oidValue); ARGUMENTS DESCRIPTION Search the structure that is defined just after an OID definition. RETURNS NULL when OIDVALUE not found, otherwise the pointer to a constant string that contains the element name defined just after the OID.should give you access to the complete manual.
https://linux.fm4dd.com/en/man3/asn1_find_structure_from_oid.htm
CC-MAIN-2021-49
en
refinedweb
1586691360 With lots of practice, programming will get gradually get easier, but the bottom line is that programming is hard. It can be made even more difficult with an unfortunate combination of assumption and working problems out on your own. Without a mentor especially, it can be rather difficult to ever even know if one way you are doing something is wrong. We are certainly all guilty of going into our code at a later date and refactoring because we are all learning constantly how to do things in a better way. Fortunately, with the right amount of awareness, correcting these mistakes can make you a significantly better programmer. The greatest way to become a greater programmer is to overcome mistakes and problems. There is always a better way of doing something, it’s finding that specific better way that is challenging. It’s easy to get used to doing one thing or another, but sometimes a bit of a shake-up is needed to really get the ball rolling on becoming a great engineer. Though the “ Not Implemented” error is likely one of the least common errors on this list, I think it’s important to issue a reminder. Raising NotImplemented in Python will not raise a NotImplemented error, but instead will raise a Type error. Here is a function I wrote to illustrate this: def implementtest(num): if num == 5: raise(NotImplemented) Whenever we try to run the function where “ num” is equal to 5, watch what happens: The solution to raising the correct exception is to raise the NotImplementedError rather than raising NotImplemented. To do this, I modified our function: def implementtest(num): if num == 5: raise(NotImplemented) if num == 10: raise(NotImplementedError('This is the right way!')) And running this will give us the proper output: (This one I was guilty of) Default arguments in Python are evaluated once, and the evaluation takes place whenthe function definition is executed. Given that these arguments are evaluated once, each element inbound is used in each call, which means that the data contained in the variable is mutable across each time it is accessed within the function. def add(item, items=[]): items.append(item) What we should do instead is set the value of our parameter to nothing, and add a conditional to modify the list if it doesn’t exist def add(item, items=None): if items is None: items = [] items.append(item) Though this mostly applies to the statistical/DS/ML side of Python users, having immutable data is universally important depending on the circumstances. Inside of an object-oriented programming language, global variables should be kept to a minimum. However, I think it is important to subtitle that claim by explaining that global variables are certainly necessary, and quite alright in some situations. A great example of this is Data Science, where this is a limited amount of object-oriented programming actually going on, and Python is being used more functionally than it typically would. Global variables can cause issues with naming, and privacy while multiple functions are calling on and relying on the same value. A great example of a global variable I would say is okay to do is something like a file-path, especially one that is meant to be packaged along with your Python file. Even in writing a Gtk class and moving a graphical user interface builder should be done privately, rather than globally. Using copy can be objectively better than using normal assignment. Normal assignment operations will simply point the new variable towards the existing object, rather than creating a new object. d = 5 h = d There are two main types of copies that can be performed with the copy module for Python, shallow copy and deep copy. The difference between these two types of copies comes down to the type of variable you want to pass through the function. When using deep copying on variables that are contained within a single byte of data like integers, floats, and booleans or strings, the difference between a shallow copy and a deep copy cannot be felt. However, when working with lists, tuples, and dictionaries I would recommend always deep copying. A shallow copy constructs a new compound object and then (to the extent possible) inserts references into it to the objects found in the original. A deep copy constructs a new compound object and then, recursively, inserts copies into it of the objects found in the original. Given those definitions, it’s easy to see why you might want to use one or the other for a given datatype. import copy l = [10,15,20] h = 5 hcopy = copy.copy(h) z = copy.deepcopy(l) In order to test our results, we can simply check if their variable id is the same with a conditional statement: print id(h) == id(hcopy) False Being a great programmer is about constantly improving, and hopefully, some misconceptions can be cleared up over time. It’s a gradual and painful process, but with lots of practice and even more information, following simple guidelines and advice like this can certainly be rewarding. Sharing “ not-to-dos” like this generally make great conversation and makes everyone involved a great programmer, so I think discussing this can certainly be beneficial regardless of how far along you are on your endless programming journey. #python #programming 1617964379 Python is one of the 10 most popular programming languages of all time, The reason? It offers the flexibility and eases no other programming language offers. Want to develop a GUI for a website, or mobile App? If your answer is yes and I can guarantee in most cases it will then hire dedicated Python developers who have the experience and expertise related to your project requirements from WebClues Infotech. You might be wondering how? WebClues has a large pool of dedicated python developers who are highly skilled in what they do. Also, WebClues offers that developers for hiring at the very reasonable and flexible pricing structure. Hire a Dedicated Python developer based on what you need. Share your requirements here Book Free Interview with Python developer: #hire python developers #hire python developers #hire dedicated python developers india #python developers india #hire dedicated python developers programmers #python developers in usa for hire 1598944263 Looking to build robust, scalable, and dynamic responsive websites and applications in Python? At HourlyDeveloper.io, we constantly endeavor to give you exactly what you need. If you need to hire Python developers, you’ve come to the right place. Our programmers are scholars at this language and the various uses it can be put to. When you Hire Python Developers India you aren’t just getting teams that are whizzes in this field. You are also getting people who ensure that they are au courant with the latest developments in the field and can use this knowledge to offer ingenious solutions to all your Python-based needs. Consult with our experts: #hire python developers india #hire python developers #python developers #python development company #python development services #python development
https://morioh.com/p/13ea846a1d8a
CC-MAIN-2021-49
en
refinedweb
. Plot legends identify discrete labels of discrete points. For continuous labels based on the color of points, lines, or regions, a labeled colorbar can be a great tool. In Matplotlib, a colorbar is a separate axes that can provide a key for the meaning of colors in a plot. Because the book is printed in black-and-white, this section has an accompanying online supplement where you can view the figures in full color (). We'll start by setting up the notebook for plotting and importing the functions we will use: import matplotlib.pyplot as plt plt.style.use('classic') %matplotlib inline import numpy as np As we have seen several times throughout this section, the simplest colorbar can be created with the plt.colorbar function: x = np.linspace(0, 10, 1000) I = np.sin(x) * np.cos(x[:, np.newaxis]) plt.imshow(I) plt.colorbar(); We'll now discuss a few ideas for customizing these colorbars and using them effectively in various situations. plt.imshow(I, cmap='gray'); All the available colormaps are in the plt.cm namespace; using IPython's tab-completion will give you a full list of built-in possibilities: plt.cm.<TAB> But being able to choose a colormap is just the first step: more important is how to decide among the possibilities! The choice turns out to be much more subtle than you might initially expect. A full treatment of color choice within visualization is beyond the scope of this book, but for entertaining reading on this subject and others, see the article "Ten Simple Rules for Better Figures". Matplotlib's online documentation also has an interesting discussion of colormap choice. Broadly, you should be aware of three different categories of colormaps: binaryor viridis). RdBuor PuOr). rainbowor jet). The jet colormap, which was the default in Matplotlib prior to version 2.0, is an example of a qualitative colormap. Its status as the default was quite unfortunate, because qualitative maps are often a poor choice for representing quantitative data. Among the problems is the fact that qualitative maps usually do not display any uniform progression in brightness as the scale increases. We can see this by converting the jet colorbar into black and white: from matplotlib.colors import LinearSegmentedColormap def grayscale_cmap(cmap): """Return a grayscale version of the given colormap""" cmap = plt.cm.get_cmap(cmap) colors = cmap(np.arange(cmap.N)) # convert RGBA to perceived grayscale luminance # cf. RGB_weight = [0.299, 0.587, 0.114] luminance = np.sqrt(np.dot(colors[:, :3] ** 2, RGB_weight)) colors[:, :3] = luminance[:, np.newaxis] return LinearSegmentedColormap.from_list(cmap.name + "_gray", colors, cmap.N) def view_colormap(cmap): """Plot a colormap with its grayscale equivalent""" cmap = plt.cm.get_cmap(cmap) colors = cmap(np.arange(cmap.N)) cmap = grayscale_cmap(cmap) grayscale = cmap(np.arange(cmap.N)) fig, ax = plt.subplots(2, figsize=(6, 2), subplot_kw=dict(xticks=[], yticks=[])) ax[0].imshow([colors], extent=[0, 10, 0, 1]) ax[1].imshow([grayscale], extent=[0, 10, 0, 1]) view_colormap('jet') Notice the bright stripes in the grayscale image. Even in full color, this uneven brightness means that the eye will be drawn to certain portions of the color range, which will potentially emphasize unimportant parts of the dataset. It's better to use a colormap such as viridis (the default as of Matplotlib 2.0), which is specifically constructed to have an even brightness variation across the range. Thus it not only plays well with our color perception, but also will translate well to grayscale printing: view_colormap('viridis') If you favor rainbow schemes, another good option for continuous data is the cubehelix colormap: view_colormap('cubehelix') For other situations, such as showing positive and negative deviations from some mean, dual-color colorbars such as RdBu (Red-Blue) can be useful. However, as you can see in the following figure, it's important to note that the positive-negative information will be lost upon translation to grayscale! view_colormap('RdBu') We'll see examples of using some of these color maps as we continue. There are a large number of colormaps available in Matplotlib; to see a list of them, you can use IPython to explore the plt.cm submodule. For a more principled approach to colors in Python, you can refer to the tools and documentation within the Seaborn library (see Visualization With Seaborn). Matplotlib allows for a large range of colorbar customization. The colorbar itself is simply an instance of plt.Axes, so all of the axes and tick formatting tricks we've learned are applicable. The colorbar has some interesting flexibility: for example, we can narrow the color limits and indicate the out-of-bounds values with a triangular arrow at the top and bottom by setting the extend property. This might come in handy, for example, if displaying an image that is subject to noise: # make noise in 1% of the image pixels speckles = (np.random.random(I.shape) < 0.01) I[speckles] = np.random.normal(0, 3, np.count_nonzero(speckles)) plt.figure(figsize=(10, 3.5)) plt.subplot(1, 2, 1) plt.imshow(I, cmap='RdBu') plt.colorbar() plt.subplot(1, 2, 2) plt.imshow(I, cmap='RdBu') plt.colorbar(extend='both') plt.clim(-1, 1); Notice that in the left panel, the default color limits respond to the noisy pixels, and the range of the noise completely washes-out the pattern we are interested in. In the right panel, we manually set the color limits, and add extensions to indicate values which are above or below those limits. The result is a much more useful visualization of our data. plt.imshow(I, cmap=plt.cm.get_cmap('Blues', 6)) plt.colorbar() plt.clim(-1, 1); The discrete version of a colormap can be used just like any other colormap. For an example of where this might be useful, let's look at an interesting visualization of some hand written digits data. This data is included in Scikit-Learn, and consists of nearly 2,000 $8 \times 8$ thumbnails showing various hand-written digits. For now, let's start by downloading the digits data and visualizing several of the example images with plt.imshow(): # load images of the digits 0 through 5 and visualize several of them from sklearn.datasets import load_digits digits = load_digits(n_class=6) fig, ax = plt.subplots(8, 8, figsize=(6, 6)) for i, axi in enumerate(ax.flat): axi.imshow(digits.images[i], cmap='binary') axi.set(xticks=[], yticks=[]) Because each digit is defined by the hue of its 64 pixels, we can consider each digit to be a point lying in 64-dimensional space: each dimension represents the brightness of one pixel. But visualizing relationships in such high-dimensional spaces can be extremely difficult. One way to approach this is to use a dimensionality reduction technique such as manifold learning to reduce the dimensionality of the data while maintaining the relationships of interest. Dimensionality reduction is an example of unsupervised machine learning, and we will discuss it in more detail in What Is Machine Learning?. Deferring the discussion of these details, let's take a look at a two-dimensional manifold learning projection of this digits data (see In-Depth: Manifold Learning for details): # project the digits into 2 dimensions using IsoMap from sklearn.manifold import Isomap iso = Isomap(n_components=2) projection = iso.fit_transform(digits.data) We'll use our discrete colormap to view the results, setting the ticks and clim to improve the aesthetics of the resulting colorbar: # plot the results plt.scatter(projection[:, 0], projection[:, 1], lw=0.1, c=digits.target, cmap=plt.cm.get_cmap('cubehelix', 6)) plt.colorbar(ticks=range(6), label='digit value') plt.clim(-0.5, 5.5) The projection also gives us some interesting insights on the relationships within the dataset: for example, the ranges of 5 and 3 nearly overlap in this projection, indicating that some hand written fives and threes are difficult to distinguish, and therefore more likely to be confused by an automated classification algorithm. Other values, like 0 and 1, are more distantly separated, and therefore much less likely to be confused. This observation agrees with our intuition, because 5 and 3 look much more similar than do 0 and 1. We'll return to manifold learning and to digit classification in Chapter 5.
https://nbviewer.ipython.org/github/donnemartin/data-science-ipython-notebooks/blob/develop/matplotlib/04.07-Customizing-Colorbars.ipynb
CC-MAIN-2021-49
en
refinedweb
GridSearch is not Enough: Part Two. There are many ways to solve the wrong problem. A most iconic one is to fail to recognize Goodhart’s law; when a metric becomes a target to be optimized, it risks no longer being a useful metric There are many anecdotes about this phenomenon on Wikipedia. The craziest one involves cobras during colonial India. The goal was to reduce the risk of getting poisoned by reducing the number of snakes. As an incentive, the idea was to pay people to deliver dead cobras. This plan worked until people realized they could breed the snakes and make more money by doing so. Once the government realized this, they stopped paying for the cobras. This resulted in the corba breeders releasing the snakes into the wild. As a result, the program effectively increased the number of wild cobras. There’s a similar story about rats during French colonial rule in Vietnam. But it’s not just the Wiki, the data-oriented enterprise is also getting a fair share of tales. I’ll list a few fables that cannot be confirmed or denied. Back in the hey-day of data science, a large firm had an incentive program for their data scientists. If a scientist came up with an AB test that prooved statistically (1%) significant, then they would receive a hefty bonus. The goal was to promote data science and to incentivize great data scientists. The irony was that this metric was easy to hack if you were just that. The clever ones immediately introduced 1000 random AB tests, and then the law of large numbers guaranteed their bonus for them. Once the bonus was “earned” the best thing to do is to leave the company (before anyone finds out what happened). You’d be correct to assume that the bonus policy effectively caused the excellent data scientists (the ones that understood probability theory) to leave the company. There was an article in Gartner that discussed the importance of bounce rate when you are optimising your website for new users. This inspired a consultancy, we’ll call them SuitConsulting, to stress their client on the importance of this metric. They had meetings and made it clear to the analysts: they needed to find ways to increase the bounce rate. A junior analytical consultant from SuitConsulting came with an idea: how about we add a flashy banner before we let the user on the main page. The idea being that this new banner would set the mood for th new user. A new version of the page was made and the analyst started looking at the bounce rate; it had indeed improved! SuitConsulting made a presentation out of this, presented it to upper management, got paid and left with a new whitepaper demonstrating their expertise in web analytics. By the time management noticed a drop in new users and sales, SuitConsulting was already gone. They hadn’t realised that by introducing the banner they were left with the heavy users allready familiar with the product. These users would not leave immediately but all the new users were turned off by requiring a new click. At a video-streaming service, management was concerned that the recommender engine would not serve recommendations that people were interested in. Rightly so, the goal of the recommender here was to bring content to the users that would broaden their horizon. The streaming service wanted to be careful that they did not perform any click baiting. This made the science team consider. Maybe a different cost-function would help. Maybe the algorithm should only get a rewardif the recommended video actually got watched for at least 75% of its total length. As the first version of this algorithm was pushed, the algorithm started scoring really well on this metric. A side effect of the algorithm was that the algorithm really started favoring videos that were easy to watch for at least 75% of its total length: 2-minute fragment videos. These videos were typically the videos that were designed to do click-baiting. The goal of a metric should be limited to being a proxy since they are easily perverted and misinterpreted. They should not replace common sense or judgment. Metrics do make me wonder a bit about my own profession. When you do machine learning, even cross-validations, you typically optimize for a single parameter. Then I got reminded of an example that demonstrates why we should be worried. The dataset below involves chickens. The use-case is to predict the growth per chicken such that you can determine the best diet and also predict the weight of the chicken. It’s a dataset that comes with the R language, but I’ve also made it available on scikit-lego. In the code below, you’ll see it being used in a grid-search. from sklearn.ensemble import GradientBoostingRegressor from sklearn.model_selection import GridSearchCV from sklego.datasets import load_chicken = load_chicken(give_pandas=True) df = df[['time', 'diet']], df['weight'] X, y = GridSearchCV(estimator=GradientBoostingRegressor(), mod =True, iid=10, cv=-1, n_jobs={'n_estimators': [10, 50, 100], param_grid'max_depth': [3, 5, 7]}) mod.fit(X, y) This model is cross-validated, and the predictions of the best model are shown below. We can see that there might be a preference for diets. This model has been selected because it is the best at predicting the chickens’ weight. We could even extend the grid-search by looking for more types of models. The problem is that it does not matter since we’re dealing with a vanity metric. Let’s consider the chart of all the growing paths for all the chickens. There’s a couple of paths that indicate that some chickens died prematurely. If we recognize that our model has no way of capturing this, then we should also acknowledge how the grid-search is a dangerous distraction. It distracts us from understanding our data. It may even be the case that the diet with the best mean growth is also the diet with the most premature deaths. There’s danger in the act of machine learning. You might make the argument that some of my examples merely demonstrate people putting faith in a faulty metric. There’s sense in that. Not all metrics are super harmful. That said, it is the act of optimizing religiously that is a natural effect if you introduce a metric. This is what so dangerous about them. Giving it to an algorithm will only make it worse. The algorithm will suggest that you need to focus on tuning instead of making you wonder if you’re solving the right problem. Let’s be frank. If we’re dealing with a vanity metric, then an algorithm sure is a great way to hide it. A grid search is not enough to protect you against this. For attribution, please cite this work as Warmerdam (2019, Oct. 16). koaning.io: Goodhart, Bad Metric. Retrieved from BibTeX citation @misc{warmerdam2019goodhart,, author = {Warmerdam, Vincent}, title = {koaning.io: Goodhart, Bad Metric}, url = {}, year = {2019} }
https://koaning.io/posts/goodheart-bad-metric/
CC-MAIN-2021-49
en
refinedweb
What's New in Pylint 2.4¶ - Release 2.4 - Date 2019-09-24 Summary -- Release highlights¶ New checkers¶ import-outside-toplevel This check warns when modules are imported from places other than a module toplevel, e.g. inside a function or a class. Added a new check, consider-using-sys-exit This check is emitted when we detect that a quit() or exit() is invoked instead of sys.exit(), which is the preferred way of exiting in program. Close #2925 Added a new check, arguments-out-of-order This check warns if you have arguments with names that match those in a function's signature but you are passing them in to the function in a different order. Close #2975 Added new checks, no-else-breakand no-else-continue These checks highlight unnecessary elseand elifblocks after breakand continuestatements. Close #2327 Added unnecessary-comprehensionthat detects unnecessary comprehensions. This check is emitted when pylintfinds list-, set- or dict-comprehensions, that are unnecessary and can be rewritten with the list-, set- or dict-constructors. Close #2905, redeclared-assigned-name This check is emitted when pylintdetects that a name was assigned one or multiple times in the same assignment, which indicate a potential bug. Close #2898 Added a new check, self-assigning-variable This check is emitted when we detect that a variable is assigned to itself, which might indicate a potential bug in the code application. For example, the following would raise this warning: def new_a(attr, attr2): a_inst = Aclass() a_inst.attr2 = attr2 # should be: a_inst.attr = attr, but have a typo attr = attr return a_inst Close #2930 Added a new check property-with-parameterswhich detects when a property has more than a single argument. Close #3006 Added subprocess-run-checkto handle subprocess.run without explicitly set checkkeyword. Close #2848 We added a new check message dict-iter-missing-items. This is emitted when trying to iterate through a dict in a for loop without calling its .items() method. Closes #2761 We added a new check message missing-parentheses-for-call-in-test. This is emitted in case a call to a function is made inside a test but it misses parentheses. A new check class-variable-slots-conflictwas added. This check is emitted when pylintfinds a class variable that conflicts with a slot name, which would raise a ValueErrorat runtime. For example, the following would raise an error: class A: __slots__ = ('first', 'second') first = 1 A new check preferred-modulewas added. This check is emitted when pylintfinds an imported module that has a preferred replacement listed in preferred-modules. For example, you can set the preferred modules as xml:defusedxml,json:ujsonto make pylintsuggest using defusedxmlinstead of xmland ujsonrather than json. A new extension broad_try_clausewas added. This extension enforces a configurable maximum number of statements inside of a try clause. This facilitates enforcing PEP 8's guidelines about try / except statements and the amount of code in the try clause. You can enable this extension using --load-plugins=pylint.extensions.broad_try_clauseand you can configure the amount of statements in a try statement using --max-try-statements. Other Changes¶ Don't emit protected-accesswhen a single underscore prefixed attribute is used inside a special method Close #1802. OK: if len(x) == 0: pass while not len(x) == 0: pass assert len(x) > 5, message KO: if not len(x): pass while len(x) and other_cond: pass assert len(x), message A file is now read from stdin if the --from-stdinflag is used on the command line. In addition to the --from-stdinflag a (single) file name needs to be specified on the command line, which is needed for the report. The checker for ungrouped imports is now more permissive. The import can now be sorted alphabetically by import style. This makes pylint compatible with isort. The following imports do not trigger an ungrouped-imports anymore import unittest import zipfile from unittest import TestCase from unittest.mock import MagicMock The checker for missing return documentation is now more flexible. The following does not trigger a missing-return-doc anymore def my_func(self): """This is a docstring. Returns ------- :obj:`list` of :obj:`str` List of strings """ return ["hi", "bye"] #@ signature-mutatorsCLI and config option was added. With this option, users can choose to ignore too-many-function-args, unexpected-keyword-arg, and no-value-for-parameter for functions decorated with decorators that change the signature of a decorated function. For example a test may want to make use of hypothesis. Adding hypothesis.extra.numpy.arrays to signature_mutators would mean that no-value-for-parameter would not be raised for: @given(img=arrays(dtype=np.float32, shape=(3, 3, 3, 3))) def test_image(img): ... Allow the option of f-strings as a valid logging string formatting method. logging-fstring--interpolation has been merged into logging-format-interpolation to allow the logging-format-style option to control which logging string format style is valid. To allow this, a new fstr value is valid for the logging-format-style option. --list-msgs-enabledcommand was added. When enabling/disabling several messages and groups in a config file, it can be unclear which messages are actually enabled and which are disabled. This new command produces the final resolved lists of enabled/disabled messages, sorted by symbol but with the ID provided for use with --help-msg.
https://pylint.pycqa.org/en/latest/whatsnew/2.4.html
CC-MAIN-2021-49
en
refinedweb
Important: Please read the Qt Code of Conduct - ListView contentY changes on move When clicking an item in the ListView, it should be moved to the first implementation. I wanted to animate the opacity, but it seems not to be possible. Then I saw that the contentY of the listView changes, the moved item is at contentY -105. Am I doing something wrong, is this the desired behaviour or is it a bug? @ import QtQuick 2.0 ListView { id: listView width: 300; height: 350 model: ListModel { ListElement{borderColor: "red"} ListElement{borderColor: "blue"} ListElement{borderColor: "green"} ListElement{borderColor: "yellow"} ListElement{borderColor: "purple"} ListElement{borderColor: "pink"} ListElement{borderColor: "red"} ListElement{borderColor: "grey"} } onContentYChanged: console.log("contentY: " + contentY) spacing: 5 delegate: Rectangle { width: 200 height: 100 border.width: 2 border.color: borderColor MouseArea { anchors.fill: parent onClicked: listView.model.move(index, 0, 1) } } cacheBuffer: 150*count } @ Hi, Sorry, but can you elaborate the problem clearly ? When you move the item, i think it's obvious that the y position would change. On the side note, Can you try "PathView": ? I think it fits your requirement. It is clear for me that the contentY will change, but I am wondering why it changes to a negative value. I would think that the move operation always moves it to 0 or a positive pixel position Hi, I think it does actually set it to 0. I tested it as follows: @ onClicked: { listView.model.move(index, 0, 1); currentIndex = index; positionViewAtIndex(index,ListView.Beginning); console.log("C.Y:",contentY) } @ The negative value is seen when you move the item by mouse by clicking on it and then dragging down. This i think is due to the boundsBehavior which has Flickable.DragAndOvershootBounds as default and hence it overshoots a little bit. Try setting it to Flickable.StopAtBounds and contentY would always be > 0 Thank you for your answer! I also tried your code, but for me it still shows contentY < 0 when I scroll up. I just tried setting the Flickable.StopAtBounds, but sadly the contentY is still < 0 Ok. Not sure then. I'd suggest you to ask this at the Qt developer mailing list. Hi, yes this is documented behaviour : The top postion of a flickable is defined by originY and this can be a negative number : check it out here : originX : real originY : should use the originY value as the start position, instead of 0. OriginY can be negative, but should work fine as long as you take that into account in your calculations.
https://forum.qt.io/topic/43992/listview-contenty-changes-on-move/1
CC-MAIN-2021-49
en
refinedweb
Why didn't anyone use this obvious variant? proc lambda {p b} { set name [list lambda $p $b] if {[info procs $name] eq ""} { proc $name $p $b } return $name } General lambda use cases work well: % [lambda {s1 s2} {puts "s1=$s1 s2=$s2"}] asd cvb s1=asd s2=cvb % set aaa [lambda {s1 s2} {puts "s1=$s1 s2=$s2"}] lambda {s1 s2} {puts "s1=$s1 s2=$s2"} % $aaa q w s1=q s2=w Something wrong? Personally I think the [apply] method that we now have in 8.5 is the way to go. Lambdas are after all merely values. So there is no reason to treat a lambda as anything other than a string until you want to use it as a function. So: # In tcl a lambda is simply a string: set aaa {{s1 s2} {puts "s1=$s1 s2=$s2"}} # Only when you want to use it does it need special treatment: apply $aaa q w..... ;-) > > See "If we had no proc" > > OTOH since variables are locally scoped writing tcl code in pure > lambdas is going to be annoying: > > set something {{} { > global do_something print ;# must "import" all lambdas before > using :-( > $do_something here > $print Done! No.. just say $::do_something, $::print. Or if you use [interp alias] like in the Wiki page mentioned, you have global names back :^) We WILL have ;) there's still no release. At least, I could use this lambda for backward compatibility. Tcl 8.4 is actual standard now. >..... ;-) Procs are compiled only once. And what about apply-lambda? No ;) There are arrays AFAIK they get their bytecodes cached in the internal rep, so they only get compiled once like procs. Michael Correct. Donal. There's no reason why you couldn't implement [apply] in 8.4. It's not a syntax extension, it's just a special kind of [eval]. Here's one possible implementation: proc apply {lambda args} { # Sanity check: if {[llength $lambda] != 2} { error "malformed lambda" } foreach {vars script} $lambda break # Process the args: if {[lindex $vars end] == "args"} { set vars [lrange $vars 0 end-1] set vals [lrange $args 0 [expr {[llength $vars]-1}]] set args [lrange $args [llength $vars] end] if {[llength $args] < [llength $vars]} { error "wrong # of args" } } else { set vals [lrange $args 0 [expr {[llength $vars]-1}]] if {[llength $args] != [llength $vars]} { error "wrong # of args" } } # Done preprocessing arguments, # now do the eval: eval [string map [list \ %VARS% $vars \ %VALS% $vals \ %ARGS% $args \ %SCRIPT% $script \ ] { if {[llength {%VARS%}]} { foreach {%VARS%} {%VALS%} break } %SCRIPT% }] } In fact, I don't see anything from the code above that requires Tcl version > 7.3. So in theory we could have had lambdas way back in Tcl 7. The problem is not that Tcl didn't (or doesn't) support lambdas. The problem is lack of imagination of how to implement lambdas in a tclish way. I don't know who's idea is it to implement [apply] but the paradigm shift of having a command to create lambdas to having a command that treats strings as lambdas is genius -- very tclish. Actually, the word "apply" is not needed. It is like using the word "CALL" for procedure invocation. In Tcl, the first word IS the proc name to be called. So why "apply"? Being the first word in line, lambda must be automatically called. Because the lambda is an anonymous proc - it has no name. As you said, the first word IS the proc name - so it can't be the lambda. One could do without - e.g. with some [unknown] trickery as shown above. But that's not very efficient.). You don't have to store lambdas in variables, you can also [interp alias] them: interp alias {} do_something {} apply {{..} {...}} Although, of course, if you are doing this then it is best to just use a regular named procedure. The most likely use of a lambda is when it is passed either to or from a procedure as an argument/return value, so no importing would be needed. -- Neil I would say, that tricks are very inefficient :) - hope they are not going to be common. Transparent lambda calls should have been built-in, not emulated. >). Well.. it is not just so simple. There are already several separate tables to search for command name: built-in commands, global procs, package procs, namespaces, aliases.. The interpreter just need to recognize the special form of lambda as a "command name". This will make happy everyone :) Moreover.. I hope, lambdas are not being compiled every time before execution. There must be some kind of cache. So, just one more table to lookup.. As for apply, it should accept ordinary proc names then. No. All of those things are commands. They are all found in a command table. -- | Don Porter Mathematical and Computational Sciences Division | | donald...@nist.gov Information Technology Laboratory | | NIST | |______________________________________________________________________| Indeed, best to avoid such tricks in production code. > Transparent lambda calls should have been built-in, not emulated. Why? Does it seriously pain you having to write [apply $fun a b] rather than just [$fun a b]? That seems to me like a fairly small matter of syntax for very little gain over what [apply] already provides. Especially when you consider that lambdas are typically used as callbacks, and callbacks are usually evaluated with either [eval] or [uplevel] which neatly allows you to avoid any problems. For example, did you realise that [apply] works nicely with pretty much every command in the Tcl and Tk libraries that takes a callback? # I always recommend using a constructor for lambdas: proc lambda {params body} { list apply [list $params $body] } lsort -command [lambda {a b} { ... }] $xs http::geturl $url -command [lambda tok { ... }] after 1000 [lambda {} { puts "Hello!" }] socket -server [lambda {sock addr port} { ... }] 8080 trace add variable foo write [lambda {v1 v2 op} { ... }] etc etc These all work fine, as do the great majority of callbacks in tcllib and other packages. Indeed, I can't think of a counter-example of the top of my head. So, what exactly isn't transparent about lambdas using [apply]? ... > The interpreter just need to recognize the special form of lambda as a > "command name". This will make happy everyone :) Such an idea was proposed and rejected at the time. [apply] works fine without any new special forms being introduced and is syntactically convenient for 95%+ of all use cases I can think of. > > Moreover.. I hope, lambdas are not being compiled every time before > execution. There must be some kind of cache. That is correct, the byte-code is cached. > So, just one more table to lookup.. The byte-code is cached directly in the lambda internal representation, so there is no table lookup (that I know of). If you feel there are performance problems with [apply] then supply some figures. -- Neil The worst thing is that lambda implementation proposed in 8.5 doesn't make lambdas look like ordinary procs. In every language that support lambdas, it doesn't matter whether a function has name or not. Reference to lambda is equal to named function reference in any use case. But what do we have in Tcl? If I store an ordinary proc "reference", it is called this way: % set XX puts % $XX 123 But if I store lambda, I have to KNOW that! % set XX {{x} {puts $x}} % $XX 123 ;# This will NOT work! % apply $XX 123 What's next? Suppose, "apply" will accept proc names. But I don't want to write dummy "apply" for every call. It's not Fortran! :) Sweet.. so the variable itself is marked as a lambda then? (this means that lambda is just a "type" of object.. like integers, strings, arrays and lists?) If so does [eval] do the same? Also, does it mean that the following doesn't get bytecompiled? : # lambda not stored in a variable: apply {{x y} {set x [expr {$y*$x}];puts $x}} 20 20 Okay, I have really overlooked this. This is handy. > The byte-code is cached directly in the lambda internal representation, > so there is no table lookup (that I know of). If you feel there are > performance problems with [apply] then supply some figures. This means that variable containing "{{a b} {return a+b}}" will be compiled (at first use? at assignment? - it doesn't matter) and cached.}} ;) And one more advantage. With apply, deep lambda call stack would have incomprehensible look - bunch of "apply"'s. On the contrary, if one had lambdas as commands, they would be visible on the stack. [info level] would remain usable as well. You are not "thinking in Tcl". The string "{{a b} {return a+b}}" is just a string. When [apply] encounters it, it interprets it as an anonymous proc and runs it. If you send it to eg [llength], it will be interpreted as a list. Values are just strings, commands may interpret them in different manners. An interesting exercise is to type at the prompt set set set and then ask: is "set" now a command? a variable? a variable's value? The only answer reasonable answer in Tcl is "yes" ... The fact that anonymous procs are bytecompiled (and the bytecodes are saved/cached) is just a performance hack. A variable cannot be compiled - its value can. Note that when you do set L0 {{a b} {return a+b}} set L1 $L0 {*}$L0 1 2 the value of L0 is now bytecompiled ... as is the value of L1, they are the same! >}} ;) Considered, rejected. If lambdas are command names you lose the automatic lifetime management ("garbage collection"), which is very important. Commands are long-lived, they exist and occupy memory as long as you do not delete them. These anonymous procs do not. You may find it interesting to read the tips in this area: and. > And one more advantage. With apply, deep lambda call stack would have > incomprehensible look - bunch of "apply"'s. On the contrary, if one > had lambdas as commands, they would be visible on the stack. [info > level] would remain usable as well. Would? Have you actually tried it? % proc showCaller {} {puts **[info level -1]; moo} % apply {args showCaller} can I see this? **apply {args showCaller} can I see this? invalid command name "moo" % set errorInfo invalid command name "moo" while executing "moo" (procedure "showCaller" line 1) invoked from within "showCaller" (lambda term "args showCaller" line 1) invoked from within "apply {args showCaller} can I see this?" If you have concrete suggestions as to how to make the call stack clearer or more useful, please file a bug or RFE. > But you propose to store "lambda {{a b} {return a+b}}" everywhere > which is NOT a lambda-expression. Therefore, it may not be compiled > and cached. > This wat, the lambda expression itself gets generated, assigned and > compiled upon each call! No - it's not stored, but "executed immediately". Take > > trace add variable foo write [lambda {v1 v2 op} { ... }] This is sugar-coated for > > trace add variable foo write [list {v1 v2 op} { ... }] and what gets stored is the two-element {argl body} list. This, when first [apply]ed, gets compiled, and on later calls just the bytecode is executed. Aargh!!! Serves me right for posting before breakfast ... Either of set L0 {{a b} {return a+b}} set L1 $L0 apply $L0 1 2 or set L0 [list apply {{a b} {return a+b}}] set L1 $L0 {*}$L0 1 2 An by the way: I am using your proposed body ... it returns the string "a+b" for any input. I do imagine you wanted {{a b} {expr $a+$b}}] which returns the sum. I wonder, if the bytecode is attached to the whole thing, or just to the body. If it's attached to the whole "{params} {body}" thing, then a simple [llength $function] would replace the bytecode-rep with the list-rep, and next time it's got to be recompiled. (otoh, why should someone treat the lambda as list? perhaps to check for a namespace? or use lindex as pendant to [info args/body]? not too likely to happen repeatedly enough to be an issue) If it's "compiled" to a list, only whose second item has the actual bytecode attached, then it could perhaps be recombined with a different parameter list, and result in unexpected behaviour ... Or is it done even differently (e.g. such that both list-information and lambda-information is maintained together? also the string-rep?) PS: I know, as a scripter I shouldn't care about these internals, but I'm just curious... Heh: the correct answer is RTFS, but you are so nice and polite about it ... what the hell. The whole [list $params $body ?$ns?] is converted to lambdaType. If you for instance request the llength, the type shimmers and the bytecompiled code is lost - it will be regenerated at the next usage. No, that will get byte-compiled. The bytecode is stored in the internal rep of the Tcl_Obj, not any particular variable. In other words, the Tcl_Obj used to represent {{x y} {set x ...}} gets a compiled proc stashed in its internal rep. This bytecode is generated the first time that value is passed to [apply] and then sticks around until someone messes with the internal rep. Note that originally, apply was going to take the arguments separately: [apply params body args...]. Miguel pointed out that the bytecode depends on both the script body *and* the parameter list, which is why they are now combined into a single argument, to give a handy place to stash the bytecode. -- Neil This isn't an artefact of lambdas, but rather that Tcl has a distinction between command and variable namespaces, and that it expects a command *name* not a command itself. Common Lisp is an example of another language which makes this distinction and also has this same problem: (defun adder (n) (lambda (x) (+ x n))) ((adder 1) 2) --> ERROR: illegal function call (funcall (adder 1) 2) --> 3 (apply (adder 1) '(2)) --> 3 Tcl's [apply] is roughly equivalent to CL's (funcall) or (apply). To get to something like Scheme or Haskell in Tcl, you could drop variables and use commands for everything: proc def {name = args} { interp alias {} $name {} {*}$args } def XX = puts XX "Hello, World!" def XX = apply {{x} {puts $x}} XX "Hello, World!" Although, of course you lose local variables! I also prefer the Scheme situation, but that's a bigger change to Tcl than just lambdas. -- Neil On the other hand, if you're combining with actual parameters then you are using the form: [list apply [list $params $body ?$ns?] $arg1 $arg2 ...] and in that case adding extra arguments to the outer list won't hurt. (It's the inner former-list that holds the bytecode inside it.) Donal. Not in Common Lisp or in any of the earlier Lisp dialects. 2-Lisps use (FUNCALL) to call procedure values. > But what do we have in Tcl? If Tcl is like Lisp at all, it's like a 2-Lisp. [apply] is Tcl's rough equivalent of Lisp's (FUNCALL). See also: You might prefer 1-Lisps (so do I), but Tcl just ain't like that, and it don't work that way. --Joe English
https://groups.google.com/g/comp.lang.tcl/c/NKxsvPrLx8I
CC-MAIN-2021-49
en
refinedweb
Getting started with GPAW¶ In this exercise we will calculate structures and binding energies for simple molecules. Performing a structure optimization¶ A structure optimization, also called a relaxation, is a series of calculations used to determine the minimum-energy structure of a given system. This involves multiple calculations of the atomic forces \(\mathbf F^a = -\tfrac{\partial E}{\partial \mathbf R^a}\) with respect to the atomic positions \(\mathbf R^a\) as the atoms are moved downhill according to an optimization algorithm. The following script uses the EMT calculator to optimize the structure of H2. from ase import Atoms from ase.calculators.emt import EMT from ase.optimize import QuasiNewton system = Atoms('H2', positions=[[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]]) calc = EMT() system.set_calculator(calc) opt = QuasiNewton(system, trajectory='h2.emt.traj') opt.run(fmax=0.05) This is the first ASE script we have seen so far, so a few comments are in order: - At the top is a series of import statements. These load the Python modules we are going to use. - An Atomsobject is created, specifying an initial (possibly bad) guess for the atomic positions. - An EMTcalculator is created. A calculator can evaluate quantities such as energies and forces on a collection of atoms. There are different kinds of calculators, and EMT is a particularly simple one. The calculator is associated with the Atomsobject by calling atoms.set_calculator(calc). - An optimizeris created and associated with the Atomsobject. It is also given an optional argument, trajectory, which specifies the name of a file into which the positions will be saved for each step in the geometry optimization. - Finally the call opt.run(fmax=0.05)will run the optimization algorithm until all atomic forces are below 0.05 eV per Ångström. Run the above structure optimization. This will print the (decreasing) total energy for each iteration until it converges, leaving the file h2.emt.traj in the working directory. Use the command ase gui to view the trajectory file, showing each step of the optimization. Structure optimization of H2O with EMT and GPAW¶ Adapt the above script as needed and calculate the structure of a H2O molecule using the EMT calculator. Note that water is not a linear molecule. If you start with a linear molecule, the minimization may not be able to break the symmetry. Be sure to visualize the final configuration to check that it is reasonable. The empirical EMT potential is fast, but not very accurate for molecules in particular. We therefore want to perform this calculation in GPAW instead. GPAW uses real-space grids to represent density and wavefunctions, and the grids exist in a cell. For this reason you must set a cell for the Atoms object. As a coarse value let us use a 6 Ångström cell: system.set_cell((6.0, 6.0, 6.0)) system.center() The cell must be centered in order to prevent atoms from lying too close to the boundary, as the boundary conditions are zero by default. Instead of importing and using EMT, we now use GPAW: from gpaw import GPAW ... calc = GPAW() ... Make a copy of your script and adapt it to GPAW, then recalculate the structure of H2O (make sure to choose a new filename for the trajectory file). During the calculation a lot of text is printed to the terminal. This includes the parameters used in the calculation: Atomic positions, grid spacing, XC functional (GPAW uses LDA by default) and many other properties. For each iteration in the self-consistency cycle one line is printed with the energy and convergence measures. After the calculation the energy contributions, band energies and forces are listed. Use ase gui to visualize and compare bond lenghts and bond angles to the EMT result. Bond lengths and angles are shown automatically if you select two or three atoms at a time. Atomization energies¶ Now that we know the structure of H2O, we can calculate other interesting properties like the molecule’s atomization energy. The atomization energy of a molecule is equal to the total energy of the molecule minus the sum of the energies of each of its constituent isolated atoms. For example, the atomization energy of H2 is \(E[\mathrm{H}_2] - 2 E[\mathrm H]\). GPAW calculations are by default spin-paired, i.e. the spin-up and spin-down densities are assumed to be equal. As this is not the case for isolated atoms, it will be necessary to instruct GPAW to do something different: calc = GPAW(hund=True) With the hund keyword, Hund’s rule is applied to initialize the atomic states, and the calculation will be made spin-polarized. Write a script which calculates the total energy of the isolated O and H atoms, and calculate the atomization energy of H2O. Exchange and correlation functionals¶ So far we have been using GPAW’s default parameters. The default exchange-correlation functional is LDA. This is not very accurate, and in particular overestimates atomization energies. You can specify different XC functionals to the calculator using GPAW(xc=name), where name is a string such as 'LDA', 'PBE' or 'RPBE'. Calculate the atomization energy of H2O with LDA and PBE (just reuse the geometry from the LDA optimization, i.e. do not repeat the minimization).
https://wiki.fysik.dtu.dk/gpaw/exercises/gettingstarted/gettingstarted.html
CC-MAIN-2019-09
en
refinedweb
Those of us who have to tiptoe around non-standard or ancient compilers will know that template template parameters are off limits. - Hubert Matthews [Matthews03] Dvbcodec fail Long ago, way back in 2004, I wrote an article for Overload [Guest04] describing how to use the Boost Spirit [Spirit] parser framework to generate C++ code which could convert structured binary data to text. I went on to republish this article on my website, where I also included a source distribution. Much has changed since then. The C++ language hasn't, but compiler and platform support for it has improved considerably. Boost survives - indeed, many of its libraries will feed into the next version of C++. Overload thrives, adapting to an age when print programming magazines are all but extinct. My old website can no longer be found. I've changed hosting company and domain name, I've shuffled things around more than once. But you can still find the article online if you look hard enough, and recently someone did indeed find it. He, let's call him Rick, downloaded the source code archive, dvbcodec-1.0.zip [DVBcodec], extracted it, scanned the README, typed: $ make ... and discovered the code didn't even build. At this point many of us would assume (correctly) the code had not been maintained. We'd delete it and write off the few minutes it took to evaluate it. Rick decided instead to contact me and let me know my code was broken. He even offered a fix for one problem. Code rot Sad to say, I wasn't entirely surprised. I no longer use this code. Unused code stops working. It decays. I'm not talking about a compiled executable, which the compiler has tied to a particular platform, and which therefore progressively degrades as the platform advances. (I've heard stories about device drivers for which the source code has long gone, and which require ever more elaborate emulation layers to keep them alive.) I'm talking about source code. And the decay isn't usually literal, though I suppose you might have a source listing on a mouldy printout, or on an unreadable floppy disk. No, the code itself is usually a pristine copy of the original. Publishers often attach checksums to source distributions so readers can verify their download is correct. I hadn't taken this precaution with my dvbcodec-1.0.zip but I'm certain the version Rick downloaded was exactly the same as the one I created 5 years ago. Yet in that time it had stopped working. Why? Standard C++ As already mentioned, this was C++ code. C++ is backed by an ISO standard, ratified in 1998, with corrigenda published in 2003. You might expect C++ code to improve with age, compiling and running more quickly, less likely to run out of resources. Not so. My favourite counter-example comes from a nice paper 'CheckedInt: A policy-based range-checked integer' published by Hubert Matthews towards the end of 2003 [Matthews03], which discusses how to use C++ templates to implement a range-checked integer. The paper includes a code listing together with some notes to help readers forced to 'tiptoe around non-standard or ancient compilers' (think: MSVC6). Yet when I experimented with this code in 2005 I found myself tripped up by a strict and up-to-date compiler (see Figure 1). I emailed Hubert Matthews using the address included at the top of his paper. He swiftly and kindly put me straight on how to fix the problem. What's interesting here is that this code is pure C++, just over a page of it. It has no dependencies on third party libraries. Hubert Matthews is a C++ expert and he acknowledges the help of two more experts, Andrei Alexandrescu and Kevlin Henney, in his paper. Yet the code fails to build using both ancient and modern compilers. In its published form it has a brief shelf-life. Support rot Code alone is of limited use. What really matters for its ongoing health is that someone cares about it - someone exercises, maintains and supports it. Hubert Matthews included an email address in his paper and I was able to contact him using that address. How well would my code shape up on this front? Putting myself in Rick's position, I unzipped the source distribution I'd archived 5 years ago. I was pleased to find a README which, at the very top, shows the URL for updates,. I was less pleased to find this URL gave me a 404 Not Found error. Similarly, when I tried emailing the project maintainer mentioned in the README, I got a 550 Invalid recipient error: the attempted delivery to thomas.guest@ntlworld.com had failed permanently. Cool URIs don't change [W3C] but my old NTL home was anything but cool; it came for free with a dial-up connection I've happily since abandoned. Looking back, maybe I should have found the code a more stable location. If I'd created (e.g.) a Sourceforge project then my dvbcodec project might still be alive and supported, possibly even by a new maintainer. How did this ever compile? These wise hindsights wouldn't fix my code. If I wanted to continue I'd have to go it alone. Figure 2 is what the README had to say about platform requirements. A 'good C++ compiler', eh? As we've already seen, GCC 3.3.1 may be good but my platform has GCC 4.0.1 installed, which is better. If my records can be believed, this upperCase() function (see Listing 1) compiled cleanly using GCC 3.3.1 and MSVC 7.1. Huh? Std::string is a typedef for std::basic_string<char> and, as GCC 4.0.1 says, there's no such thing as a std::basic_string<char><char>::iterator: stringutils.cpp:58: error: 'std::string' is not a template The simple fix is to write std::string::iterator instead of std::string<char>::iterator. A better fix, suggested by Rick, is to use std::transform(). I wonder why I missed this first time round? (See Listing 2.) Boost advances GCC has become stricter about what it accepts even though the formal specification of what it should do (the C++ standard) has stayed put. The Boost C++ libraries have more freedom to evolve, and the next round of build problems I encountered relate to Boost.Spirit's evolution. Whilst it would be possible to require dvbcodec users to build against Boost 1.31 (which can still be downloaded from the Boost website) it wouldn't be reasonable. So I updated my machine (using Macports) to make sure I had an up to date version of Boost, 1.38 at the time of writing. $ sudo port upgrade boost Boost's various dependencies triggered an upgrade of boost-jam, gperf, libiconv, ncursesw, ncurses, gettext, zlib, bzip2, and this single command took over an hour to complete. I discovered that Boost.Spirit, the C++ parser framework on which dvbcodec is based, has gone through an overhaul. According to the change log the flavour of Spirit used by dvbcodec is now known as Spirit Classic. A clever use of namespaces and include path forwarding meant my 'classic' client code would at least compile, at the expense of some deprecation warnings (Figure 3). To suppress these warnings I included the preferred header. I also had to change namespace directives from boost::spirit to boost::spirit::classic. I fleetingly considered porting my code to Spirit V2, but decided against it: even after this first round of changes, I still had a build problem. Changing behaviour Actually, this was a second level build problem. The dvbcodec build has multiple phases (Figure 4): - it builds a program to generate code. This generator can parse binary format syntax descriptions and emit C++ code which will convert data formatted according to these descriptions - it runs this generator with the available syntax descriptions as inputs - it compiles the emitted C++ code into a final dvbcodec executable I ran into a problem during the second phase of this process. The dvbcodec generator no longer parsed all of the supplied syntax descriptions. Specifically, I was seeing this conditional test raise an exception when trying to parse section format syntax descriptions. if (!parse(section_format, section_grammar, space_p).full) { throw SectionFormatParseException( section_format); } Here, parse is boost::spirit::classic::parse, which parses something - the section format syntax description, passed as a string in this case - according to the supplied grammar. The third parameter, boost::spirit::classic::space_p, is a skip parser which tells parse to skip whitespace between tokens. Parse returns a parse_info struct whose full field is a boolean which will be set to true if the input section format has been fully consumed. I soon figured out that the parse call was failing to fully consume binary syntax descriptions with trailing spaces, such as the the one shown below. " program_association_section() {" " table_id 8" " section_syntax_indicator 1" " '0' 1" .... " CRC_32 32" " } " If I stripped the trailing whitespace after the closing brace before calling parse() all would be fine. I wasn't fine about this fix though. The Spirit documentation is very good but it had been a while since I'd read it and, as already mentioned, my code used the 'classic' version of Spirit, in danger of becoming the 'legacy' then 'deprecated' and eventually the 'dead' version. Re-reading the documentation it wasn't clear to me exactly what the correct behaviour of parse() should be in this case. Should it fully consume trailing space? Had my program ever worked? I went back in time, downloading and building against Boost 1.31, and satisfied myself that my code used to work, though maybe it worked due to a bug in the old version of Spirit. Stripping trailing spaces before parsing allowed my code to work with Spirit past and present, so I curtailed my investigation and made the fix. (Interestingly, Boost 1.31 found a way to warn me I was using a compiler it didn't know about. boost_1_31_0/boost/config/compiler/gcc.hpp:92:7: warning: #warning "Unknown compiler version - please run the configure tests and report the results" I ignored this warning.) Code inaction Apologies for the lengthy explanation in the previous section. The point is that few software projects stand alone, and that changes in any dependencies, including bug fixes, can have knock on effects. In this instance, I consider myself lucky; dvbcodec's unusual three phase build enabled me to catch a runtime error. Of course, to actually catch that error, I needed to at least try building my code. Put more simply: if you don't use your code, it rots. Rotten artefacts It wasn't just the code which had gone off. My source distribution included documentation - the plain text version of the article I'd written for Overload - and the Makefile had a build target to generate an HTML version of this documentation. This target depended on Quickbook, another Boost tool. Quickbook generates Docbook XML from plain text source, and Docbook is a good starting point for HTML, PDF and other standard output formats. This is quite a sophisticated toolchain. It's also one I no longer use. Most of what I write goes straight to the web and I don't need such a fiddly process just to produce HTML. So I decided to freshen up dead links, leave the original documentation as a record, and simply cut the documentation target from the Makefile. Stopping the rot As we've seen, software, like other soft organic things, breaks down over time. How can we stop the rot? Freezing software to a particular executable built against a fixed set of dependencies to run on a single platform is one way - and maybe some of us still have an aging Windows 95 machine, kept alive purely to run some such frozen program. A better solution is to actively tend the software and ensure it stays in shape. Exercise it daily on a build server. Record test results. Fix faults as and when they appear. Review the architecture. Upgrade the platform and dependencies. Prune unused features, splice in new ones. This is the path taken by the Boost project, though certainly the growth far outpaces any pruning (the Boost 1.39 download is 5 times bigger than its 1.31 ancestor). Boost takes forwards and backwards compatibility seriously, hence the ongoing support for Spirit classic and the compiler version certification headers. Maintaining compatibility can be at odds with simplicity. There is another way too. Although the dvbcodec project has collapsed into disrepair the idea behind it certainly hasn't. I've taken this same idea - of parsing formal syntax descriptions to generate code which handles binary formatted data - and enhanced it to work more flexibly and with a wider range of inputs. Whenever I come across a new binary data structure, I paste its syntax into a text file, regenerate the code, and I can work with this structure. Unfortunately I can't show you any code (it's proprietary) but I hope I've shown you the idea. Effectively, the old C++ code has been left to rot but the idea within it remains green, recoded in Python. Maybe I should find a way to humanely destroy the C++ and all links to it, but for now I'll let it degrade, an illustration of its time. Is it possible that software is not like anything else, that it is meant to be discarded: that the whole point is to see it as a soap bubble? Alan J. Perlis Thanks I would like to thank to Rick Engelbrecht for reporting and helping to fix the bugs discussed in this article. My thanks also to the team at Overload for their expert help. References [DVBcodec] Download of the DVBcodec is available from: [Guest04] Thomas Guest, 'A Mini-project to Decode a Mini-language - Part One', Overload #63, October 2004. Available from: [Matthews03] Hubert Matthews, 'CheckedInt: A Policy-Based Range-Checked Integer', Overload #58, December 2003. Available from: [Spirit] 'Spirit User's Guide' Available from: [W3C] 'Cool URIs don't change' Available from:
https://accu.org/index.php/journals/1573
CC-MAIN-2019-09
en
refinedweb
In one of Quarkslab's projects, we came across the issue of randomizing a large set of integers, described as a list of disjoint intervals. These intervals can be represented as a sorted list of integers couples, like this one: \([1, 4], [10, 15], [17, 19], \dots\). The idea is to randomly and uniquely select numbers across these intervals, giving a shuffled list of numbers that belong to them. For instance, \([1,10,18,4,3,11,15,17,19,12,14,13,2]\) is a possible output. Moreover, each possible permutation of the integers set should have equal probability of appearance. If you're just interested in the final library that "do the job", go directly to the implementation section to download the leeloo C++ open-source library on Github ! Trivial algorithm The not-so-trivial (but still) algorithm is to generate an array containing all the original sorted integers, and then apply a shuffle algorithm (like Fisher–Yates [1]) that uses a common Pseudo Random Number Generator (PRNG). As an example, in C++, std::shuffle can be used to do that. The main issue is that a buffer of n integers is required. For instance, with \(2^{31}\) 32-bit integers, one needs a buffer of 8GB, which is not acceptable in our situation. Other trivial algorithm Another approach to reduce the memory footprint is to randomly select numbers between \([\![0, n [\![\) (using a classical PRNG), and keep a bitfield of already returned candidates not to return twice the same. When we start to reach too many times the same numbers, we change the algorithm: - With \(R\) the remaining numbers of candidates to find, get a random number \([\![0, R [\![\) and find the position of the R-th bit not set in the bitfield. That can be optimized thanks to SSE instructions ; - Set that bit and return the value ; - Go on with \(R=R-1\) until \(R=0\). There are multiple drawbacks with this algorithm: - It still needs \(O(n)\) memory bytes (even if it is less than the previous algorithm) ; - The final stage can be really slow if R is such that the remaining bitfield does not fit in cache. See also [5] for a description of a similar algorithm. Problem reduction Thus, the main issue is to generate a list of unique random numbers between a given \([\![0, n [\![\) interval, with good performances (say about 50 million numbers per seconds on a Core i7 3rd gen) and a small memory footprint (\(O(1)\)). So, the final problem is to be able to choose in an equiprobable manner a permutation of \([\![0, n [\![\) among the \(n!\) ones (\(n!\) being the number of permutations of \(n\) distinct numbers [4]), using only \(O(1)\) bytes of memory (keeping in mind the performance criteria). The first question is to understand if this is even feasible, and the second issue is to figure out an efficient method to achieve this. Formalization Let's do some math to formalize this problem. Some context: let \(n < 2^{32}\) and \(\{i \in [\![0,n[\![\}\) the numbers' set to shuffle ; \(n\) is always chosen as a prime number. The choice of \(n\) as a prime induces interesting properties (as it is shown below), but the careful reader would notice that we won't always have a prime number of integers to generate. However, we can still live with that. Indeed, let \(n\) the original number of integers to generate and \(p\) chosen as: - \(p\) is prime ; - \(p \geq n\) ; - \(\forall i \in ⟧n,p[\![\), \(i\) is not a prime. Or, in other words, \(p\) is the smallest prime number greater than or equals to \(n\). That way, we will produce numbers between \([\![0,p[\![\), and not \([\![0,n[\![\). It is not really an issue because: - when a number in \([\![n,p[\![\) is generated, just discard it and compute the next one until it belongs to \([\![0,n[\![\) ; - the density of prime numbers in \([\![0,2^{32}[\![\) allows us to do that, as the maximal gap between two consecutive prime numbers is 354 [2]. We will now work in \(F_p = \mathbb{Z}/p\mathbb{Z}\). \(p\) being a prime, \(F_p\) is a division ring [3] (that's the great property). Then, with \(S_p\) the set of permutations of \(F_p\), our problem is equivalent to choose with equal probability a permutation in \(S_p\). (Partial) resolution All of that theory is nice, but it does not change a lot of things in concrete. Let's go now in the crux of the issue, and understand what can be done with \(S_p\) :) Permutation polynomial One can notice that every application \(F_p \rightarrow F_p\) can be written as a polynomial of \(F_p[X]\). Indeed, for instance, given \(F\) an application, a chebytchev polynomial equivalent to \(F\) can always be found. Thus, every element of \(S_p\) can be described as a polynomial of \(F_p[X]\). Trivial algorithm That way, one (still not-so-trivial ;)) algorithm would be: - Generate a random polynomial of \(F_p[X]\). This is equivalent to compute \(p\) random coefficients ; - Check if this polynom represents a permutation ; - If not, go back to the first step. But wait... There are multiple issues here. First, these \(p\) coefficients need to be stored in memory, giving a memory footprint of \(O(p)\) bytes. Moreover, the problem of checking whether a polynomial is a permutation one or not can be somehow complex and slow. Probabilistic methods exist (shown in [6]), but it still leaves us with some potential errors. The performance cost of all of this could be important. We didn't take the time to benchmark this algorithm as it suffers from the \(O(p)\) memory issue... And finally, left to generate a buffer of \(p\) integers, we could just stick to the first "trivial" algorithm described at the beginning of this paper. The real great stuff We need to find a better way to generate these polynomials. We will use the \(F_p\) division ring properties. Indeed, it can be demonstrated that, in \(F_p\), every permutation is a bijection, and every bijection is a permutation. Thus, a whole set of polynomials can be described: - for every \((a,b) \in (F_p^*,F_p), X \mapsto a*X+b\) is a bijection, and thus belongs to \(S_p\) (Equation 1) - for every \(c\) such as \(gcd(c,p-1) = 1\), \(X^c\) is also a bijection [6], and belongs to \(S_p\) (Equation 2). Moreover, as the combination of two bijection functions is a bijection, combining these two sets of polynomials will produce new ones. What's even more interesting is that it can be demonstrated that, for every \(a \in F_p* \{X+1, a*X, X^{p-2}\}\) is a generator of the \(S_p\) group, using the composition law. [6] So, the final result is that, theoretically, every permutation of \(S_p\) can be defined as a combination of these polynomials aforementioned. Entropy and equiprobability: how random is random? For the following, we will defines three sets : - \(G_a = F_p^*\), the values that can take a in (Equation 1) ; - \(G_b = F_p\), the values that can take b in (Equation 1) ; - \(G_c = \{c \in F_p \ / \ gcd(c,p-1)=1\}\), the values that can take \(c\) in (Equation 2). Let's define these two applications: \begin{align*} L : G_a \times G_b &\rightarrow S_p\\ (a,b) &\mapsto X \mapsto a*X + b \end{align*} \begin{align*} G : G_c &\rightarrow S_p\\ c &\mapsto X \mapsto X^c \end{align*} The first idea coming to our mind is to randomly combine the polynomials generated by these applications. Let's define \(GS_p\) (\(S_p\) stands for 'seed part') as \(G_a \times G_b \times G_c\). For instance, let's randomly choose \(S0=(a_0,b_0,c_0) \in GS_p\) and \(S1=(a_1,b_1,c_1) \in GS_p\). The couple \((S0,S1) \in GS=GS_p \times GS_p\) can be considered as the seed of our random number generator. We know that \(L(a_0,b_0) \circ L(a_1,b_1)\) is a permutation polynomial, \(L(a_0,b_0) \circ G(c_0)\) is another one, \(G(c_0) \circ G(c_1)\) also, etc... (Note: \(\circ\) is the function composition, which means that, for instance, \((L(a,b) \circ G(c))(X) = a*X^c+b\)) Thus, every 'seed' values that belongs to \(GS\) can produce a set of permutations. Unfortunately, there are main issues with this approach, that we will call "entropy reduction". Indeed, we know that we can create permutations by composing \(L(a_0,b_0), L(a_1,b_1), G(c_0)\) and \(G(c_1)\), but : - \(L(a_0,b_0) \circ L(a_1,b_1)\) can also be expressed as \(L(a_0*a_1, b_1*a_0+b_0)\). In other words, a combination of affine functions is an affine function). Moreover, as shown in Appendix A, choosing independently \(a_0\) and \(a_1\) in \(G_a\) and computing \(a_0*a_1\) is equivalent to randomly choose a number in \(G_a\). The same goes with \(b_1*a_0+b_0\). Thus, if we choose one seed \((S_0,S_1) \in (G_a \times G_b)^2\) and compute \(L(a_0,b_0) \circ L(a_1,b_1)\), this is equivalent to choose a seed \(S_0 \in (G_a \times G_b)\) ; - The same issue comes with \(G(c_0) \circ G(c_1)\), which is equals to \(G(c_0*c_1)\) ; - Even by combining \(L(a_0,b_0) \text{ with } G(c_0)\), then with \(L(a_1,b_1)\) and \(G(c_1)\), (giving \(L(a_1,b_1) \circ G(c_1) \circ L(a_0,b_0) \circ G(c_0)\)), the following question must be answered: \begin{align*} \text{With } GS' = GS \times GS,\\ UPRNG: GS' &\rightarrow S_p\\ (a_0,b_0,c_0,a_1,b_1,c_1) &\mapsto L(a_1,b_1) \circ G(c_1) \circ L(a_0,b_0) \circ G(c_0) \end{align*} is there any couple \((S_0,S_1) \in GS'xGS'\) such as \(UPRNG(S_0) = UPRNG(S_1)\) ? Another way to formalize this problem is as follows: given a seed taking values in a space \(S \text{ of } s\) integers from \(\mathbb{Z}/p\mathbb{Z}\) (\(s\) unknown), is: \begin{align*} UPRNG : S &\rightarrow S_p\\ seed &\mapsto \text{method to generate a permutation polynomial} \end{align*} a bijective function? Now, let's demonstrate a somehow intuitive result. If \(F\) is a bijection, then \(\|S\| = \|S_p\|\), which gives \(s = p!\). This means that, in order to generate a random permutation of \(S_p\), we must choose a seed number between the \(p!\) ones. In other words, we must choose \(p\) unique random numbers. Well, this has just sent us back to the beginning of this article. Our method for compromises But the game is not yet finished, we haven't gone this far for nothing. So let's work a bit with our results. We now understand that, somehow, some compromises have to be made. We know that the size of the seed must be reduced. By doing this, we know that we won't be able to uniquely generate all the possible permutations of \(S_p\). Moreover, we want to do this in such a way that these properties will be conserved the best way: - we still reach a fairly "reasonable" amount of permutations among \(S_p\) ; - all these permutations are unique (or a "lot of" them) ; - all of this has still "good" performances (we haven't talk yet a lot about this one, but we don't forget it :)). At this point, we decided to study the following UPRNG (named \(UPRNGcomp\)): \begin{align*} \text{With } GS = G_a \times G_b \times G_c \times N^*,\\ UPRNGcomp : GS &\rightarrow S_p\\ (a,b,c,n) &\mapsto (G(c) \circ L(a,b))^n \end{align*} This choice is made because it produces a function that can be easily computed, and can still give interesting results. Number of generated permutations If \(n\) is randomly chosen in \([\![1,N[\![\), then the number of generated permutations with this method, is : \(p*(p-1)*Phi(p-1)*N\) (with \(Phi\) the Euler totient function [3]). As we've seen above, the number of unique generated permutations may be inferior to this. Thus, if we have for instance \(n=2\), we can search for the set of \(seeds \in GS\) for which same UPRNG is the same. Let - \((S_0,S_1) \in G_a \times G_b \times Gc\) ; - \(S_0 = (a_0,b_0,c_0)\) ; - \(S_1 = (a_1,b_1,c_1)\). We need to resolve: The complete resolution of this equation being a bit human-time consuming, we'll do it with \(c_0=c_1=3\), and using mathematical software, we can find these solutions : - obviously, \(\{a_0=a_1, b_0=b_1\}\) ; - and \(\{a_0=p-a_1, b_0=b_1=0\}\). Which means than, when \(b_0=b_1=0\), only half the numbers of possible values for \(a\) will give a unique permutation. By the way, this proves the fact that our UPRNG function isn't bijective. We can test this easily with \(p=17\). \(gcd(3,17)\) being equals to 1, we can define: \begin{align*} UPRNG: G_a \times G_b \times Gc &\rightarrow S_p\\ (a,b) &\mapsto G(3) \circ L(a,b) \circ G(3) \circ L(a,b) \end{align*} And this python code: def l(x,a,b,p): return (a*x+b)%p def g(x,c,p): return (x**c)%p def lgn(x,a,b,c,p,n): for i in xrange(0,n): x = g(l(x,a,b,p),c,p) return x list_x0 = list() list_x1 = list() p = 17 a = 5 b = 0 c = 3 n = 2 for x in range(0,p): list_x0.append(lgn(x, a, b, c, p, 2)) list_x1.append(lgn(x, p-a, b, c, p, 2)) print(list_x0) print(list_x1) Which gives: [0, 4, 8, 5, 16, 14, 10, 6, 15, 2, 11, 7, 3, 1, 12, 9, 13] [0, 4, 8, 5, 16, 14, 10, 6, 15, 2, 11, 7, 3, 1, 12, 9, 13] One possible solution is to reduce the space of \(G_a\), for instance with \(G_a=[1,\frac{p-1}{2}]\). But, without the full resolution of the (equation 1), this is just a partial resolution. Going further with another UPRNG If we want to reduce the "entropy reduction", we need to improve the size of the seed values. For instance, this UPRNG could be defined as: \begin{align*} UPRNG2 : GS = G_a \times G_b \times G_a \times G_b \times G_c \times N^* &\rightarrow S_p\\ (a_0,b_0,a_1,b_1,c,n) &\mapsto (L(a_1,b_1) \circ G(c_1) \circ L(a_0,b_0))^n \end{align*} As above, we try to find \((S_0,S_1) \in GS\) such as \(UPRNG2(S_0) = UPRNG2(S_1)\). Using mathematical resolution software, with \(n=1 \text{ and } c=9\) (for instance), this gives the following solutions: Let - \(S_0=(a_0,b_0,a_1,b_1)\) ; - \(S_1=(a_2,b_2,a_3,b_3)\). We have: - \(S_0=S_1\) (trivial) ; - \(\{a_1 = a_3*a_2^9*(a_0^{-1})^9, b_0 = b_2*a_0*(a_2^{-1}), b_1 = b_3\}\). which gives constraints on the choice of our constants in order to try and have \(UPRNG2\) bijective. The resolution of the full system is left for further work on the subject ;) Implementation and benchmarks The implementaion of what's described here (and more) has been done in the C++ "leeloo" open-source library that you can find on github here :. It also provides python bindings for python fans around here. The library allows to manage integer intervals, aggregate them and randomly sort the elements as described in the introduction. It also provides an IPv4 range parser for convenience usage. There are two main UPRNG implemented: - one that uses the method described here [5]. This one is historical, optimised with SSE/AVX instructions and "fast" (see figures below) ; - one that uses \(URPNGcomp\). It is 8 to 14 times slower that the original one (due to the modular exponentation), but provides a larger possible set of permutations. Moreover, each UPRNG can be instantiated in "atomic" mode, which makes them thread-safe. Some figures about performances : on a Core i7-3770 (3.4GHz, 4 cores with Hyperthreading), we obtain: - with the first UPRNG, we can generate, with the SSE/AVX and parallelised version, about 290 millions of 32-bit numbers per second. This makes a memory bandwidth of about 1.2GB/s, making this generator CPU-bound (for now) ; - with the second UPRNG, we can generate, with the parallelised version, about 30/n million numbers/s ('n' being the part of the seed that defines the number of compositon of \(G \circ L\).). This is because the performance of this generator is limited mainly by the modular exponentation computations. This generator is also clearly CPU-bound. C++ and Python usage samples can be found on github at : and. Conclusion Giving some compromises, we find a solution to our original problem that is actually good enough for our project needs. We still are a bit frustrated not to have the actual time to go further in this subject. There exists other ways that haven't been studied here to generate permutation polynomial, as for instance described on this wikipedia page :. This is also another interesting work that could be done :) It can also be mentioned that some people already looked into the subject and published articles. For instance, [5] uses the quadratic residues with prime numbers. It can be noticed that the permutation given by this method can also be expressed as a permutation polynomial (but involves more computations). Finally, thanks to Sebastien Kaczmarek (@deesse_k) for the original talks on the subject (and other ideas), to Ninon Eyrolles for her help on the redaction and some of the mathematics here and Kévin Szkudlapski for his advices. Going further For the reader that might want to go further, here are some ideas: - Work on UPRNG2 ; - For the described UPRNGs, find out the number of unique permutations ; - Benchmark and analyze other ways to generate permutation polynmials ; - Something that would be nice: given a seed space \(S\) of size \(s\), find out \(S\) and a subset of \(S_p\) such as: \begin{align*} F : S &\rightarrow \text{subset of }S_p\\ seed &\mapsto F(seed) \end{align*} is a bijection. You're welcome to send us feedbacks :). Appendix A Let: - \(p\) a prime number ; - \(F_p = \mathbb{Z}/p\mathbb{Z}\) (which is a division ring) ; - \(X \text{ and } Y\) two independant random variables of \(F_p\). First, we have: \begin{align*} P(X=x) = \frac{1}{p},\\ P(Y=y) = \frac{1}{p} \end{align*} We have, for every \(n \in F, and \(y=n-x\) is unique for a fixed \(x\). The number of \((x,y) \in F_p^2\) such as \(x+y=n\) is then \(p\). So, which means that choosing independently two random numbers in \(F_p\) and sum the two of them is equivalent to only choose one random number in \(F_p\). Let's demonstrate the same result with \(X. As \(F_p\) is a division ring, we have \(y=n*x^{-1}\) which is unique for a given \(x\). So, the number of \((x,y) \in F_p^2\) such as \(x*y=n \text{ is }p\), and If we take these two results, and let \(X, Y \text{ and } Z \in F_p\), \begin{align*} P(X*Y+Z=n) &= \sum_{x*y+z}(P(X=x)*P(Y=y)*P(Z=z))\\ &= \sum_{x*y+z=n} \frac{1}{p^3} \end{align*} Let \(y\) and \(z \in F_p\), then \(x=(n-z)*y^{-1}\) exists and is unique for a given \(y\) and \(z\). Thus, the number of \((x,y,z) \in F_p^3\) such as \(x*y+z=n\) is \(p^2\), and:
https://blog.quarkslab.com/unique-random-number-set-computation.html
CC-MAIN-2019-09
en
refinedweb
14.2. Drawing flight routes with Network load and visualize a dataset containing many flight routes and airports around the world (obtained from the OpenFlights website at). Getting ready To draw the graph on a map, you need cartopy, available at. You can install it with conda install -c conda-forge cartopy. How to do it... 1. Let's import a few packages: import math import json import numpy as np import pandas as pd import networkx as nx import cartopy.crs as ccrs import matplotlib.pyplot as plt from IPython.display import Image %matplotlib inline 2. We load the first dataset containing many flight routes: names = ('airline,airline_id,' 'source,source_id,' 'dest,dest_id,' 'codeshare,stops,equipment').split(',') routes = pd.read_csv( '' 'cookbook-2nd-data/blob/master/' 'routes.dat?raw=true', names=names, header=None) routes 3. We load the second dataset with details about the airports, and we only keep the airports from the United States: names = ('id,name,city,country,iata,icao,lat,lon,' 'alt,timezone,dst,tz,type,source').split(',') airports = pd.read_csv( '' 'cookbook-2nd-data/blob/master/' 'airports.dat?raw=true', header=None, names=names, index_col=4, na_values='\\N') airports_us = airports[airports['country'] == 'United States'] airports_us The DataFrame index is the IATA code, a 3-characters code identifying the airports. 4. Let's keep all national US flight routes, that is, those for which the source and the destination airports belong to the list of US airports: routes_us = routes[ routes['source'].isin(airports_us.index) & routes['dest'].isin(airports_us.index)] routes_us 5. We construct the list of edges representing our graph, where nodes are airports, and two airports are connected if there exists a route between them (flight network): edges = routes_us[['source', 'dest']].values edges array([['ADQ', 'KLN'], ['KLN', 'KYK'], ['BRL', 'ORD'], ..., ['SOW', 'PHX'], ['VIS', 'LAX'], ['WRL', 'CYS']], dtype=object) 6. We create the networkX graph from the edges array: g = nx.from_edgelist(edges) 7. Let's take a look at the graph's statistics: len(g.nodes()), len(g.edges()) (546, 2781) There are 546 US airports and 2781 routes in the dataset. 8. Let's plot the graph: fig, ax = plt.subplots(1, 1, figsize=(6, 6)) nx.draw_networkx(g, ax=ax, node_size=5, font_size=6, alpha=.5, width=.5) ax.set_axis_off() 9. There are a few airports that are not connected to the rest of the airports. We keep the largest connected component of the graph as follows (the subgraphs returned by connected_component_subgraphs() are sorted by decreasing size): sg = next(nx.connected_component_subgraphs(g)) 10. Now, we plot the largest connected component subgraph: fig, ax = plt.subplots(1, 1, figsize=(6, 6)) nx.draw_networkx(sg, ax=ax, with_labels=False, node_size=5, width=.5) ax.set_axis_off() The graph encodes only the topology (connections between the airports) and not the geometry (actual positions of the airports on a map). Airports at the center of the graph are the largest US airports. 11. We're going to draw the graph on a map, using the geographical coordinates of the airports. First, we need to create a dictionary where the keys are the airports IATA codes, and the values are the coordinates: pos = {airport: (v['lon'], v['lat']) for airport, v in airports_us.to_dict('index').items()} 12. The node sizes will depend on the degree of the nodes, that is, the number of airports connected to every node: deg = nx.degree(sg) sizes = [5 * deg[iata] for iata in sg.nodes] 13. We will also show the airport altitude as the node color: altitude = airports_us['alt'] altitude = [altitude[iata] for iata in sg.nodes] 14. We will display the labels of the largest airports only (at least 20 connections to other US airports): labels = {iata: iata if deg[iata] >= 20 else '' for iata in sg.nodes} 15. Finally, we use cartopy to project the points on the map: # Map projection crs = ccrs.PlateCarree() fig, ax = plt.subplots( 1, 1, figsize=(12, 8), subplot_kw=dict(projection=crs)) ax.coastlines() # Extent of continental US. ax.set_extent([-128, -62, 20, 50]) nx.draw_networkx(sg, ax=ax, font_size=16, alpha=.5, width=.075, node_size=sizes, labels=labels, pos=pos, node_color=altitude, cmap=plt.cm.autumn) See also - Manipulating and visualize graphs with NetworkX - Manipulating geospatial data with Cartopy
https://ipython-books.github.io/142-drawing-flight-routes-with-networkx/
CC-MAIN-2019-09
en
refinedweb
Hacking Oscar! March 23, 2005 XQuery is a rich and expressive language. I love exploring the types of questions you can pose using it. In fact, I enjoy exploring the types of queries you can pose almost as much as I enjoy discovering what those queries can discover (if you can parse that sentiment). I realized around Academy Awards time a year ago that the Oscars® were a rich and exciting domain that seemed to be crying out for XQuery exploration. Think about it. Think of all the Oscar trivia sites on the web, and the newspaper columns that were appearing just a few short weeks ago, all focused on this year's awards. They're full of questions like: - What are the two most nominated films of all time? ("All About Eve" and "Titanic") - How many nominations did they each receive? (14) - What are the three movies that have won the most awards? ("Ben-Hur," "Titanic," and "The Lord of the Rings: The Return of the King") - How many awards did they each win? (11) - How many actors (male and female both) have been nominated for both Leading and Supporting roles in the same year? (10, including Jamie Foxx at this year's awards) - Which director has been nominated five times for Best Director but has never won? (Martin Scorsese) Reading through questions like these, I suddenly had a minor epiphany. I realized that, given XQuery and a suitable XML database of Academy Award information, I'd be able to ask and answer all those questions myself. What power! Even better, I'd be able to make up trivia questions of my own, limited in scope only by my imagination and creativity. I started getting excited thinking about XPaths. (Hey, it's better than playing on the freeway!) I decided that automating an Oscars trivia database would be an interesting and challenging project. Once I started playing around with hypothetical queries that could be posed against such a database, I quickly realized that the number of such hypotheticals was huge. And the database would be useful for many more things than simply asking and answering trivia questions. What about statistical analyses, say, of the factors correlating nominations and winners? What about "six degrees of separation"-type questions, but in an Oscars domain? What about adding Academy Award-based relationships to the semantic web? I realized I'd probably be able to come up with some good trivia questions and ideas for interesting research. However, if I could provide a web-based front-end that made the data available via a query interface to other people as well, they'd probably be able to come up with far better trivia questions and research ideas than I ever could on my own. In short, thus was born many a sleepless night. Once I'd decided to proceed, two questions immediately arose: What would such a database look like, and once I'd designed it, where would I get the data? Structuring the Data I thought about my requirements. Given the richness of the query domain, I realized I'd probably be doing a lot of ad hoc exploration at the keyboard. I decided that one of my main criteria would be query concision: I'd be doing a lot of typing, and the fewer the number of keystrokes I had to enter, the better. This meant I'd probably also want to have a fairly simple schema. Sitting at the keyboard, I didn't want to have to deal with complex structures or remembering a large number of attribute and/or element names. Happily, it quickly became evident that every Academy Award nomination has at heart an exceedingly simple structure. Every nomination associates, in addition to the name of the award and the year it was awarded, just two basic items: A motion picture, and one or more people involved in that picture's production. Two of the key elements in my schema would thus be <picture> and <person>. The role each <person> played in a particular nomination would be determined by the award category: In the case of a Best Picture nomination, for example, there might be multiple <person> entries, each one being a producer (and given Hollywood custom, there might be thousands of those :-), while in the case of Best Actor, the single <person> associated with each nomination would be the actor him- (or her-) self. If the award were for cinematography, the <person> would be a cinematographer. And so on. I couldn't think of a structure much simpler than that. Being able to notate winners and losers was also important. Each of the bulleted questions above, for example, asks either directly or indirectly about a competitive result: Who was nominated, and who won and lost? So I decided that while there are a number of honorary and technical achievement awards given each year that don't have clear winners and losers (the Irving Thalberg Memorial award, the Scientific and Engineering Award, and the Jean Hersholt Humanitarian Award, to name just three), I wasn't interested in those and thus wouldn't attempt to be authoritative about everything Oscar. I'd let other sites enumerate that type of information; I just wanted to be able to ask, in interesting ways, who had won and who had lost in particular categories. The schemas I came up were all minor variations on a basic structure. Here's one showing the data for Best Actor for the 77th Academy Awards just held (or best performance by an actor in a leading role, as the Academy of Motion Picture Arts and Sciences likes to put it). I figured some hands-on querying would quickly show me whether this was a reasonable format or not. If it wasn't, no big deal: I could easily use XQuery to transform this structure into something more suitable. <award year="2004"> <actor><won> <person>Jamie Foxx</person> <picture>Ray</picture></won></actor> <actor><lost> <person>Don Cheadle</person> <picture>Hotel Rwanda</picture></lost></actor> <actor><lost> <person>Johhny Depp</person> <picture>Finding Neverland</picture></lost></actor> <actor><lost> <person>Leonardo DiCaprio</person> <picture>The Aviator</picture></lost></actor> <actor><lost> <person>Clint Eastwood</person> <picture>Million Dollar Baby</picture></lost></actor> </award> You'll notice I'm using elements in several instances where you might typically expect to find attributes ( <actor> as opposed to <award name="actor">, for example, and <won> and <lost> instead of <award won="yes"> and <award won="no">). That's because such a structure makes for more easily typed (as in "keyboarded") queries, as shown below. Here's the corresponding data for Best Picture: <award year="2004"> <bestPicture><won> <picture>Million Dollar Baby</picture> <person>Clint Eastwood</person> <person>Albert S. Ruddy</person> <person>Tom Rosenberg</person></won></bestPicture> <bestPicture><lost> <picture>Finding Neverland</picture> <person>Richard N. Gladstein</person> <person>Nellie Bellflower</person></lost> </bestPicture> <bestPicture><lost> <picture>The Aviator</picture> <person>Michael Mann</person> <person>Graham King</person></lost></bestPicture> <bestPicture><lost> <picture>Ray</picture> <person>Taylor Hackford</person> <person>Stuart Benjamin</person> <person>Howard Baldwin</person></lost> </bestPicture> <bestPicture><lost> <picture>Sideways</picture> <person>Michael London</person></lost> </bestPicture> </award> An Oscars Trivia Sampler Given the above structures, here are some of the trivia-type questions you might want to pose against this data: - List the nominees for Best Actor in 2004 for $actor in //award[ year="2004" ]/actor//person/text() return ( $actor, ", " ) => Jamie Foxx, Don Cheadle, Johnny Depp, Leonardo DiCaprio, Clint Eastwood - How many nominees were there? count( //award[ year="2004" ]/actor ) => 5 - Who won? //award[ year="2004" ]/actor/won/person/text() => Jamie Foxx - What picture did he win for? //award[ year="2004" ]/actor/won/picture => Ray - Has this actor previously been nominated for any other awards? let $actorName := //award[ year="2004" ]/actor/won/person/text() return if ( exists( //award[ year<"2004" ]//person ftcontains $actorName ) ) then "Yes!" else "No"=> No This list provides just the barest hint of the many types of queries you could ask. The ftcontains expression in the last query, by the way, is from the XQuery Full-Text working draft published last July. Populating the Database Once I knew more or less what my data was going to look like, I went looking for a way to populate my database. One thing was clear: The Academy Awards encompass 77 years of data, and I was not eager to start practicing my typing skills again. My first thought was IMDB. Terms on their website clearly forbade either screen-scraping their site or creating a database from their downloadable files without prior consent, so I requested permission by email. I never got a reply, but rather than pursuing that further I settled on my number two choice, grabbing my data from a small, privately maintained site known as The Oscar Guy. I'd never done any screen-scraping before and was a bit nervous about the legal ramifications of what I was intending. A well-connected friend put me in contact with one of the world's leading experts on digital rights and technology, who assured me that I should be fine, since there's no copyright on the facts of who won which Oscar. And while there might be a "thin" degree of copyright on the selection and arrangement of material on the Oscar Guy site, there shouldn't be a problem as long as I was building my own database and wasn't merely duplicating that selection and arrangement. Feeling somewhat reassured (there's no such thing as certainty when it comes to the possibility of litigation), I pressed on. My next question was: How does one screen-scrape? The answer (no surprise) again involved XQuery. But that's where I'll stop for the moment. I'll leave the meat of the technical discussion for my next installment, when I'll outline how I used XQuery and TagSoup to convert the Oscar Guy's source HTML into the XML format I required. I'll also summarize my experience with some handy tips on how to use XQuery for screen-scraping in general. And I'll publish my promised query front-end for the Oscars Trivia Website. If this topic motivates you to come up with some interesting XQuery-based Academy Award trivia questions of your own, by the way, send them in to me at If they're sufficiently novel or illustrative of interesting things you can do with XQuery, I'll include them as part of the site. Judges are standing by.
https://www.xml.com/pub/a/2005/03/23/oscar.html
CC-MAIN-2019-09
en
refinedweb
It's not the same without you Join the community to find out what other Atlassian users are discussing, debating and creating. Hi, I would like to know the Custom script post-function allows to update a custom date field (dd/mm/yyyy) with the current transition date. I have tried few codes but non of them works. using ScriptRunner 5.1.6 Thanks in advance. Problem solved. i used this script : import com.atlassian.jira.component.ComponentAccessor import java.sql.Timestamp def customFieldManager = ComponentAccessor.getCustomFieldManager() def dateCf = customFieldManager.getCustomFieldObject("customfield_12345") // Date time fields require a Timestamp issue.setCustomFieldValue(dateCf, new Timestamp((new Date()).time)) thanks for this document: Without knowing what you've tried, it's quite likely someone will give you the code you've already used. Could you tell us what you've tried? hi, i had some weird behavior (could not create new Timestamp) but seems like today it is working better... not sure what i did wrong..
https://community.atlassian.com/t5/Adaptavist-questions/ScriptRunner-How-to-update-current-date-on-custom-field-using-on/qaq-p/697370
CC-MAIN-2019-09
en
refinedweb
3.3. Mastering widgets ipywidgets package provides many common user interface controls for exploring code and data interactively. These controls can be assembled and customized to create complex graphical user interfaces. In this recipe, we introduce the various ways we can create user interfaces with ipywidgets. Getting ready The ipywidgets package should be installed by default in Anaconda, but you can also install it manually with conda install ipywidgets. Alternatively, you can install ipywidgets with pip install ipywidgets, but then you also need to type the following command in order to enable the extension in the Jupyter Notebook: jupyter nbextension enable --py --sys-prefix widgetsnbextension How to do it... 1. Let's import the packages: import ipywidgets as widgets from ipywidgets import HBox, VBox import numpy as np import matplotlib.pyplot as plt from IPython.display import display %matplotlib inline 2. The @interact decorator shows a widget for controlling the arguments of a function. Here, the function f() accepts an integer as an argument. By default, the @interact decorator displays a slider to control the value passed to the function: @widgets.interact def f(x=5): print(x) The function f() is called whenever the slider value changes. 3. We can customize the slider parameters. Here, we specify a minimum and maximum integer range for the slider: @widgets.interact(x=(0, 5)) def f(x=5): print(x) 4. There is also an @interact_manual decorator which provides a button to call the function manually. This is useful with long-lasting computations that should not run every time a widget value changes. Here, we create a simple user interface for controlling four parameters of a function that displays a plot. There are two floating-point sliders, a dropdown menu for choosing a value among a few predefined options, and a checkbox for boolean values: @widgets.interact_manual( color=['blue', 'red', 'green'], lw=(1., 10.)) def plot(freq=1., color='blue', lw=2, grid=True): t = np.linspace(-1., +1., 1000) fig, ax = plt.subplots(1, 1, figsize=(8, 6)) ax.plot(t, np.sin(2 * np.pi * freq * t), lw=lw, color=color) ax.grid(grid) 5. In addition to the @interact and @interact_manual decorators, ipywidgets provides a simple API to create individual widgets. Here, we create a floating-point slider: freq_slider = widgets.FloatSlider( value=2., min=1., max=10.0, step=0.1, description='Frequency:', readout_format='.1f', ) freq_slider 6. Here is an example of slider for selecting pairs of numbers, like intervals and ranges: range_slider = widgets.FloatRangeSlider( value=[-1., +1.], min=-5., max=+5., step=0.1, description='xlim:', readout_format='.1f', ) range_slider 7. The toggle button can control a boolean value: grid_button = widgets.ToggleButton( value=False, description='Grid', icon='check' ) grid_button 8. Dropdown menus and toggle buttons are useful when selecting a value among a predefined set of options: color_buttons = widgets.ToggleButtons( options=['blue', 'red', 'green'], description='Color:', ) color_buttons 9. The text widget allows the user to write a string: title_textbox = widgets.Text( value='Hello World', description='Title:', ) title_textbox 10. We can let the user choose a color using the built-in system color picker: color_picker = widgets.ColorPicker( concise=True, description='Background color:', value='#efefef', ) color_picker 11. We can also simply create a button: button = widgets.Button( description='Plot', ) button 12. Now, we will see how to combine these widgets into a complex graphical user interface, and how to react to user interactions with these controls. We create a function that will display a plot as defined by the created controls. We can access the control value with the value property of the widgets: def plot2(b=None): xlim = range_slider.value freq = freq_slider.value grid = grid_button.value color = color_buttons.value title = title_textbox.value bgcolor = color_picker.value t = np.linspace(xlim[0], xlim[1], 1000) f, ax = plt.subplots(1, 1, figsize=(8, 6)) ax.plot(t, np.sin(2 * np.pi * freq * t), color=color) ax.grid(grid) 13. The on_click decorator of a button widget lets us react to click events. Here, we simply declare that the plotting function should be called when the button is pressed: @button.on_click def plot_on_click(b): plot2() 14. To display all of our widgets in a unified graphical interface, we define a layout with two tabs. The first tab shows widgets related to the plot itself, whereas the second tab shows widgets related to the styling of the plot. Each tab contains a vertical stack of widgets defined with the VBox class: tab1 = VBox(children=[freq_slider, range_slider, ]) tab2 = VBox(children=[color_buttons, HBox(children=[title_textbox, color_picker, grid_button]), ]) 15. Finally, we create the Tab instance with our two tabs, we set the titles of the tabs, and we add the plot button below the tabs: tab = widgets.Tab(children=[tab1, tab2]) tab.set_title(0, 'plot') tab.set_title(1, 'styling') VBox(children=[tab, button]) There's more... The documentation of ipywidgets demonstrates many other features of the package. The styling of the widgets can be customized. New widgets can be created by writing Python and JavaScript code (see recipe Creating custom Jupyter Notebook widgets in Python, HTML, and JavaScript). Widgets can also remain at least partly functional in a static notebook export. Here are a few references: - ipywidgets user guide at - Building a custom widget at See also - Creating custom Jupyter Notebook widgets in Python, HTML, and JavaScript
https://ipython-books.github.io/33-mastering-widgets-in-the-jupyter-notebook/
CC-MAIN-2019-09
en
refinedweb
Introduction. CandyBar SparkFun Wish List - USB Cable Extension – 6 Foot - Polarized Connectors – Crimp Pins - CAT 6 Cable – 3ft - Ribbon Cable – 6 wire (15ft) - SparkFun USB Mini-B Cable – 6 Foot - MicroSD Card with Adapter – 8GB - LED RGB Strip – Addressable, Bare (1m) - Foam PCB Tape – 3M VHB Acrylic 1″ (1 yard) - FadeCandy NeoPixel Driver – USB-Controlled Dithering - Wall Adapter Power Supply – 5V DC 2A (USB Micro-B) - Raspberry Pi – Model B+ - Hook-Up Wire – Silicone 12AWG (Red, 10m) - Hook-Up Wire – Silicone 12AWG (Black, 10m) - Polarized Connectors – Housing (3-Pin)protection.×1 meter segments). If you want to add more LEDs, you will need to add more power. Dan ended up using five of these supplies to power, one for each of the five FadeCandies controlling the 2300 LEDs in the sculpture. Image courtesy of danjuliodesigns.com.. #!/usr/bin/env python # Light each LED in sequence, and repeat. import opc, time numLEDs = 480 client = opc.Client(‘fadecandy.local:7890’) while True: for i in range(59, , –1): pixels = [ (,,) ] * numLEDs pixels[i] = (255, 255, 255) client.put_pixels(pixels) time.sleep(0.01) for i in range(60, 119): pixels = [ (,,) ] * numLEDs pixels[i] = (255, 255, 255) client.put_pixels(pixels) time.sleep(0.01) for i in range(179, 120, –1): pixels = [ (,,) ] * numLEDs pixels[i] = (255, 255, 255) client.put_pixels(pixels) time.sleep(0.01) for i in range(180, 239): pixels = [ (,,) ] * numLEDs pixels[i] = (255, 255, 255) client.put_pixels(pixels) time.sleep(0.01) for i in range(299, 240, –1): pixels = [ (,,) ] * numLEDs pixels[i] = (255, 255, 255) client.put_pixels(pixels) time.sleep(0.01) for i in range(300, 359): pixels = [ (,,) ] * numLEDs pixels[i] = (255, 255, 255) client.put_pixels(pixels) time.sleep(0.01) for i in range(419, 360, –1): pixels = [ (,,) ] * numLEDs pixels[i] = (255, 255, 255) client.put_pixels(pixels) time.sleep(0.01) for i in range(420, 479): pixels = [ (,,) ] *.
https://projects-raspberry.com/building-large-led-installations/
CC-MAIN-2019-09
en
refinedweb
Source: Deep Learning on Medium It’s been a while since I posted a new blog post having been busy with other things in my life. I have been working on this project for a while now. And now, when it is finally done I can share it with you. Besides my passion to Machine Learning and AI algorithm in general, I have another not very common hobby, and it is the Japanese language. I have been studying it for a while now and can even get to technical words in ML field (機械学習 and デイープラーニング ) although I still have a long way to go. With this said, I thought to myself, why not join my two biggest passions together and build a cool project? I decided to design a computer algorithm which can reproduce Japanese letters (especially Hiragana and Katakana – ひらがなとカタカナ) using a Variational autoencoder. The database I used in this project is from the “ ETL Character Database”. The letters are organized in a very unusual way so make sure to read the instructions of how to handle the different databases (ETL 1–9). The first part we will cover is the preprocessing of our data. As I said, the database is not very friendly to Data scientists (although of course massive projects are on a whole different caregory). Let’s open the dataset: import bitstring import numpy as np from PIL import Image, ImageEnhance from PIL import ImageOps, ImageMath from matplotlib import pyplot as plt import cv2 %pylab inline t56s = '0123456789[#@:>? ABCDEFGHI&.](< JKLMNOPQR-$*);\'|/STUVWXYZ ,%="!' def read_record_ETL4(f, pos=6112): f = bitstring.ConstBitStream(filename=f) f.bytepos = pos * 2952 r = f.readlist('2*uint:36,uint:8,pad:28,uint:8,pad:28,4*uint:6,pad:12,15*uint:36,pad:1008,bytes:21888') return r filename = 'ETL4/ETL4/ETL4C' # specify the ETL4 filename here r = read_record_ETL4(filename) iF = Image.frombytes('F', (r[18], r[19]), r[-1], 'bit', 4) iP = iF.convert('L') enhancer = ImageEnhance.Brightness(iP) iE = enhancer.enhance(r[20]) plt.imshow(iE) In this part I am opening a single character from the database (using ETL -4 database only at the moment). The code I am using is taken from here with different tweaks concerning the execution (e.g. bitstring is not compatible with tensorflow in my system, so I had to split to two notebooks, one for preprocessing and the other for the model itself). As you can see the letter needs a serious preprocessing like cropping, filtering out the noise and stronger greyscale contrast in order to recoginze the character (which is 小 by the way, means “small” ). def create_data(): data = np.zeros((6113,76,72)) for i in range(6113): r = read_record_ETL4(filename,pos=i) iF = Image.frombytes('F', (r[18], r[19]), r[-1], 'bit', 4) iP = iF.convert('L') enhancer = ImageEnhance.Brightness(iP) iE = enhancer.enhance(r[20]) temp=np.array(iE) data[i,:,:] = temp return data data = create_data() This function creates the dataset itself in order to handle it in numpy array for convenience. There are 6113 pictures (grey scale) with resolution of 76×72 pixels. Let’s set a function that cleans our dataset. I implemented a simple gaussian blur and then thresholding (otsu’s histogram method) and a “TOZERO” binarization in order to preserve the stroke pressure grey scale. Did this in order to get a better a results hopefully later on, When we create the Japanese letters. def preprocessing_data(data1): # this function cleans the images and binarize them in order to create a better dataset for our VAE kernel = np.ones((3,3),np.float32)/9 crop_template = np.zeros((data.shape[0],data1.shape[2],data1.shape[2])) # cropping template for i in range(data1.shape[0]): dst = cv2.GaussianBlur(data1[i,:,:],(3,3),0) # smoothing ret,data1[i,:,:] = cv2.threshold(dst,0,255,cv2.THRESH_TOZERO+cv2.THRESH_OTSU) #binarizing crop_template[i,:,:] = data1[i,:72,:] # cropping return crop_template data = np.array(data, dtype = np.uint8) # 8 bit unsigned pictures for opencv data1=data.copy() #copy by value(and not by reference) data1 = preprocessing_data(data1) Let’s check some random samples from the dataset. ran =np.random.randint(int(data1.shape[0]), size=(2, 1)) figure() subplot(1,4,1),imshow(data[int(ran[0]),:,:],cmap = 'gray') title('original {}'.format(int(ran[0]))), xticks([]), plt.yticks([]) subplot(1,4,2),imshow(data1[int(ran[0]),:,:],cmap='gray') title('Sample {}'.format(int(ran[0]))), xticks([]), plt.yticks([]) clim([0, 45]) subplot(1,4,3),imshow(data[int(ran[1]),:,:],cmap = 'gray') title('original {}'.format(int(ran[1]))), xticks([]), plt.yticks([]) subplot(1,4,4),imshow(data1[int(ran[1]),:,:],cmap='gray') title('Sample {}'.format(int(ran[1]))), xticks([]), plt.yticks([]) clim([0, 45]) Great, let’s move on to our model after this preprocessing phase. First, I will explain in a nutshell the concept of the VAE in order to shed some light for those of you who are not familiar with this architecture. For a more in depth and elaborate explanation you can try this page. It gives a very thorough explanation about the relationship with Probablistic graphical models and deep learning concepts. Autoencoders are in great use in many fields of data science. The autoencoders can be used to compression of feature vectors, anomaly detection etc. It is based on unsupervised approach. The main idea of the autoencoder is to encode the data (labeled x in the graph) to a smaller dimension vector and then try to decode it back to the original (reconstruct) x’. The main difference between the AE (Autoencoder) and the VAE is that in the VAE the middle layer is considered as a normal distribution (every node represents its own normal distribution). How do we achieve that? — good question. 2 main things are different then the AE: - The loss function used in the model contains two elements: the first is the reconstruction loss which is the same as in the AE in order to train the network to reconstruct the data. and the second element is KL(Kullback -leibler) divergence loss. The KL divergence represents how different two distributions are. It has very unique characteristics like non symmetry, direct relation to fischer information metric and more. We use this loss in order to force the network to capture the layer between the encoder and the decoder to capture distribution similar to normal distribution. The KL divergence works as a regularizer. - The second difference is the reparameterization trick. Since the network learn it’s parameters using our trusty old back propagation algorithm, it needs to differntiate the layers. If we’re using sampling (like you can see on the right side of the graph), and we want to take derivate of a function of our sampled variable with respect to our parameter we have a problem since our variable is a random variable. The reparameterization trick will solve this problem (and I urge you to read the links I wrote before!). So you asking, how will the network will synthesize it’s own Japanese letters then? What we’re going to do is , after we trained the network on our dataset and made sure we were satisfied with the reconstruction, we will sample variables from standard normal distribution. Later, insert it as the input to the Decoder only and watch the output of the network ,which is basically random since we don’t have a designated input besides random variables sampled from a normal distribution! nice, isn’t it? So our goal here is to train the network with the dataset we preprocessed beforehand in order to create new handwritten Japanese letters that are not part of the dataset but based on it. I seperated the preprocessing and the model to two seperate notebooks since the “bitstring” package and tensorflow weren’t compatible for some reason on my rig. Saving the data we preprocessed using: np.save('Japanese.npy', data1) And went on to the next notebook: import numpy as np import tensorflow as tf import matplotlib.pyplot as plt import cv2 %matplotlib inline data1 = np.load('Japanese.npy') data1 =data1/160 %normalize pixels Let’s move on to build our encoder and our model: tf.reset_default_graph() # sess = tf.InteractiveSession batch_size = 32 X_in = tf.placeholder(dtype=tf.float32, shape=[None, 72, 72], name='X') Y = tf.placeholder(dtype=tf.float32, shape=[None, 72, 72], name='Y') Y_flat = tf.reshape(Y, shape=[-1, 72 * 72]) #for estimating loss keep_prob = tf.placeholder(dtype=tf.float32, shape=(), name='keep_prob') dec_in_channels = 1 n_latent = 12 reshaped_dim = [-1, 7, 7, dec_in_channels] inputs_decoder = int(49 * dec_in_channels / 2) def lrelu(x, alpha=0.3): return tf.maximum(x, tf.multiply(x, alpha)) def encoder(X_in, keep_prob): activation = lrelu with tf.variable_scope("encoder", reuse=None): X = tf.reshape(X_in, shape=[-1, 72, 72, 1]) x = tf.layers.conv2d(X, filters=64, kernel_size=4, strides=2, padding='same', activation=activation) x = tf.nn.dropout(x, keep_prob) x = tf.layers.conv2d(x, filters=64, kernel_size=4, strides=2, padding='same', activation=activation) x = tf.nn.dropout(x, keep_prob) x = tf.layers.conv2d(x, filters=64, kernel_size=4, strides=1, padding='same', activation=activation) x = tf.nn.dropout(x, keep_prob) x = tf.layers.Flatten()(x) mn = tf.layers.dense(x, units=n_latent) sd = 0.5 * tf.layers.dense(x, units=n_latent) epsilon = tf.random_normal(tf.stack([tf.shape(x)[0], n_latent])) z = mn + tf.multiply(epsilon, tf.exp(sd)) return z, mn, sd The batch size is set to 32, as a loss function we have a leaky relu (probably other loss functions will do in this simple model) with the negative end slope set to 0.3 (totally arbitrary). Our dataset is made of greyscale images of 72*72 . dropout helped to avoid isolated activated neurons and overfitting. Kernel size not too big of 4 and other standard hyperparameters. The number of latent units (the sampled layer) is 12 after several failed optimization attempts. z variable holds all the hidden units composed of mean and a standard deviation multiplied by a normal distribution sampled epsilon (This is the reparametrization trick mentined before) def decoder(sampled_z, keep_prob): with tf.variable_scope("decoder", reuse=None): x = tf.layers.dense(sampled_z, units=inputs_decoder, activation=lrelu) x = tf.layers.dense(x, units=inputs_decoder * 2 + 1, activation=lrelu) x = tf.reshape(x, reshaped_dim) x = tf.layers.conv2d_transpose(x, filters=64, kernel_size=4, strides=2, padding='same', activation=tf.nn.relu) x = tf.nn.dropout(x, keep_prob) x = tf.layers.conv2d_transpose(x, filters=64, kernel_size=4, strides=1, padding='same', activation=tf.nn.relu) x = tf.nn.dropout(x, keep_prob) x = tf.layers.conv2d_transpose(x, filters=64, kernel_size=4, strides=1, padding='same', activation=tf.nn.relu) x = tf.layers.Flatten()(x) x = tf.layers.dense(x, units=72*72, activation=tf.nn.sigmoid) img = tf.reshape(x, shape=[-1, 72, 72]) return img This is the decoder, it recieves a sampled z like we mentioned before (12 latent variables) and reconstructs the image. sess = tf.Session() sampled, mn, sd = encoder(X_in, keep_prob) dec = decoder(sampled, keep_prob) unreshaped = tf.reshape(dec, [-1, 72*72]) img_loss = tf.reduce_sum(tf.squared_difference(unreshaped, Y_flat), 1) latent_loss = -0.5 * tf.reduce_sum(1.0 + 2.0 * sd - tf.square(mn) - tf.exp(2.0 * sd), 1) loss = tf.reduce_mean(img_loss + latent_loss ) optimizer = tf.train.AdamOptimizer(0.0005).minimize(loss) sess = tf.Session() sess.run(tf.global_variables_initializer()) The most important part in this piece of code is the loss function definition, as we discussed before we have two parts, the image loss (MSE/L2 loss for this simple image) and the latent loss for the KL divergence. Adam optimizer, learning rate and all the other stuff are quite standard. def next_batch(num, data): ''' Return a total of `num` random samples and labels. ''' idx = np.arange(0 , len(data)) np.random.shuffle(idx) idx = idx[:num] data_shuffle = [data[ i] for i in idx] return np.asarray(data_shuffle) # batch = next_batch(batch_size, data1) This is the batch function for convenient input. And now let’s move on to train the network. for i in range(30000): batch = next_batch(batch_size, data1) sess.run(optimizer, feed_dict = {X_in: batch, Y: batch, keep_prob: 0.5}) if not i % 100: ls, d, i_ls, d_ls, mu, sigm = sess.run([loss, dec, img_loss, latent_loss, mn, sd], feed_dict = {X_in: batch, Y: batch, keep_prob: 1.0}) plt.imshow(np.reshape(batch[0], [72, 72]), cmap='gray') plt.show() plt.imshow(d[0], cmap='gray') plt.show() print('iteration: {}, loss:{}, image loss:{}, distribution loss:{}'.format(i, ls, np.mean(i_ls), np.mean(d_ls))) You can see here the reconstruction ability in the beginning of training(left) and the end of the training session (right). Now, after we finished training let’s see if the decoder is able to produce a new letter from a random sampled z (normally distributed). randoms = [np.random.normal(0, 1, n_latent) for _ in range(1)] imgs = sess.run(dec, feed_dict = {sampled: randoms, keep_prob: 1.0}) imgs = [np.reshape(imgs[i], [72, 72]) for i in range(len(imgs))] # imgs = np.array(imgs) # imgs.shape # for img in imgs: # plt.figure(figsize=(1,1)) # plt.axis('off') plt.imshow(imgs[0], cmap='gray') Well we got a decent looking さ(“sa”) as you can see. I guess with more hyperparameters tuning we can get way better results. The notebooks: Preprocessing , Model. Thanks for reading so far, make sure to take a look at the reference link since I took some of my code from there. References: - — really good article, very well organized and explained. - - - For any questions , let me know: tomer@nahshoh.net Thank you!
http://mc.ai/computer-made-japanese-letters-through-variational-autoencoder/
CC-MAIN-2019-09
en
refinedweb
> I really think that we need to avoid trying to have a single 'known good' > flag/generationnrwith the inode.I don't think we should have anything in the inode. We don't want tobloat inode objects for this cornercase.> if you store generation numbers for individual apps (in posix attributes > to pick something that could be available across a variety of > filesystems), you push this policy decision into userspace (where itAgreed> 1. define a tag namespace associated with the file that is reserved for > this purpose for example "scanned-by-*"What controls somewhat writing such a tag on media remotely ? Locally youcan do this (although you are way too specialized in design - an LSM hookfor controlling tag setting or a general tag reservation sysfs interfaceis more flexible than thinking just about scanners.> 2. have an kernel option that will clear out this namespace whenever a > file is dirtiedThat will generate enormous amounts of load if not carefully handled.> 3. have a kernel mechanism to say "set this namespace tag if this other > namespace tag is set" (this allows a scanner to set a 'scanning' tag when > it starts and only set the 'blessed' tag if the file was not dirtied while User space problem. Set flags 'dirty', then set bit 'scanning'clear 'dirty' then clear 'scanning' when finished. If the dirty flag gotset while you were scanning it will still be set now you've cleared youscanning flag. Your access policy depends upon your level of paranoia (eg"dirty|scanning == BAD")> programs can set the "scanned-by-*" flags on that the 'libmalware' library We've already proved libmalware doesn't make sense> L. the fact that knfsd would not use this can be worked around by running > FUSE (which would do the checks) and then exporting the result via knfsdwNot if you want to get any work done.> what did I over complicate in this design? or is it the minimum feature > set needed?> > are any of the features I list impossible to implement?Go write it and see, provide benchmarks ? I don't see from this how youhandled shared mmap ?
http://lkml.org/lkml/2008/8/16/65
CC-MAIN-2017-13
en
refinedweb
User talk:Mormegil From OpenStreetMap Wiki Wikiteam Hi Mormegil, I saw your edits at the wikicleanup and would like to ask if you are interested in Talk:Wiki#Forming_a_Wiki_Team? --!i! 16:35, 4 November 2010 (UTC) - Hi! I’ll be glad to help around, but as you can see, I am not present too often on the wiki. --Mormegil 11:57, 14 November 2010 (UTC) - This isn't a problem, everybody works as much as he is willing to :) Thanks for your help on the armenain namespace. Was there a trick? I messed up with this large template hierarchy :( --!i! 16:08, 14 November 2010 (UTC) - Not too much of a trick, it’s just the template works completely automagically, it expects all translations to be at Xx:Page title; if it is there, it is displayed, using a few nested templates etc. It’s not like there is a place where you could just add a link to the new page. --Mormegil 22:32, 15 November 2010 (UTC) Moving pages Please have a look at links to the old page and resolve DoubleRedirects after moving a page! --phobie m d 18:11, 7 August 2011 (BST) Thank you For this (ah, the perils of multitasking!) and this edit. -- CristianCantoro (talk) 08:55, 23 September 2014 (UTC)
http://wiki.openstreetmap.org/wiki/User_talk:Mormegil
CC-MAIN-2017-13
en
refinedweb
When you need a full-text search for Django-CMS-based website, you can use Haystack and django-cms-search. The latter module ensures that all CMS Pages get indexed. One important thing to mention is that if you use any custom Plugins, search_fields need to be defined for them, so that the Pages using them are indexed properly. For example, here is an Overview plugin which makes its title and description searchable: from django.db import models from django.utils.translation import ugettext_lazy as _ from cms.models import CMSPlugin class Overview(CMSPlugin): title = models.CharField(_('Title'), max_length=200) description = models.TextField(_('Description')) url = models.URLField(_('url'), max_length=200, blank=True) search_fields = ("title", "description") def __unicode__(self): return self.title For more information check the documentation online:
http://djangotricks.blogspot.com/2011/11/django-cms-haystack-and-custom-plugins.html
CC-MAIN-2017-13
en
refinedweb
FvwmPerl - the fvwm perl manipulator and preprocessor F. This module is intended to extend fvwm commands with the perl scripting power. It enables to embed perl expressions in the fvwm config files and construct fvwm commands. If you want to invoke the unique and persistent instanse of FvwmPerl, it is suggested to do this from the StartFunction. Calling it from the top is also possible, but involves some issues not discussed here.. Aliases allow to have several module invocations and work separately with all invocations, here is an example: One of the effective proprocessing solutions is to pass the whole fvwm configuration with embeded perl code to "FvwmPerl --preprocess". An alternative approach is to write a perl script that produces fvwm commands and sends them for execution, this script may be loaded using "FvwmPerl --load". There are hovewer intermediate solutions that preprocess only separate configuration lines (or alternatively, execute separate perl commands that produce fvwm commands)."); ""}% There an other line, is perl code.. export [func-names] Send to fvwm the definition of shortcut functions that help to activate different actions of the module (i.e. eval, load and preprocess).. There. See FVWM::Module::Toolkit::show_message for more information. There are several global variables in the main namespace that may be used in the perl code: . F). A simple test: just like any other fvwm commands expands several dollar prefixed variables. This may clash with the dollars perl uses. You may avoid this by prefixing SendToModule with a leading dash. The following 2 lines in each pair are equivalent:. FvwmPerl being written in perl and dealing with perl, follows the famous perl motto: "There’s more than one way to do it", so the choice is yours.. The fvwm(1) man page describes all available commands. Basically, in your perl code you may use any function or class method from the perl library installed with fvwm, see the man pages of perl packages General::FileSystem, General::Parse and FVWM::Module. Mikhael Goikhman <migo@homemail.com>.
http://huge-man-linux.net/man1/FvwmPerl.html
CC-MAIN-2017-13
en
refinedweb
gsasl_base64_encode - API function #include <gsasl.h> int gsasl_base64_encode(char const * src, size_t srclength, char * target, size_t targsize); char const * src input byte array size_t srclength size of input byte array char * target output byte array size_t targsize size of output byte array Encode data as base64. Converts characters, three at a time, starting at src into four base64 characters in the target area until the entire input buffer is encoded. Returns the number of data bytes stored at the target, or -1 on error. Use gsasl_base64.
http://huge-man-linux.net/man3/gsasl_base64_encode.html
CC-MAIN-2017-13
en
refinedweb
Hi, I ve got problem in implementing ADT in my program, below I posted my PQ.c, PQ.h -> which is the ADT (not full ADT) you'll see why in my code...and office.c+ office. which is the program using the functions implemented in PQ this is the code (not full code, but will gives you the idea of my problem): PQ.h PQ.cPQ.cCode: typedef struct workers PQItem; //typedef WRecord PQItem; struct pqueue { int size; PQItem *item; }; typedef struct pqueue PQ; PQ *initPQ( void ); void swapArray( PQItem *arr1, PQItem *arr2 ); office.hoffice.hCode: #include <stdio.h> #include <stdlib.h> #include "PQ.h" PQ *initPQ( void ) { PQ *pq; pq = malloc ( sizeof(PQ) ); if( pq == NULL ) { fprintf( stderr, "ERROR: Memory allocation for priority queue failed;" "program terminated.\n" ); exit( EXIT_FAILURE ); } return pq; } void swapArray( PQItem *arr1, PQItem *arr2 ) { PQItem temp; temp = *arr1; *arr1 = *arr2; *arr2 = temp; return; } office.coffice.cCode: #include "PQ.h" #define NAMESIZE 100 #define BASE 100 /* Structure Template */ typedef int Time; struct workers { char *name; Time starttime; Time stoptime; }; typedef struct workers WRecord; //typedef struct workers PQItem; /* Functions Prototype */ void usage( char *progname ); FILE *open_file( char *progname, char *fname, const char *mode ); char *alloc_string_memory( int len ); char *get_name( FILE *fp ); WRecord *read_worker( FILE *fp, int workers_total ); Time add_time( Time time1, Time time2); My problem is PQItem, where it is the same type as struct workers, but I dont know how to linked the files therefore PQ recognize struct workers (or PQItem or WRecord), there in my program you'll see two appearences ofMy problem is PQItem, where it is the same type as struct workers, but I dont know how to linked the files therefore PQ recognize struct workers (or PQItem or WRecord), there in my program you'll see two appearences ofCode: #include <stdio.h> #include <stdlib.h> #include <string.h> #include "office.h" int main (argc....) { /* main program here, not related with problem */ } /*functions here (from function prototype declared in office.h, which is not related as well IMO */ one with // and without...that's one the combination I ve tried to make the linking work..one with // and without...that's one the combination I ve tried to make the linking work..Code: typedef struct workers PQItem; but so far what I ve get is these error: - redefinition error -or no semicoolon at the end of the struct -or pointer redeferencing into INCOMPLETE type (this is in swapArray functions in PQ.c, where it doesnt recognize PQItem, and I believe this is the problem, HOW to make PQItem recognized??) i hope someone can help me or gimme any suggestion, I dont need full ADT...just semi ADT thanks Ferdinand
https://cboard.cprogramming.com/c-programming/27315-half-adt-nested-struct-problem-printable-thread.html
CC-MAIN-2017-13
en
refinedweb
Consider the simple relation between Employee and Company models(many to many): Company model: has_many :employees, through: :company_employees has_many :company_employees has_many :companies, through: :company_employees has_many :company_employees belongs_to :employee belongs_to :company has_many :companies def owners_linked @company_employees = [] owner.companies.each do |company| @company_employees.push (company.company_employees.includes(:company, :employee)) # when += instead of push - it works end respond_to do |format| format.js {render "employees_list"} end end @company_employees.push company.company_employees.includes(:company, :employee) This doesn't have anything to do with your use of includes. When you use += you end up with an array of CompanyEmployee objects. However when you use push you are no longer concatenating arrays but creating an array of collections. You are then calling employee on the collection rather than an element of the collection which is why you get an error. Personally I would write this as @company_employees = owner.companies.flat_map do |company| company.companee_employees.include(...) end Although I would do so for reasons of succinctness rather than performance. Any performance difference between += and other ways of concatenating arrays is minuscule compared to the time it takes to fetch data from the database. This doesn't entirely solve your n+1 problem though, since the data for each company is loaded separately. I would do @company_employees = owner.companies.include(company_employees: [:company, :employee]).flat_map(&:company_employees) Which doesn't do as many queries.
https://codedump.io/share/RbeDwYHP50uD/1/includes-method--n-1-issue--doesn39t-work-with-a-push-method-but-does-with--when-assigning-to-an-array
CC-MAIN-2017-13
en
refinedweb
:- It was just added recently (probably a devfs thing). Before that, it just ran the "lvm_fs_setup()" (or whatever) function, and didn't do anything with the output. Hmm, that means my suggestion may be broken. Need something like: #if LINUX_KERNEL_VERSION > KERNEL_VERSION (2, 3, 38) foo.de = #endif lvm_fs_setup(); Cheers, Andreas -- Andreas Dilger \ "If a man ate a pound of pasta and a pound of antipasto, \ would they cancel out, leaving him still hungry?" -- Dogbert
https://www.redhat.com/archives/linux-lvm/2001-October/msg00124.html
CC-MAIN-2017-13
en
refinedweb
: #!/lusr/bin/python print "Hello World!" Before you run it, you must make it into an executable file. > chmod +x Hello.py To run the script, you simply call it by name like so: > ./Hello.py For complex problems, you will break up the task of solving the problem into separate subtasks. Each of the subtasks will be implemented as one or more functions. There will be a main function that will call these other functions to implement the solution. Even for the simplest problems, where you do not have any auxiliary functions, write the main() function and call it. The skeleton of your Python code will look like: def main(): ... ... main() A couple of things to remember about Python programs.
http://www.cs.utexas.edu/~mitra/csSummer2011/cs303/lectures/python.html
CC-MAIN-2014-52
en
refinedweb
24 November 2009 20:37 [Source: ICIS news] WASHINGTON (ICIS news)--US Interior Secretary Ken Salazar on Tuesday announced broad onshore oil and natural gas leasing plans for 2010 and promised that oil, gas and coal will continue to play important roles in the Obama administration’s energy policies. But Salazar also had harsh words for unnamed energy industry trade associations, saying they have used “poison and deception” to criticise his department’s policies. Several major energy industry trade associations - and some in Congress - have accused the Obama administration in general and the Interior Department in particular of using go-slow tactics to delay and discourage development of carbon-based resources in favour of renewable and alternative energy technologies. The production, availability and pricing of natural gas are crucial concerns for the ?xml:namespace> Salazar said the Department of the Interior will hold 38 lease sales next year for energy development on 2.7m acres of federal lands in nine western states and in He said the Obama administration will continue to invite oil and gas development while it pursues what he termed a more balanced energy policy. “Our nation needs a balanced and appropriate use of our conventional and renewable energy resources,” Salazar told a press conference. “That means oil, gas and coal will continue to play an important role in our energy mix as we develop and expand the use of wind, solar, geothermal and other renewable sources.” Salazar was asked to respond to drilling industry criticism that the Obama administration had issued 1,000 fewer leases in its first year than did the administration of President George W Bush. “Frankly, we have made many announcements of onshore and offshore leasing programmes, a significant number of properties,” Salazar said, noting that by the end of this year his department will have held 36 lease sales covering nearly 2,400 parcels on nearly 3m acres of federal lands. “We have brought our nation’s energy development into balance, ensuring that those oil and gas resources are developed in the right way and in the right places,” he said. “But you wouldn’t know it from the untruths that are being issued by oil and gas industry trade associations,” Salazar added. Although he did not identify specific trade groups, he said that some had used “poison and deception” in criticising the administration’s oil and gas development policies. In addition, he said that significant portions of parcels already leased to energy companies are not being aggressively developed. “Of the more than 7,000 current onshore leases, some 5,211 are not producing, and of the 53,585 offshore leases, 26,000 are not producing,” Salazar said. “So large parts of the public domain are being made available but are not being developed.” Some energy legislation pending in Congress would impose a “use it or lose it” deadline on exploration and development companies for leases they acquire. Salazar said his department is in the process of “rebalancing” the 2007-2012 offshore leasing programme that had been issued by the Bush administration in its final days in office but which was ruled invalid by a federal court on environmental grounds. He said he could not say when a new five-year offshore development plan would be ready, but that “I hope to bring my evaluation of that plan to a conclusion in the near future”.
http://www.icis.com/Articles/2009/11/24/9266882/us-sets-onshore-oil-and-gas-leasing-plan-for-2010.html
CC-MAIN-2014-52
en
refinedweb
Insert"... A Statement will always proceed through the four steps above for each SQL query Executing Prepared Statement ; } Executing Prepared Statement Prepared Statement represents the pre... is no-parameter prepared statement example. Example- At first create table named student... String PreparedStatement statement = con.prepareStatement(query JDBC Prepared Statement Insert JDBC Prepared Statement Insert The Tutorial illustrates a program in JDBC Prepared... the code.set String ( ) - This is a method defined in prepared Statement class insertion in SQL - SQL insertion in SQL Query is "insert into employee values('"+eno... using prepared statement!"); Connection con = null; try... in the database because of single code in the name. dbase is MS-SQL emp.name data type PDO Prepared Statement for us to use a Prepared Statement for sending SQL statements to the database..., using this we can reduce the execution time. The prepared statement can... benefits: The only requirement in this statement is that the query should... of command in sql query to the database, In case all the commands successfully, return Usage of setDate() in prepared Statement - JDBC Usage of setDate in prepared Statement Hi, I have created a jsp...() of prepared statement,the following error is displayed: setDate() not available in prepared statement. Basically, I need to accept the date dynamically...; : Prepared statement is good to use where you need to execute same SQL statement... statement. Update record is most important operation of database. You can update one < JDBC Prepared Statement Update JDBC Prepared Statement Update The Update Statement... Prepared Statement Update. The code include a class Jdbc Prepared data insertion and fetch 1 data insertion and fetch 1 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ <..." content="text/html; charset=UTF-8"> <title>JSP Page</title>..."); // String query = "select * from posting where ignite_id='"+id Creation and insertion in table ; statement and insert data using "insert " statement query in the created database. Before running this java code you need to paste "mysql...CREATION OF TABLE,INSERTION &DISPLAY OF DATA USING SQL JDBC Prepared Statement Example how to update the table using prepared statement. At first create a database...; } JDBC Prepared Statement java.sql.PreparedStatement is enhanced version... Insert Prepared Statement ); System.out.println("Connected to the database"); // Create a query String String query... a precompiled SQL statement. It is alternative to Statement At first Create named student a table in MySQL database as CREATE TABLE `student` ( `rollno` int(11 Prepared Statement With Batch Update Prepared Statement With Batch Update  ... PreparedStatementBatchUpdate Prepared Statement Batch Update Example! Added... facility. In batch update more than one records can be added in the database Deleting Records using the Prepared Statement Deleting Records using the Prepared Statement  ... DeleteRecords Delete records example using prepared statement! Number... the records from the database table by using the PreparedStatement interface Insertion into database Insertion into database Hi, I need code for inserting the multiple select box values into database.Please do send me the code. Thanks for ur immediate replies its helping a lot Set Timestamp by using the Prepared Statement example by using the Prepared Statement! 1 row(s) affected) Database... Set Timestamp by using the Prepared Statement... will teach how to set the Timestamp in database table by using Set Time by using the Prepared Statement Set Time by using the Prepared Statement  ...:\vinod\jdbc\jdbc\PreparedStatement>java SetTime Prepared Statement Set... the time in database table by using the PreparedStatement interface of java.sql Hibernate Prepared Statement This section contain how prepared statement works in hibernate insertion in SQL - SQL insertion in SQL Hi! Everybody... i have a problem with sql insertion. When i am inserting values through command i.e. insert into employee values(,,,,); here i want to insert ' in employee name column of database Prepared Statement Set Big Decimal Prepared Statement Set Big Decimal  ... the big decimal and how can be set it in the database table by using... decimal and which type to use this in the database. We know that using of both Update Records using Prepared Statement Update Records using Prepared Statement  ... through Prepared Statement! Updating Successfully! After... a question, what is updating the records. In the case of relational database Select Records Using Prepared Statement Select Records Using Prepared Statement  ... SelectRecords Select Records Example by using the Prepared Statement... that the PreparedStatement object represents a precompiled SQL statement. See brief JDBC: Delete Record using Prepared Statement JDBC: Delete Record using Prepared Statement In this section, we will discuss...; : Prepared statement is good to use where you need to execute same SQL... of student whose roll_no is 3 using prepared statement. package jdbc Prepared Statement Example . An INSERT INTO statement is used to insert the value into the database table... is used to make the SQL statement execution efficient. In Java, when we use the JDBC to work with database java.sql provides two interfaces for executing File insertion into oracle database File insertion into oracle database How to Read and Insert a file (any format) into a Oracle database problem in insert query - JSP-Servlet problem in insert query Hi! I am using this statement for data insertion into table but data is not moving only null passed and stored... Hi friend, We check your Query it is correct .If you have PHP SQL Insertion PHP SQL Insertion PHP SQL Insertion is used to execute MySQL queries in PHP script. It is used to send insert query or command that adds the records to MySQL error : not an sql expression statement error in connecting to database SQLserver 2005 in Jdeveloper,i m usin struts and jsp my pogram: import java.sql.*; public class TaskBO { public TaskBO... { Connection conn = DatabaseManager.getConnection(); Statement stmt Set Data Types by using Prepared Statement Set Data Types by using Prepared Statement  ... the prepareStatement takes SQL query statement and then returns the PreparedStatement... Statement! 1 row(s) affected) After executing the program: Database Table Select query in JSP Select query in JSP We are going to describe select query in JSP.... After that we create JSP page than we have make database connection. After that we use SELECT query. SELECT query is a retrieve the data from database than Prepared Statement Set Object Prepared Statement Set Object  ... the parameterized SQL statement to the database that contains the pre-compiled... PreparedStatementSetObject Prepared Statement Set Array Example! 1 Record Using the Prepared Statement Twice Using the Prepared Statement Twice  ... TwicePreparedStatement Twice use prepared statement example! List of movies... represents the precompiled SQL statement. Whenever, the SQL statement is precompiled PHP MySQLI Prep Statement PHP-MySQLI:Prepared-Statement mysqli::prepare - Prepares a SQL query and returns a statement handle to be used for further operations on the statement. The query should consist of a single sql query. There are two styles are available Inserting Records using the Prepared Statement Inserting Records using the Prepared Statement  ... records example using prepared statement! Enter movie name: Bagban... to learn how we will insert the records in the database table by using Count Records using the Prepared Statement Count Records using the Prepared Statement  ... to count all records of the database table by using the PreparedStatement... will know that how many records in a database table then you get easily with the help Set Date by using the Prepared Statement Set Date by using the Prepared Statement  ...\PreparedStatement>java SetDate Prepared statement set date example... for setting date in the database table by using the PreparedStatement interface Record using Prepared Statements ; : Prepared statement is good to use where you need to execute same SQL statement many...(); } } } Output : Insert Record using Prepared Statement...JDBC: Insert Record using Prepared Statements In this section, you will learn insertion error - JSP-Servlet insertion error my first jsp page : In this i m getting all the values through a method called getAllDetails,the values are getting inserted... into table. below is the codeof jsp and java pages; function writing querries qnd connecting for insertion, accessing.... writing querries qnd connecting for insertion, accessing.... i...:8080/examples/jsp/insert.jsp"> <table> <tr><td>Name:</td><... = DriverManager.getConnection("jdbc:odbc:student"); Statement st=con.createStatement file insertion - JSP-Servlet file insertion How to insert and retrieve .doc files into sql server with example using jsp and servlets Update statement Update statement I create a access database my program When I click... database is not update I write this program using 3 differfnt notepad pages MY...(); f.setVisible(true); } } this is my update query inside CarConnector.java public how to write a query for adding records in database how to write a query for adding records in database How write fire query in JSP for adding records in database Set byte, short and long data types by using the Prepared Statement Set byte, short and long data types by using the Prepared Statement... SetByteSortLong Set Byte,short and long example by using Prepared Statement... with MySQL database by using the JDBC driver. After establishing the connection Html+jsp+database is enough to do the small operation Html+jsp+database is enough to do the small operation Hai , If u want to do simple insetion and data retrival operation throw jsp ?.you need... result = null; Statement stmt = null; String Query=" INSERT data insertion from xml file to database table data insertion from xml file to database table Hi all, I have data in the XML file. I need to insert it into table in the database using servlet. so please reply me . ThankYou Accessing database from JSP by a database query. "stmt" is a object variable of Statement .Statement Class... Accessing database from JSP  ... ; This will create a table "books_details" in database "books" JSP Code database through jsp database through jsp sir actually i want to retrieve the data from database dynamically.because i dont know how many records are there in the database? thanks Here is an example of jsp which retrieves data from Inserting Data In Database table using Statement Inserting Data In Database table using Statement...; Table in the database before Insertion...: Table in the database after Insertion Database the database, create and populate tables, query individual tables. (You must...Database Hi, i need help building a database based on something like...), multiplicity (or cardinality), in the context of the database system (i.e. give query string database and printing on another page. i have to take that variable from servlet page to different servlet jsp page and that i want to do with query string so...query string on my servlet page i take the values of the field The DELETE Statement The DELETE Statement The DELETE statement is used to delete rows from a table. database will update that is why deletion and insertion of data will be done. Syntax   query query how to get data from database in tables using swings database query database query create table file1 ( file_id int , file_data text ); i'm unable to create this table and the error is invalid type text. plz help me javascript variable value insertion in DB javascript variable value insertion in DB how can I insert javascript variable value into database using php Query Query how can i set path and Classpath or Environmental Variable for jsp/servlet program to compile and run do not repeat values in databse during insertion using php and javascript do not repeat values in databse during insertion using php and javascript ... sql.php and form.jsp. i want to do insert values into mssql database from user input form. sql.php is able to insert the values into database. i have connected query - SQL tell me how to write the query in database but not jsp code. thank u...query hi sir i have 2 tables,one is user_details and the other is employee in my database. in user_details i have given a userid varchar Php Sql Query Insert Php Sql Query Insert This example illustrates how to execute insert query with values in php application. In this example we create two mysql query for insert statement with separate values in the database table. The table before query on strings query on strings i want to convert JTextField data into string... with a database element but it shows some error. Hi Friend, Try the following... = DriverManager.getConnection("jdbc:odbc:student"); Statement st dynamic delete and insertion in tables dynamic delete and insertion in tables hey... i have a problem..I am working on a problem management system..my code for a particular jsp page is as follows in this page i want to show the admin the already present records SQL Backup query with where statement SQL Backup query with where statement SQL Backup query with where statement ... in Backup query return you the restricted records into stu_tab_ Backup mysql query with ("jdbc:odbc:excel"). Execute query "select name, address from [Sheet1$]". Here... database using the loops. For more illustration, we are providing you java example... = DriverManager.getConnection("jdbc:odbc:excel"); Statement stmt = con.createStatement update statement in mysql update statement in mysql Update statement to update the existing records in database table. The given code creates a mysql connection and use the update query to update the record. To update record, we write query Login Query Login Query how to login with usertype in jsp page and redirect then whith user tyep here is my code <% Connection con=null; Statement stmt=null; ResultSet rst=null; String email=request.getParameter JDBC Batch Example With SQL Insert Statement for data insertion in the database table. 1. Create Statement object using...JDBC Batch Example With SQL Insert Statement: In this tutorial, we are discuss about insert SQL statement with the jdbc batch. First of all, we
http://www.roseindia.net/tutorialhelp/comment/86740
CC-MAIN-2014-52
en
refinedweb
Using Terracotta for Configuration Management By JR Boyens 01 May 2008 | TheServerSide.com Configuration. Getting a value out of the configuration looks something like this: String myProperty = Config.getInstance().get("hostname", "propertyname"); However, this was too restrictive. In a lot of cases, we had JVMs that served specific roles, and the hostname wasn't important - or we had JVMs that needed specific tuning by hostname and the role didn't matter. So we decided to use regex for the "primary" index lookup, so that we could specify "hostname.*" or ".*rolename", or a full, exact match, to pull out the property we were interested in. Terracotta makes most of the implementation very easy. All we had to do was design the Config class such that we could get properties out and store them, then we tuned Terracotta such that the Terracotta runtime knew that the Config object's map was to be shared across other machines. Then, at runtime, whenever the JVMs accessed the Config object's internal map, Terracotta would keep track of changes and propagate them automatically. Network traffic was very low, as not only are the values very small, but only the changes are sent across the wire. To be the "most current," all the client JVMs would have to do is keep asking the Config object for values, instead of caching them. (They can cache the returned property values, of course... but if you wanted the values to refresh, you'd have to hit the Config object again eventually, so you'd have the cache expire every so often.) Here's an example Config object for us: package config; import java.util.HashMap; import java.util.Map; import java.util.concurrent.ConcurrentHashMap; public class Config { private Map<String, Map<String, String>> configuration = new ConcurrentHashMap<String, Map<String, String>>(); private static Config instance=new Config(); public static Config getInstance() { return instance; } public String getMatchingPrimaryKey(String regex) { if(getInstance().configuration.containsKey(regex)) { return regex; } for(String s : getInstance().configuration.keySet()) { if(s.matches(regex)) { return s; } } return null; } public String get(String primary, String secondary) { String v = null; Map<String, String> m = getInstance().configuration.get(primary); if(m == null) { // okay, we need to try to fetch based on a regex. Let's iterate through, looking // for the first match String s = getMatchingPrimaryKey(primary); if(s != null) { m = getInstance().configuration.get(s); } } if(m != null) { v = m.get(secondary); } return v; } public void set(String primaryKey, String secondary, String value) { Map<String, String> m=getInstance().configuration.get(primaryKey); if(m == null) { m = new ConcurrentHashMap<String, String>(); getInstance().configuration.put(primaryKey, m); } m.put(secondary, value); } } Note how vanilla the code is. All of the hard work will be managed by the Terracotta configuration file. (Note, also, the getMatchingPrimaryKey() - if you are using a regex, you're far better off calling this early and storing the resulting key.) The <dso> part of the tc-config.xml should contain these sections: <instrumented-classes> <include> <class-expression>config.Config</class-expression> </include> </instrumented-classes> <roots> <root> <field-name>config.Config.instance</field-name> </root> </roots> This means that the config.Config class is to be processed as being distributed by the DSO engine. The roots section explains what instance variables to share across the DSO network, and the locks tells DSO how to synchronize access (and what kind of access to synchronize.) As long as we start the JVM with the DSO bootjars in place, this is all that's necessary to give us our distributed configuration. Remember the code we used to access the Config? Here're some classes to show you the tests, as you run them along with each other: package executables; import config.Config; public class SetValue { public static void main(String[] s) { System.out.println(s[0]+","+s[1]+","+s[2]); Config.getInstance().set(s[0], s[1], s[2]); } } package executables; import config.Config; public class ReadValue { public static void main(String[] s) { System.out.println(Config.getInstance().get(s[0], s[1])); try { Thread.sleep(10000); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println(Config.getInstance().get(s[0], s[1])); } } To see these in action, start up three console windows. In the first, crank up the DSO server with "start-tc-server", then in the second, run the ReadValue class with keys you'd like to use: dso-java -cp . executables.ReadValue myhost myprop Immediately after starting the ReadValue class, run the SetValue class in the third console: dso-java -cp . executables.SetValue myhost myprop value1 The ReadValue class will give you a null report immediately (because the value hasn't been set); then, after ten seconds, the "value1" will be dumped. If you re-run the ReadValue, you'll get "value1" still - showing that the configuration is persistent across client JVM runs - and if you rerun SetValue with a different third argument, you'll see that ReadValue updates properly. Easy stuff, really, and it's very convenient. Biography JR Boyens is a Senior Developer for Interactions Corporation. Interactions optimizes customer service by integrating human intent recognition into standards based voice platforms delivering significant cost savings and unparalleled caller experience. His previous work included being a contributor to the RIFE web framework ( ). He enjoys short walks around indoor pools and talking about himself in the third person. In his free time he solves complex addition problems and mentors mallards on self-actualization with his 1 year old daughter. He can be contacted via email at jboyens@interactions.net.
http://www.theserverside.com/news/1363824/Using-Terracotta-for-Configuration-Management
CC-MAIN-2014-52
en
refinedweb
>>?NEPTUNE_=96_unedited?= Released on 2012-10-17 17:00 GMT Condensed will go to clients. NEPTUNE - East Asia - 110627 REGION July will likely bring to the foreground elements of instability in East Asia, ranging from China's economy to South China Sea territorial disputes to Southeast Asian politics and business. Regionally, tensions over the South China Sea have shown signs of subsiding after several months of escalation between China and both Vietnam and the Philippines. For instance, China and Vietnam conducted a joint sea patrol to show they can still cooperate and negotiated a temporary cessation of tensions. However there is no solution to territorial disputes in sight, and underlying factors driving tensions remain firmly in place. Vietnam feels that backing down from China will run enormous economic and security risks. The U.S. and the Philippines are conducting naval drills in the sea to show alliance strength. Meanwhile there is great desire to explore for oil and natural resources during a time of high commodity prices: the Philippines has increased exploration this year, and China will deploy Marine Oil 981, a large deep-water oil drilling platform in the South China Sea in July. CHINA July 1st marks the 90th anniversary of the founding of the Communist Party of China. Rumors say the event will be marked by a test float of the nation's first aircraft carrier (set for full deployment in October) and the opening of the Shanghai-Beijing high-speed train. The aircraft carrier is mostly symbolic about China's global status, rather than militarily significant; and the high-speed train opens amid an anti-corruption crackdown targeting the railway ministry and controversy over the nation's railways plans. Even after the big event China will maintain tight security. But the events will garner fanfare and attention. On a deeper level, risks of instability are continuing to climb. Inflation will remain high, or even at peak levels, in July, putting pressure on a society that has already proved to be increasingly restless in terms of protests, strikes and riots in 2011. Inflation is driving a new wave of unauthorized labor strikes emerging at factories similar to the 2010 round of strikes, but raising the stakes since many companies feel they have already raised wages enough. There are rumors of growing unrest in Tibet that have been muffled, and warnings from leaders that stability must still be maintained Xinjiang, the two are the most restive regions yet have not seen big trouble so far this year. Rising threats to economic growth will make the government reluctant to harden its stance against inflation, and policy mistakes in either direction can exacerbate economic volatility and social problems. Fears that China will not be able to handle this precarious balance will contribute to economic doubts internationally, especially with the ongoing debate about China's massive local government debt problem and need for a bailout. At the end of the month, top leaders will gather for an annual economic policy meeting, which will be watched closely, especially for signs that they will deem inflation sufficiently contained to re-accelerate economic growth. But re-acceleration poses risks too. SOUTH KOREA South Korea will remain focused on expanding trade and reviving negotiations with North Korea. Pyongyang has shown signs of renewed hostility, and the move toward resuming denuclearization talks has seen some delays, but the move toward talks has not collapsed. Meanwhile, the South Korean international economic and free trade agenda will see a boost with the Korea-European Union free trade agreement taking effect July 1, and the U.S. House of Representatives possibly voting to ratify the long-awaited Korea-U.S. Free Trade Agreement, which was renegotiated by the Obama administration in early 2011. THAILAND Thailand's highly anticipated and hotly contested general elections will occur July 3. There is some risk of voting booth violence, but the real threat to stability comes after the election results. Former prime minister Thaksin Shinawatra's party, Pheu Thai, is leading the incumbent Democrat Party by as much as 13 points in opinion polls, and has won the past four elections. Thaksin's sister, Yingluck, gave the opposition party a boost by running for prime minister. The Democrat Party and the Thai military have both warned the public against re-electing Thaksin's supporters. There is concern they will grant amnesty for Thaksin and his partners, and further challenge the Bangkok elite establishment. However, short of a landslide, Pheu Thai could have trouble forming a ruling coalition, due to military maneuvering behind the scenes. Whether the pro-Thaksin group is regains power, or is deprived of power, further instability will ensue. The loser of elections will begin mounting a campaign to destabilize the new government, though it may not be launched as soon as July. Separately, Thailand's first liquid natural gas (LNG) import facility is expected to begin operating in July to import 1 million metric tons of LNG from Qatar. MALAYSIA/SINGAPORE/INDONESIA/AUSTRALIA Immigrant workers will attract attention throughout the region in July. Malaysia on July 1 will stop hiring foreign workers while launching a one-month program to grant amnesty to about two million existing illegal migrant workers. Singapore and Australia will simultaneously raise fees and required qualifications for immigrant workers in a bid to shift the flow to more high-skilled immigrants. Separately, Malaysia may see street rallies from NGOs calling for free and fair elections, and counter-rallies from establishment-supporters, with politics heating up in anticipation of a highly anticipated general election that STRATFOR sources say is likeliest to be held in July, in Sept-Nov, or else in 2012. Indonesia is threatening to cut off cattle imports from Australia if it does not lift a temporary export ban by the second week of July. The two differ over regulatory standards, with Indonesia want to be able to import live cattle, at lower weight than required by Australia, and to put a cap on frozen beef imports, whereas Australia is arguing for tougher regulation on animal welfare and point of origin; the approach of Ramadan has complicated the time frame. It seems unlikely that Indonesia would go so far as to impose a full ban on cattle exports. Separately in Indonesia, state port operator PT Pelindo II will sign an MOU with a consortium to build a $94 million container port in Sorong, West Papua. -- Matt Gertken Senior Asia Pacific analyst US: +001.512.744.4085 Mobile: +33(0)67.793.2417 STRATFOR
http://www.wikileaks.org/gifiles/docs/30/3092935_-eastasia-windows-1252-q-neptune_-96_unedited-.html
CC-MAIN-2014-52
en
refinedweb
19 November 2009 15:29 [Source: ICIS news] By Nigel Davis LONDON (ICIS news)--The chemicals sector faces “total re-invention” as the ?xml:namespace> Speaking to industry executives at the Chemical Industry Association’s (CIA’s) annual dinner in The government expects chemicals to remain a tough business as the global economy recovers from recession as it competes with producers in industrialising economies and faces the challenges of tougher carbon control. The industry has lost 10% of its workforce in the current crisis and does not expect to return to pre-recession levels of output until at least 2012, according to the CIA. “As we search for those infamous ‘green shoots of recovery,’ we have an opportunity to change how the UK encourages manufacturing and return it to the heart of our economy,” CIA chief executive Steve Elliott says in a blueprint for UK manufacturing published by the association to coincide with its major annual gathering. The CIA is seeking wider recognition of the role of chemicals and pharmaceuticals in an advanced manufacturing economy and a “joined-up” approach from government and other agencies to support The It has called for a new emphasis on education for science, a wider recognition of technical skills and an “appropriate” regulatory regime focused on outcomes and not processes. “We urge support for our sector’s role in delivering a low-carbon economy. Central to this will be a challenging but pragmatic and simplified system of incentives. This must give clear signals to business while still preserving the competitive position of Government commitment to equipping industry to enable the
http://www.icis.com/Articles/2009/11/19/9265653/uk-chems-face-total-re-invention-in-low-carbon-drive-mandelson.html
CC-MAIN-2014-52
en
refinedweb
Working through the specific link that was clicked. The challenge was complicated by the fact that the taskflows had to be completely independent, of each other and of the page in which they were embedded. The general approach with a taskflow that has a link that when clicked should result in effects outside the taskflow is to have the taskflow publish a contextual event with appropriate payload. It is then up to the page that embeds the taskflow in a region to consume and handle the event. That was the easy part. The event handler can read the payload from the event, store values in a managed bean and navigate to the page that contains the drill-down-target-taskflow. This page has configured the input parameters for this second taskflow using EL expressions that refer to the managed bean that was populated by the event handler. Sounds straightforward, does it not? What then is the catch in this story? It turned out to be not so straightforward to programmatically arrange for navigation to the specified page. WebCenter does not work according to the standard JSF navigation model, but instead uses its own Navigation Models that contain pages and other node-types. Usually navigation is performed through the activation of an action component (command link, command button) that invokes the processAction operation on the Navigation Context. <af:commandLink <f:attribute name="node" value="#{navigationContext.navigationModel['modelPath=/oracle/webcenter/portalapp/navigations/programmaticNavigationModel']. </af:commandLink> However, I did not find any clear documentation on how to do navigation programmatically. After a lot of trial and error and reading through a substantial number of OTN forum threads and blog articles, I put together the following contextual event handler that performs programmatic navigation: public class PortalEventsHandler { private static final String RECORD_DETAILS_PAGE_EL = "#{navigationContext.navigationModel['modelPath=/oracle/webcenter/portalapp/navigations/programmaticNavigationModel'].node['p1']}"; private static final String CURRENT_CE_CONTEXT_EL = "#{currentCEContext}"; private static final String RECORD_ID_PAYLOAD_PARAMETER = "recordId"; public void handleDrilldownEvent(Map payload) { // drill down needs to take us to page 1 (with id p1 in the programmatic navigation model) // set the selected recordId in the currentCECntext Integer recordId = (Integer)payload.get(RECORD_ID_PAYLOAD_PARAMETER); CurrentCEContext ceContext = (CurrentCEContext)JSFUtils.resolveExpression(CURRENT_CE_CONTEXT_EL); ceContext.setCurrentRecordId( recordId); // the processAction method that we need to use for navigation requires an ActionEvent // as input; this ActionEvent needs to have a Component as a source // this component should have an attribute called node that contains a node from a NavigationModel // 1. create the component to put into the Action Event Application application = FacesContext.getCurrentInstance().getApplication(); HtmlCommandButton submitButton = (HtmlCommandButton)application.createComponent(HtmlCommandButton.COMPONENT_TYPE); // 2. find the page to navigate to - the nomination details page SiteStructureResource node = (SiteStructureResource)JSFUtils.resolveExpression(NOMINATION_DETAILS_PAGE_EL); // 3. create the ActionEvent and put the page node into it ActionEvent actionEvent = new ActionEvent(submitButton); actionEvent.getComponent().getAttributes().put("node", node); // 4. get hold of the NavigationContext to invoke the processAction on NavigationContext navContext = SiteStructureContext.getInstance(); navContext.processAction(actionEvent); } What is happening here is that on the fly an ActionEvent is created – since that is what the processAction method expects for an input. The ActionEvent is associated with a UIComponent – also created on the fly – because that is what ActionEvents are and because this component is the carrier of the node attribute that contains the node from Navigation Model to which navigation must be performed. Hi, Can you share a sample application for this? Hi ! I’ve made the exact same thing (tnks a lot for your code) but nothing happens, it stays in the same page. In my case i’m reacting to a js286 event published by one of my portlets. The page that hosts the portlet catches that event (that is working fine) but when i use your code I’m not able to jump to other page. There is some kind of precondition to execute this code that i’m not aware of ? What I’m in doing a datacontrol is this: package vdf.myvdf.ui.portal.wc; import javax.el.ELContext; import javax.el.ExpressionFactory; import javax.el.ValueExpression; import javax.faces.application.Application; import javax.faces.component.html.HtmlCommandButton; import javax.faces.context.FacesContext; import javax.faces.event.ActionEvent; import oracle.adf.model.binding.DCBindingContainerValueChangeEvent; import oracle.adf.view.rich.context.AdfFacesContext; import oracle.webcenter.navigationframework.NavigationContext; import oracle.webcenter.portalframework.sitestructure.SiteStructureContext; import oracle.webcenter.portalframework.sitestructure.SiteStructureResource; public class EventHandler { public EventHandler() { super(); } public void handleEventObjectPayload(DCBindingContainerValueChangeEvent customPayLoad) { String changedDepartmentName = (String)customPayLoad.getNewValue(); handleEventStringPayload(changedDepartmentName); } public void handleEventStringPayload(String customPayLoad) { FacesContext facesCtx = FacesContext.getCurrentInstance(); Application application = facesCtx.getApplication(); ELContext elCtx = facesCtx.getELContext(); ExpressionFactory expFactory = application.getExpressionFactory(); ValueExpression ve = expFactory.createValueExpression( elCtx, "#{navigationContext.navigationModel['modelPath=/oracle/webcenter/portalapp/navigations/default-navigation-model'].node['home']}", Object.class); SiteStructureResource node = (SiteStructureResource)ve.getValue(elCtx); HtmlCommandButton submitButton = (HtmlCommandButton)application.createComponent( HtmlCommandButton.COMPONENT_TYPE); ActionEvent actionEvent = new ActionEvent(submitButton); actionEvent.getComponent().getAttributes().put("node", node); NavigationContext navContext = SiteStructureContext.getInstance(); navContext.processAction(actionEvent); } } Hi Lucas, Great article and a wonderful tip. However, the code snippet is missing some bits like the definition of NOMINATION_DETAILS_PAGE_EL. Would it be possible to put together a small sample around it and make available? Thanks Z.
http://technology.amis.nl/2011/12/01/programmatic-navigation-in-webcenter-portal-application-do-processaction-from-java/
CC-MAIN-2014-52
en
refinedweb
RESTfulie - A Gem To Create Hypermedia Aware Services And Clients Guilherme Silveira writes to InfoQ on the release of a ruby gem that makes developing hypermedia aware services and clients that consume them a breeze. He said Much has been spoken about what is and what are the advantages of using rest/restful ideas in one's application. Last year, Mark Baker wrote about hypermedia content in restful applications. There are also a few texts on more formal attempts to define HATEOAS and its advantages. Although being some good usage of the web in order to create web-based services, it is still missing the very best part of our every day life: hyperlinks and hypermedia content. He goes on to describe an example of defining an order that goes through a well defined set of transitions for e.g. from example from unpaid to paid etc. It also allows the mapping of various transitions to a corresponding actions... class Order < ActiveRecord::Base state :unpaid, :allow => [:latest, :pay, :cancel] state :cancelled, :allow => :latest transition :latest, {:action => :show} transition :cancel, {:action => :destroy}, :cancelled transition :pay, {}, :preparing end Which generates, for example, an atom based resource representation that has embedded hypermedia: <order> <product>basic rails course</product> <product>RESTful training</product> <atom:link xmlns: <atom:link xmlns: <atom:link xmlns: </order> And allowing the client to invoke dynamically created methods from consuming that resource representation: order = Order.from_web '' order.pay(payment) Jim Webber, on whose RESTBucks article and forthcoming REST book has been inspiration for the creation of this gem, said The mulit-talented Guilherme Silveira with Adriano Almeida and Lucas Cavalcanti, has been coding up a storm on the RESTful services front. [...] More importantly, they’ve written up a generic client that can be used to explore that protocol. They’re hosting the demo service on GAE, and have released their code for all to enjoy on GitHub. Fabulous work guys, and very timely too Savas Parastatidis, the co-author of the book had the following comment I can’t wait for our book to finish so that everyone can check out our discussion of hypermedia and the stuff we’ve built. It’s really great to see Restfulie taking a very similar approach to ours. Detailed examples of the gem usage for creating RESTful services and clients that consume those services are available at the GitHub project repository. I must be dreaming by Jean-Jacques Dubray order = Order.from_web resource_uri puts "Order price is #{order.price}" order.pay payment # sends a post request to pay this order order.cancel What? Actions? Dilip, are you sure you gave us the right URL? That can't be RESTful? More seriously, what have we gained from Web Services? How does a client "adapt" to a changing lifecycle (on the "server" side)? It's kind of sad that for the last several years the RESTafarians have talked about the "uniform interface" and all we do now in RESTafaria is encoding actions behind the HTTP verbs. That's called progress? What a waste of time, what a bunch of boloney. What's next? a contract? Ah no, they already have one... class Order < ActiveRecord::Base def following_transitions transitions = [] transitions << [:show, {}] transitions << [:destroy, {}] if can_cancel? transitions << [:pay, {:id => id}] if can_pay? transitions << [:show, {:controller => :payments, :payment_id => payment.id }] if paid? transitions end end On the positive side, it's good so see yet another evidence of the emergence of the state machine / entity lifecycle in connected systems. Re: I must be dreaming by Dilip Krishnan More seriously, what have we gained from Web Services? How does a client "adapt" to a changing lifecycle (on the "server" side) For one, the client no longer relies on "cool urls" and the server can "guide" the clients as the service as it progresses thru' the business process. Its WS-* speak its similar to using a UDDI service to provide an indirection for service locators, only a much lighter weight way of doing it. On the question of what has been gained from web services, I would say the fact that we can have an object oriented programming model via some ruby meta-programming magic. Tho' whether thats a good thing or not is debatable. Re: I must be dreaming by Dilip Krishnan What? Actions? Dilip, are you sure you gave us the right URL? That can't be RESTful? The link to the gem has been updated. Thank you for pointing that out. Re: I must be dreaming by Jean-Jacques Dubray Re: I must be dreaming by Dilip Krishnan Could you elaborate on what it means for a programmatic client to be "guided"? If you take the following example say you GET an order which is in a particular state that allows you to pay or cancel (demonstrated by the link/rel) <order> <product>basic rails course</product> <product>RESTful training</product> <atom:link xmlns: <atom:link xmlns: <atom:link xmlns: </order> If the seller decides to offer a coupon perhaps, for orders that meet a certain criteria; then they could add another transition. Or if the service changes the tracking uri. When the client GETs the latest order for tracking he/she sees a new available "action" and uri. <order> <product>basic rails course</product> <product>RESTful training</product> <atom:link xmlns: <atom:link xmlns: <atom:link xmlns: <atom:link xmlns: </order> This is what I mean buy guide. do you imply that somehow the server can change the order lifecycle and the client code will somehow know what to do? that sounds like science fiction. The intention is not to imply that this is a mechanism to automate client interactions, for e.g. if a BOT were tasked to place 20 orders, they will not suddenly recognize that the order lifecycle has a new coupon linked to it and know exactly what to do about it and alter behavior on the fly. That would be science fiction :) The implication is just that the client is not bound to a particular lifecycle uri. After the initial request the service can guide the clients navigation accross the "known" business process states. The client would still break if the lifecycle were to change for e.g if we add an approval process to the order after its paid etc. Re: I must be dreaming by Jean-Jacques Dubray thanks for the precisions. >> The intention is not to imply that this is a mechanism to automate client interactions makes sense >> The implication is just that the client is not bound to a particular lifecycle uri. No sure I am following what you are saying, again, assuming that actions are well known to both the client and server, I am not sure clients would easily invoke these actions on dynamic endpoints. This presents such a security threat (injection) that you would think no one in their right mind would want to do that. Re: I must be dreaming by Dilip Krishnan No sure I am following what you are saying For some reason my formatting isnt showing up in firefox... but notice how the "latest" action is now in the second listing. Consequently the server has now changed the uri to access the "latest" orders without affecting the client. May not be the best example, but it shows how the server can evolve independently from the client, thus removing the coupling of the client to the action uri. This presents such a security threat (injection) that you would think no one in their right mind would want to do that. The example is probably not elaborate enough to account for security scenarios, which will most likely involve SSL/OAuth etc. Having take care of that how is this any different from an ESB? Re: I must be dreaming by Jean-Jacques Dubray I was hoping a less mundane application of HATEOAS, but I guess we agree that when the client is a software agent, there is not much that can be done in terms of adapting to changing states and transitions. In the end, there is no gain with respect to Web Services, since we are just changing the encoding of the actions. We have circled back where we started, wasting a couple of years in the process and pushing countless people to CRUD. Now, they can invoke actions again. What a progress. I suggest we could give a hint to the server with a custom HTTP header called RESTaction... We could also suggest a totally new and cool pattern, the wrapped resource representation (WRR) pattern whereby we wrap the resource representation that we post as part of an action invocation with a root element which is named after the action we invoke. That way we keep everything conveniently in one document for further downstream processing. That way we could actually route the action invocation in the back-end, rather than always wiring it to a Java or C# method. Of course, by using REST we have lost everything else such as bi-directional interfaces, asynchrony, assemblies, orchestration... but who cares? I would strongly encourage Guilherme to explain how "events" fit in the picture. The good news is that Guilherme understands the concept of a resource lifecycle and the difference with a business process (unlike Jim and Savas). Since a resource lifecycle is made up of states and events are the occurence of a state, how can REST handle events? Again, minor (and rather annoying) architecture detail. Re: I must be dreaming by Guilherme Silveira I am sorry about the delayed (and long) response. In the end, there is no gain with respect to Web Services, since we are just changing the encoding of the actions. I believe that even if, in the end, the result would be the same, one can not say that taking different paths to achieve the same result is the same and therefore a waste of years of research. If that was true, why would other companies try to build cars when we already have american ones doing so? They have the technology and it would be enough if they provide us all with it: we would drive our cars - as we do today. But koreans, japaneses, germans, chineses, french... have also their own cars, which have the same goal in mind... are we back at the same place? The process that allows us to create different solutions for the same problems is the basis of innovation. And it is true not only for private-research based technology as cars, but with open ones too. If the world was a place where people would just adopt one solution and never try creating different ones for the same type of problem, corba could be the only solution for distributed systems and, following that line of thought, why would someone ever try webservices? After all, corba might solve your problems... but what would happen then? You would have to adapt your system to corba's limitations or adapt corba to your system's limitations. In the end, you would have your system up and running, but the process that took you there brought you new choices that might be cheaper and faster (or cost more and slower) than the already existing ones. The same holds for programming languages and pretty much every other possible human evolution: we could still be living with the same technology from the 80's... but everything would have a much higher cost, being therefore less profitable and scalable (thinking about humankind reach). Summing up, even if in the end, two ideas (web services and what has been debated), achieve the same goal, the pure existence and its competitors development process allows technologies to evolve. An old technology which does not get ideas from others upcoming ones is faded to be outdated. We have circled back where we started, wasting a couple of years in the process and pushing countless people to CRUD. Now, they can invoke actions again. What a progress. Therefore I am afraid I can not agree with this sentence. There is no waste of research or money even if the result was exactly the same. But again, the result is not the same: it involves costs, time-to-production, quality of code and so on. And that's where I believe we might be helping. Creating a system using a full WS-stack nowadays still takes a lot of effort although some technologies (as soap4r) can really help you out. Restfulie might help projects pretty much in the same way that SpringMVC does in the Java web frameworks path. It is not an commitee-based standardized implementation, but it helps companies on the whole world solve their problems. As there is space for Spring MVC and JSR-based framework implementations on the market, WS-stack-based and restfulie (or any other framework) based solutions can help our clients solve their problems, we just have to use the one that fits better depending on the client's reality. Regards Guilherme Re: I must be dreaming by Guilherme Silveira Again, sorry for the delayed response... Having take care of that how is this any different from an ESB? I believe your question shows one of the big issues on the previous response. I will try to ellaborate it here. Even if the resulting system works the same way as if I had chosen another language, technology or bought a complete solution, the process that took me there varies. I can have my team improve their coding skills, tools-usage or have them learn a new technology while solving my companie's problem by implementing the solution. The result is the same, but the money spent, time requirement and my client's company intellectual growth depends on the technology I will choose. If one believes that both lead to the same solution (I don't but there is no need to argue), there is still no need to show the key points that differ on implementing a solution using restfulie, java RMI, http based cruds or a BPM/BPEL solution as they are quite clear: the code, the tools, the languages in which the system is described, everything differs. Regards
http://www.infoq.com/news/2009/11/restfulie-hypermedia-services
CC-MAIN-2014-52
en
refinedweb
MagicJack/Contribute From Wikibooks, open books for an open world Anyone may contribute to this book. And you are encouraged to do so. Below are guidelines we should all try to follow, and resources useful for beginners to Wiki editing. Guidelines[edit] Please try to follow these general guidelines: - If adding to the FAQ, and it's a short answer, add it inline. Otherwise, create a separate sub-page. - If a FAQ answer is complex, consider whether some of it could be a "How-To" and reused for other purposes. - When creating sub-pages, please name them using the full path. This will ensure that the sub-page will be contained within this book's namespace, and you won't create a new "book." - You can see how this should be done by editing an existing page which refers to a sub-page. - To create a sub-page, just edit the parent page, create the link to the sub-page, save the parent page. Then, follow the link to sub-page. You will be presented a page to create it. - Please edit similar pages to get an idea what a new sub-page should contain. - Notice that the main "MagicJack" page contains special wiki tags that don't appear in sub-pages. - Don't be afraid to make changes. If you make a mistake, it can be undone. Be bold! - If you're unsure of a change you want to make, or if it's a large change, or something that might be controversial (or upset another contributor), please use the "Discussion" feature to talk about it and gain consensus. - If you use the "Discussion" feature, please sign your comments with these characters --~~~~. (This can be inserted using one of the icons available when editing text.). However, please do not sign actual contributions to pages. - If you have special interest in a page, please watch for changes and discussions by using the "Watch" feature at the top of that page. - For other questions, please contact Az2008 using the discussion tab on that page, or email him here Resources[edit] The following are useful for those new to wiki editing: The following are useful references for Wikibooks: The following are more esoteric topics: - How to interpret page history - Glossary - Manual of Style - Help:Pages - Subpage convention - Naming policy - What is a module The following Wikibooks policies may be useful:
http://en.wikibooks.org/wiki/MagicJack/Contribute
CC-MAIN-2014-52
en
refinedweb
NAME driver -- structure describing a device driver SYNOPSIS #include <sys/param.h> #include <sys/bus */ { 0, 0 } }; static driver_t foo_driver { "foo", foo_methods, sizeof(struct foo_softc) }; static devclass_t foo_devclass; DRIVER_MODULE(foo, bogo, foo_driver, foo_devclass, 0, 0); DESCRIPTION Each driver in the kernel is described by a driver_t structure. The structure contains the name of the device, a pointer to a list of methods, argument as the last two arguments. SEE ALSO devclass(9), device(9), DEVICE_ATTACH(9), DEVICE_DETACH(9), DEVICE_IDENTIFY(9), DEVICE_PROBE(9), DEVICE_SHUTDOWN(9), DRIVER_MODULE(9) AUTHORS This manual page was written by Doug Rabson.
http://manpages.ubuntu.com/manpages/oneiric/man9/driver.9freebsd.html
CC-MAIN-2014-52
en
refinedweb
Tools for Visual C++ Development As part of the Visual Studio Integrated Development Environment (IDE), Visual C++ shares many windows and tools in common with other languages. Many of those, including Solution Explorer, the Code Editor, and the Debugger, are documented in the MSDN library under Application Development in Visual Studio. Often, a shared tool or window has a slightly different set of features for C++ than for the .NET languages or Javascript. Some windows or tools are only available in Visual Studio Pro or Visual Studio Ultimate. This topic introduces the Visual Studio IDE from the perspective of Visual C++, and provides links to other topics relevant to Visual C++. In addition to shared tools in the Visual Studio IDE, Visual C++ has several tools specifically for native code development. These tools are also listed in this article. For a list of which tools are available in each edition of Visual Studio, see Visual C++ Tools and Templates in Visual Studio Editions. In all editions of Visual C++, you organize the source code and related files for an executable ( such as an .exe, .dll or .lib) into a project. A project has a project file in XML format (.vcxproj) that specifies all the files and resources needed to compile the program, as well as other configuration settings, for example the target platform (x86, x64 or ARM) and whether you are building a release version or debug version of the program. A project (or many projects) are contained in a Solution; for example, a solution might contain several Win32 DLL projects, and a single Win32 console application that uses those DLLs. For general information about projects, see PAVE: Managing Solutions and Projects. Project templates Visual C++ comes with several project templates, which contain starter code and the settings needed for a variety of basic program types. Typically you start by choosing File | New Project to create a project from a project template, then add new source code files to that project, and/or start coding in the files provided. For information specific to C++ projects and project wizards, see Creating and Managing Visual C++ Projects. Application wizards Visual C++ provides wizards for some project types. A wizard guides you step-by-step through the process of creating a new project. For more information, see Creating Desktop Projects By Using Application Wizards. If your program has a user interface, one of the first tasks is to populate it with controls such as buttons, list boxes and so on. Visual Studio Pro and above includes a visual design surface and a toolbox for each flavor of C++ application. Visual Studio Express includes tools for Windows Stores. No matter which type of app you are creating, the basic idea is the same: you drag a control from the toolbox window and drop it onto the design surface at the desired location. In the background, Visual Studio generates the resources and code required to make it all work. For more information about creating a user interface for a Windows Store, see _____. For more information about creating a user interface for an MFC application, see MFC Desktop Applications. For information about Win32 Windows programs, see Win32 Windows Applications (C++). For information about Windows Forms applications with C++/CLI, see Creating a Windows Forms Application By Using the .NET Framework (C++). Semantic colorization After you create a project, all the project files are displayed in the Solution Explorer window.. Intellisense The code editor also supports several features that together are known as Intellisense. You can hover over a method and see some basic documentation for it. After you type a class variable name and a . or ->, a list of instance members of that class appears. If you type a class name and then a ::, a list of static members appears. When you start typing a class or method name, the code editor will offer suggestions to complete the statement. For more information, see Using IntelliSense. Code snippets You can use Intellisense code snippets to generate commonly-used or complicated code constructs with a shortcut keystroke. For more information, see Code Snippets. The VIEW menu provides access to many windows and tools for navigating around in your code files. For detailed information about these windows, see Viewing the Structure of Code. Solution Explorer In all editions of Visual Studio, use the Solution Explorer pane to navigate between the files in a project. Expand a .h or .cpp file icon to view the classes in the file. Expand a class to see its members. Double-click on a member to navigate to its definition or implementation in the file. Class View and Code Definition Window Use the Class View pane to see the namespaces and classes across all the files, including partial classes. You can expand each namespace or class to see its members and double-click on the member to navigate to that location in the source file. If you open the Code Definition Window, you can view the definition or implementation of the type when you choose it in Class View. Object Browser Use Object Browser to explore type information in Windows Runtime components (.winmd files), .NET assemblies and COM type libraries. It is not used with Win32 DLLs. Go To Definition/Declaration Press F12 on any API name or member variable to go to its definition. If the definition is in a .winmd file (for a Windows Store app) then you will be shown the type info in the Object Browser. You can also Go To Definition or Go To Declaration by right-clicking on the variable or type name and choosing the option from the context menu. Find All References In a source code file, right-click with the mouse cursor over the name of a type or method or variable, and choose Find all References to return a list of every location in the file, project or solution where the type is used. Find All References is intelligent and only returns instances of the same identical variable, even if other variables at different scope have the same name. Architecture Explorer and Dependency Graphs (Ultimate) Use Architecture Explorer to view relationships between various elements in your code. For more information, see Find Code with Architecture Explorer. Use dependency graphs to view dependency relationships. For more information, see How to: Generate Dependency Graphs for C and C++ Code. The term "resource" in the context of a Visual Studio desktop project includes things such as dialog boxes, icons, localizable strings, spash screens, database connection strings, or any arbitrary data that you want to include in the executable file. Visual Studio. For more information on adding and editing resources in native desktop C++ projects, see Working with Resource Files. For more information about resources in a Windows Store, see, Press Ctrl + Shift + B to compile and link a project. Visual Studio uses MSBuild to create executable code. You can set many build options under Tools | Options | Projects and Solutions. Build errors and warnings are reported in the Error List (Ctrl +\, E). Additional information is sometimes shown in the Output Window (Alt + 2). For more information, see Building C++ Projects in Visual Studio. You can also use the Visual C++ compiler (cl.exe) and many other build-related standalone tools such as NMAKE and LIB directly from the command line. For more information, see Building on the Command Line and C/C++ Building Reference. Visual Studio includes a unit test framework for both native C++ and C++/CLI. For more information, see Verifying Code by Using Unit Tests and Writing Unit tests for C/C++ with the Microsoft Unit Testing Framework for C++ You can debug your program by pressing F5 when your project configuration is set to Debug. While debugging you can set breakpoints by pressing F9, step through code by pressing F10, view the values of specified variables or registers, and even in some cases make changes in code and continue debugging without re-compiling. For more information, see Debugging in Visual Studio. You deploy a Windows Store to customers through the Windows Store through the PROJECT | Store menu option. Deployment of the CRT is handled automatically behind the scenes. For more information, see Selling Apps. When you deploy a native C++ desktop application to another computer, you must install the application itself and any library files that the application depends on. Visual C++ in Visual Studio 2012 gives you three ways to deploy the Visual C++ runtime with an application: central deployment, local deployment, or static linking. For more information, see Deploying Native Desktop Applications (Visual C++). For more information about deploying a C++/CLI program, see .NET Framework Deployment Guide for Developers,
http://msdn.microsoft.com/en-us/library/hh967574(d=printer).aspx
CC-MAIN-2014-52
en
refinedweb
. using System; using System.Reflection; class Asminfo1 { public static void Main(string[] args) { Console.WriteLine ("\nReflection.MemberInfo"); //Get the Type and MemberInfo. //Insert the fully qualified class name inside the quotation marks in the following statement. Type MyType =Type.GetType("System.IO.BinaryReader"); MemberInfo[] Mymemberinfoarray = MyType.GetMembers(BindingFlags.Public|BindingFlags.NonPublic|BindingFlags.Static|BindingFlags.Instance|BindingFlags.DeclaredOnly); //Get and display the DeclaringType method. Console.Write("\nThere are {0} documentable members in ", Mymemberinfoarray.Length); Console.Write("{0}.", MyType.FullName); foreach (MemberInfo Mymemberinfo in Mymemberinfoarray) { Console.Write("\n" + Mymemberinfo.Name); } } } Show:
http://msdn.microsoft.com/en-us/library/y5x1feba(d=printer,v=vs.90).aspx
CC-MAIN-2014-52
en
refinedweb
08 February 2008 02:30 [Source: ICIS news] SINGAPORE (ICIS news)--These were the top stories at 1:00 GMT in the following Northeast Asia/ ?xml:namespace> These stories have been taken from the Internet editions of the papers. ICIS has not verified these stories and does not vouch for their accuracy. Front page 4 held in deadly assault on wrestler The Aichi prefectural police on Thursday arrested former sumo stablemaster Tokitsukaze and three apprentices on suspicion of beating a novice wrestler so badly that he died. Police superintendent fired for role in spiritual group A police superintendent was discharged Thursday for his involvement in a "spiritual healing" group accused of illicit sales practices that bilked desperate and superstitious customers. Business & Industry 117 companies downgrade earnings forecasts for 2007 More than 100 listed companies revised down their forecasts for fiscal 2007 earnings, citing the FOOD: JT expects loss after 'gyoza' scare Japan Tobacco (JT) Inc. said Thursday it expects to report an operating loss in its frozen-food business for the year ending March because of the food-poisoning scandal involving frozen gyoza dumplings imported from Front page Chinese people are looking forward to an auspicious Year of the Rat, as the country recovers from transport and power chaos triggered by a long spell of bad weather. Fighting Blizzard Tornadoes in US South kill at least 55 Tornadoes flattened the land and shattered lives across the US South on Tuesday and Wednesday, killing at least 55 people and injuring more than 150. Business & Industry A senior Philippine diplomat said on Thursday that more and more Chinese tourists are considering the Power mostly restored in snow-stricken Electricity was partly or fully restored to 162 snow-stricken counties in Front page Give US Deputy Secretary of State John Negroponte urged Business & Industry Yahoo brooding over Microsoft Yahoo chief executive Jerry Yang told employees on Wednesday that the struggling Internet pioneer is still examining ways to avoid a takeover by rival Microsoft Corp. Softbank CEO backs Microsoft's offer Softbank Corp chief executive officer Masayoshi Son said that Microsoft Corp, which made a $44.6bn unsolicited bid for Yahoo Inc, could boost the value of the Yahoo brand. Front page ( no news updates ) NEW STRAITS TIMES, Front page Avenue of hope for settlers' children Her parents could not afford to send her to a private institution, she said. But then they told her about the Generasi Baru programme, set up by The LimKokWing University of Creative Technology and Felda to provide tertiary education to the children of settlers and her hopes were rekindled. Rural school transformed Batu Kikir is certainly not a cluster school; neither does it enjoy Smart or even premier status. It is not well known and many Malaysians would not have heard of it, or the sleepy hollow it is nestled within. Business and Industry Windfall for Felda settlers with new payment scheme Felda settlers who replant oil palm and rubber are in for a windfall beginning next month after the government yesterday announced payment increases of between 25% and 44%. BUSINESS TIMES, Front page US Congress passes stimulus bill, sends to Bush The US Congress passed a nearly $152bn plan on Thursday to stave off an election-year recession by sending government rebate checks to millions of Americans and providing business tax incentives to boost spending. Investors scurry on eve of Year of Rat Investors in Asian stock markets yesterday said a nervous goodbye to the Year of the Pig by diving for cover in the face of renewed Business & Industry ( no news update ) Front page Army pans disarmament plan Army chief Anupong Paojinda has made it clear he has strong reservations against disarming civilians and, eventually, members of the security forces in the deep South, saying it is the insurgents who should hand over their guns. Looking back on the junta’s rein Business & Industry ICT chief prizes neutrality The new Information and Communications Technology (ICT) minister says his clean background and independence from business will be an advantage in his new posting. Suvit pledges improvements in three areas Attracting foreign investment, developing small and medium enterprises (SMEs) and reducing industrial pollution are the priorities of the new Industry Minister Suvit Khunkitti.
http://www.icis.com/Articles/2008/02/08/9099278/in-fridays-asia-papers.html
CC-MAIN-2014-52
en
refinedweb
15 November 2012 08:21 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The company will invest yuan (CNY) 19bn ($3bn) in the project and the construction period is expected to be four years, the statement added. The Inner Mongolia Development and Reform Commission has approved the construction earlier this week, it added. The company did not disclose when construction work will be started or the start-up date of the project. (
http://www.icis.com/Articles/2012/11/15/9614221/chinas-inner-mongolia-yitai-chemical-to-build-fine-chemical-project.html
CC-MAIN-2014-52
en
refinedweb
hiberante installation - Hibernate hiberante installation how to install hibernate s/w in windows xp. Hi Friend, Please visit the following link: Hope that it will be helpful for you. Thanks Ask for latest version for hibernate Ask for latest version for hibernate any new version for hibernate after 3.0, if yes can u give some exmp for latest version of hibernate Insert This tutorial will help you to learn how to insert data into table by using hibernate : Dynamic-insert This tutorial contains description of Hibernate dynamic-insert Ask Questions Ask Questions  ..., professions, students and learners, we have initiated a new service ‘Ask Question’. Using this new service, our visitors can ask any sort of question ask how function jCalender - Date Calendar ask how function jCalender d, Halo friend, i want to ask how to make this script can run and call JCalender. I want to know how to make private this code.i has already insert plugin in my netbeans. try Ask Questions with Options Ask Questions with Options using Java In this section, we are going to ask five questions one after the other with four options to test the user. For this, we...; } public String getOp4() { return op4; } } public class AskQuestions Ask Hibernate Questions Online Ask Hibernate Questions Online  .... Feel free to ask questions on Hibernate related problems. In the move... is open for all. Ask Hibernate related Hibernate : Bulk Insert/Batch Insert This tutorial contains description of Hibernate bulk insertion(Batch insertion -insert? Hi friend, It should be neccesary to have both a namespace property and a tagged value to allow dynamic-insert and dynamic-update.... Thanks hibernate - Hibernate hibernate Hai,This is jagadhish I have a problem while developing insert application of Hibernate.The application is compiled,while running.... For read more information: Software Questions and Answers Questions - Ask Hibernate Interview questions and browser the answers... online Discuss Software development questions, ask your questions and get answers... of answers to common programming problems. Ask Questions | Browse Criteria Expression (and) | Hibernate Criteria Expression (or) | Insert Data...Home | About-us | Contact Us | Advertisement | Ask Questions...; Tutorial Section Introduction to Hibernate 3.0 | Hibernate Architecture Problem in running first hibernate program.... - Hibernate ? ----------------------------------------------------------------------- Hibernate: insert...Problem in running first hibernate program.... Hi...I am using... programs.It worked fine.To run a hibernate sample program,I followed the tutorial below code - Hibernate Hibernate code can you show the insert example of Hibernate other... of example related to hibernate... Thanks helpful Insert Data into Database Using Hibernate Native SQL Insert Data into Database Using Hibernate Native SQL... operations like insert, update, delete and select. Hibernate provides... how you can use Native SQL with hibernate. You will learn how to use Native hibernate Excetion - Hibernate the same error that Hibernate: insert into login (uname, password) values...:// It will be helpful for you...hibernate Excetion The database returned no natively generated Ask iBatis Questions Online Ask iBatis Questions Online  ... model. iBatis is also a popular framework like Hibernate... relating to programming, coding, implementing and using. Ask any iBatis hibernate sql error - Hibernate hibernate sql error Hibernate: insert into EMPLOYE1 (firstName..., Please visit the following links: Hope Ask about java Ask about java Create a java program for add and remove the details of a person, which is not using by database, a simply java program. If possible, please wil it in switch case Hibernate code - Hibernate Hibernate code firstExample code that you have given for hibernate to insert the record in contact table,that is not happening neither it is giving... inserted in the database from this Ask about looping in database Ask about looping in database Good afternoon, I want to ask something, Now i have 2 tables,name of the table is RULE and Heritage. Table heritage...=True,A2=True : code is the same data Rule and Heritage. I want to ask firstExample not inserting data - Hibernate hibernate firstExample not inserting data hello all , i followed the steps in hiberante tutorial i.e FirstExample.java as mentioned in tutorial my... for more information. Thanks.  Ask java count Ask java count Good morning, I have a case where there are tables sumborrowbook fieldnya: codebook, bookname, and sumborrowbook . I want to make the results: for example: | code book | name of book | sum | | b001 ask - Java Beginners ask dear how to "print out" into a file ? regard suhadi Hi friend, Please explain properly requirement. I am sending simple code according to your requirement. import java.io.*; public class Ask Programming Questions Online Ask Programming Questions Online With the rapid development of technology..., SOA questions, Hibernate questions, Struts questions, JavaFX questions Ask date difference Ask date difference Hello, I have a problem about how to calculate date, the result from this code is not complete , this is my code . please help me. thank you public void a(){ String date1 = jTextField33.getText(); String Ask SQL Questions Online Ask SQL Questions Online Structured Query Language in short (SQL... the user to execute, retrieve, insert, update and delete new records, new tables Association Hibernate Association 1) <bag name="product" inverse="true... name="dealer" class=" net.roseindia.Dealer" column="did" insert="false" update... cascade column insert update JSP Radio Button MySQL insert - JSP-Servlet however I wanted to ask you if there are tutorials or perhaps you can help... in the backend table there is only one column for Gender, how do I insert male Insert Image in DB through Servlet - JSP-Servlet Insert Image in DB through Servlet Dear Sir, You write me: copy this link and paste in your Url... there it will ask for save. save it extract it then use the code... service bus and then insert into database. Thanks hibernate annotations hibernate annotations to insert records into these tables. But it is trying... address_.adno=? Hibernate: insert into student_tbl (age, sname, sid) values (?, ?, ?) Hibernate: insert into address_tbl (city, street, sid, adno) values ask a user to enter 5 integer ask a user to enter 5 integer make a program that enter 5 numbers then identify the largest and the smallest number sample program 2 4 3 5 6 the smallest number: 2 the largest number: is 6 66 Hibernate - Framework Hibernate hi..... how i insert,update,delete data to Insurence table using nativesql query.i m not getting the code. thnx Insert Image in DB through Servlet - JSP-Servlet Insert Image in DB through Servlet Dear Sir, My previous Query... pre = conn.prepareStatement("insert into MyPictures values...(e.getMessage()); } } } Whether Any entry will be made in web.xml when Insert Image2ee - Hibernate that cannot insert exampleVO into database.. please help to me to solve this problem Ask Questions delete row from a table in hibernate delete row from a table in hibernate is there method to delete row in a table using hibernate like save(-) to insert row How to ask a questions to you clearly with normal english? How to ask a questions to you clearly with normal english? hi, i want to know how should or can i may ask a question. send me some format... in the same way as you have asked this question. Go to the Ask Questions part Complete Hibernate 4.0 Tutorial Hibernate Application : Insert Record using Hibernate Save Method Hibernate 4... Hibernate insert Query Hibernate polymorphic Queries... This section contains the Complete Hibernate 4.0 Tutorial Ask Programming Questions and Discuss your Problems Ask Programming Questions and Discuss your Problems  ... You Ask, read it carefully. Try to find the solutions in archive, use our... of the programs. How to Ask? Simple? Just browser the appropriate section Hibernate Architecture of the hibernate is used to select, insert, update and delete the records form... Hibernate Architecture In this lesson you will learn the architecture of Hibernate Hibernate session.refresh() method Hibernate session.refresh() method What is session.refresh() in hibernate? It is possible to re-load an object and all its collections.... When Insert data into Cat TABLE trigger update hit_count coulmn to 1 Ask JSP questions online Ask JSP questions online Facing problem in JSP? Ask to us, we... has just started a new problem solving service ‘ask question’. In our Hibernate Architecture Hibernate Architecture - Understand the architecture of the Hibernate ORM... Hibernate ORM framework. Hibernate is based on the Java technologies... cream architecture of Lite architecture in our application. Hibernate is ORM How many qusetions can you ask? How many qusetions can you ask? Hi I was very impressed following my very first question I asked regarding some coding. However I am not sure why any further questions have not yet been answered. Is this because they are more on collection mapping - Hibernate . The index informs hibernate whether a particular in-memory object is the same one as an equal on-DB object or not, so there is no need to delete or re-insert Ask Applet Questions Online Ask Applet Questions Online  ... ‘Ask Questions’. Now, you can get the quick answers of your questions... service of ‘Ask Questions’ has given you the tool to resolve your HIBERNATE HIBERNATE What is difference between Jdbc and Hibernate hibernate hibernate what is hibernate flow hibernate hibernate what is hibernate listeners Foreign key hibernate Foreign key sir, I am using hibernate in netbeans. I have... persons (P_Id) ) ENGINE=InnoDB DEFAULT CHARSET=latin1$$ I want to insert data using set methods in orders table in feild P_Id which is foreign key. How to insert Ask PHP Questions Ask PHP Questions PHP Questions and Answers Ask PHP Questions and get answers from.... In your questions and answers section you can ask PHP questions and get delete query problem - Hibernate '] question no: 1) why table STUDENT is not mapped , for insert its work...(); Read for more information. Thanks DB Insert DB Insert How to insert XML data into a database column? Column data type is CL Tools Update Site the confirmation message and then ask for restart. After restart Hibernate tools... Hibernate Tools Update Site Hibernate Tools Update Site In this section we jdbc insert jdbc insert Hi , i want to insert a declared integer variable into a mysql table through jdbc. how to insert that. help me with query... thanks... a table there. After creating a table in database, it insert a rows in the database
http://www.roseindia.net/tutorialhelp/comment/95746
CC-MAIN-2014-52
en
refinedweb
java.lang.Object org.jboss.dna.common.text.TokenStreamorg.jboss.dna.common.text.TokenStream org.jboss.dna.sequencer.ddl.DdlTokenStreamorg.jboss.dna.sequencer.ddl.DdlTokenStream public class DdlTokenStream A TokenStream implementation designed around requirements for tokenizing and parsing DDL statements. Because of the complexity of DDL, it was necessary to extend TokenStream in order to override the basic tokenizer to tokenize the in-line comments prefixed with "--". In addition, because there is not a default ddl command (or statement) terminator, an override method was added to TokenStream to allow re-tokenizing the initial tokens to re-type the tokens, remove tokens, or any other operation to simplify parsing. In this case, both reserved words (or key words) and statement start phrases can be registered prior to the TokenStream 's start() method. Any resulting tokens that match the registered string values will be re-typed to identify them as key words (DdlTokenizer.KEYWORD) or statement start phrases (DdlTokenizer.STATEMENT_KEY). public DdlTokenStream(String content, TokenStream.Tokenizer tokenizer, boolean caseSensitive) content- tokenizer- caseSensitive- public void registerStatementStartPhrase(String[] phrase) Examples would be: {"CREATE", "TABLE"} {"CREATE", "OR", "REPLACE", "VIEW"}see DdlConstantsfor the default SQL 92 representations. phrase- public void registerStatementStartPhrase(String[][] phrases) public void registerKeyWord(String keyWord) keyWord- public void registerKeyWords(List<String> keyWords) Listof key words. keyWords- public void registerKeyWords(String[] keyWords) keyWords- public boolean isNextKeyWord() DdlTokenStream.DdlTokenizerKEYWORD. public boolean isNextStatementStart() public void mark() public String getMarkedContent() public static DdlTokenStream.DdlTokenizer ddlTokenizer(boolean includeComments) DdlTokenStream.DdlTokenizerimplementation that ignores whitespace but includes tokens for individual symbols, the period ('.'), single-quoted strings, double-quoted strings, whitespace-delimited words, and optionally comments. Note that the resulting Tokenizer may not be appropriate in many situations, but is provided merely as a convenience for those situations that happen to be able to use it. includeComments- true if the comments should be retained and be included in the token stream, or false if comments should be stripped and not included in the token stream
http://docs.jboss.org/jbossdna/latest/api/org/jboss/dna/sequencer/ddl/DdlTokenStream.html
CC-MAIN-2014-52
en
refinedweb
Squish 5.1 is a major new release with many new features and bug fixes. Here is a selected summary of the release's highlights—a detailed list of all the main changes is given in the sections that follow. The Android, iOS and Qt editions of Squish now have full support for automating tests on touch-devices, including full support for multi-touch gestures. Gestures can be recorded and replayed from test scripts and a convenient editor is provided for editing recorded gestures. Major improvements to the handling of "Object Not Found" errors when replaying tests. The test execution will suspend and a dialog is shown which allows the user to select an alternative object, to edit the object map and more. New integration plugins for Atlassian Bamboo and JetBrains TeamCity have been added. A command line tool for importing test results into Atlassian JIRA has been added. Web tests performed with Chrome no longer require the use of a proxy server. QNX, Android and VxWorks have been added to list of target platforms for testing of Qt applications. This has been achieved through the addition of an optional Embedded SDK that also streamlines builds for embedded Linux and Windows CE. The Python editor used by the Squish IDE will do more extensive Code Analysis on the test scripts. This enables better code completion including the functions and modules provided by Squish. If the test scripts use the standard Python import mechanism to include shared scripts the editor's code completion will also include functions and classes from those imported scripts. Various stability improvements for the squishrunner (Section 19.4.1) program in case the AUT terminates unexpectedly. Fixed a problem which broke the replay order of test scripts with Unicode characters in the file path. When running Squish on Windows, special "dump files" will be generated more often in case a problem occurs to improve the interaction with froglogic technical support. Creating screenshot verification points on Windows now handles transparent windows correctly. The setup window is no longer fixed in size. The setup program now performs sanity checks on the selected JRE to detect architecture mismatches. Various stability improvements for the squishtest Python module. The overall performance of accessing the AUT (calling methods, getting/setting properties etc.) has been improved. A new xml2jira (Section 19.4.9) tool is now included with Squish, it allows creating and updating JIRA tickets based on Squish test reports. squishrunner (Section 19.4.1) gained a new command line option --exitCodeOnFail which can be used to define a custom exit code which squishrunner should return in case a fatal error is encountered while executing a test. A standalone JavaScript interpreter squishjs has been added. The standalone squishtest Python module can now be used with the Python Tools for Visual Studio integration as well as other third-party Python debuggers. Clicking the Pause button in the control bar no longer suspends test execution in all running instances of the Squish IDE. Fixed various usability issues with the environment table in the test suite settings. Fixed various usability issues when editing real names in the object map editor. Resolved an issue related to selecting multiple real-name entries in the object map editor. The Squish IDE no longer "stalls" when launching the image verification point editor or the setup program. Fixed a bug related to test cases and test suites with certain characters ( &, # and certain non-ASCII characters) in their names. Improved layouting of the Squish IDE to avoid that the Test Suite view gets too small in certain cases. Silenced a External Modification Detected dialog triggered when creating a test case while the test suite settings are opened with unsaved changes. Corrected window focusing when external editors (such as the image verification point editor or the gesture editor) are launched. Fixed a bug which caused incorrect code to be generated when creating verifications of certain properties (such as class) in Ruby test suites. Fixed various cases in which dialogs wouldn't get hidden when the control bar is shown. The Pause button in the control bar is no longer enabled when picking an object (the pause button used to be enabled, but clicking it had no effect). Resolved various issues due to which error messages related to Python were shown when opening multiple Python test suites simultaneously. Added a new Close All Test suites button. The Integration package now supports integration into Eclipse 4. Added support for DLTK5/PyDev 3. Major improvements to code analysis and code completion features for Python tests. The Manage AUTs dialogs can now be opened directly via the test suite settings editor. Starting a second Squish IDE will now show a dialog offering to terminate the existing Squish IDE in order to re-use the workspace of the existing Squish IDE. Added support for managing, viewing and editing gestures. The What's New page will now show the froglogic RSS feed. The tools-selection dialog now offers collecting technical information for submission to froglogic technical support to simplify debugging issues caused by broken installations. The working directory behaviour of squishserver (Section 19.4.2) can now be changed such that it uses the working directory of the Squish IDE. JavaScript tests can now use the SQL Object (Section 18.16.6.1) to connect to SQLite, PostgreSQL as well as MySQL databases. Improved output of test.compare for Python and Ruby scripts to include the type of the compared values. JavaScript RegExp objects now support Unicode characters. The JavaScript SQLResult Object (Section 18.16.6.3) properties no longer clash with field names in SQL search results. A new GestureBuilder API is available for creating gestures programmatically in test scripts. A new ApplicationContext.totalTime property has been introduced which returns the CPU time used by the AUT. The JavaScript XML Object (Section 18.16.5) now features a new getElementsByTagName method which yields a list of elements with a given tag name. Added a new testSettings.logScreenshotOnPass property which can be used to automatically create screenshots in case a verification passes. Extended JavaScript Array objects with various new methods: map(), filter(), indexOf(), lastIndexOf(), forEach(), some() and every(). The setup program now defaults to locating the Qt library used by the AUT automatically, except for tests executed on Mac OS X (which still requires specifying the path to the library). The openContextMenu() function is now recorded correctly when clicking QGraphicsItem objects. Example programs for Qt5 now include the required Qt libraries to make them usable out of the box. Optimized accessing web objects in embedded web controls when using hierarchical names. Correctly expose QObject properties with custom QObject-based types in Qt 5 applications. Users who resorted to accessing built-in properties via function calls like QQuickItem::parent() should now simply use the .parent property. Added support for Qt 5.2. Added dedicated support for QtQuick controls Added support for Visual Studio 2013 builds. It's now possible to record and replay touch events as well as multi-touch gestures. Major improvements to the diagnostic output generated by Squish in case hooking into a Qt application fails. Support for scripts calling methods with return type QList<QObject*>. uninstallEventHandler() function added to allow uninstalling event handlers. Support calling slots from test scripts with qint16, quint16, quint32, qreal> or ushort parameters or return values. QPixmap sub-properties (such as width, height or depth) are now accessible from test scripts and shown in the Squish IDE. Added support for recording and replaying QWindow resizes, moves and state changes (minimizing/maximizing/etc.). Added support for QWebView controls when running tests on Solaris. On touch-enabled devices, objects can now be picked by tapping the screen. Support for single-touch tap recording on QWidgets and QtQuick controls added. Support for single-touch drag recording on QtQuick controls added. Support for (multi-)touch gesture recording and replay on QWidgets and QtQuick controls added. It's now possible to use wildcards for the text property for multi-property QListViewItem object names. Fixed crash when creating a string representation of a self-referencing nested Java array. The Spy tool now works correctly for JavaFX applications containing more than one Stage object. Simplified deployment of example applications; each example is now bundled into a single .jar archive. Fixed accessing multi-dimensional Java arrays from test scripts. Corrected super-class of JavaArray class, which is now a sub-class of java.lang.Object. Corrected handling of JFace ControlDecoration objects. Fixed an issue related to the "chevron" object of CTabFolder controls in recent SWT versions. Resolved a bug which made scrolling to table cells fail when replaying a test script. Added support for Java 8. Dedicated support for Nebula Grid and Nebula NatTable added. Added dedicated support for HTML objects contained in WebView controls in JavaFX applications. Test scripts can now access invisible items in JavaFX lists, tables and trees. Improved compatibility with touch-enabled devices by replaying clicks using touch events. Firefox versions older than 4.0 are no longer supported. Fixed a bug in closePrompt() function which occasionally caused it to not cancel prompt dialogs correctly. Corrected coordinates for elements contained in dialogs shown by Internet Explorer. Improved robustness of hooking into <iframe> elements. The Internet Explorer window is no longer closed when a test finished if Squish attached to a running IE instance. Improved support for stripped-down SmartGWT applications. Fixed a problem related to loading URLs without a scheme when replaying tests with Chrome. Fixed a rare crash triggered when closing Internet Explorer windows. Made it possible to run the addressbook example with Internet Explorer 8. It's now possible to pick links of buttons without triggering the associated actions (for browsers other than Internet Explorer). Added support for automating alert/confirm/prompt dialogs shown by embedded web controls. Extended Chrome support by removing limitations imposed by proxy-based testing. Chrome is now accessed using a Chrome extension. visible property now takes CSS-defined visibility into account. A new BodyUnloaded event has been added to allow test scripts to react to the current page being unloaded. The selectOption() function can now be called with a list of strings to simplify selecting multiple entries in a multi-select field. The selectOption() function now supports selecting items based on their value property instead of the user-visible string. Added support for running tests with Firefox on Solaris. WPF DatePicker controls now expose the nested edit control to make replaying text input to date pickers work correctly. Fixed a bug which occasionally kept Squish from hooking into .NET applications. Resolved an issue which sometimes made replaying test scripts on MFC list view items trigger a high CPU load. Fixed a resource leak which might cause the system to run out of memory if Squish repeatedly attaches to and detaches from an application. Greatly improved accuracy of determining which WPF control (if any) is beneath the mouse cursor, this fixes various issues related to recording actions on WPF controls and picking them. Resolved a potential crash when replaying tests on Windows Forms applications. Added dedicated support for DevExpress WPF controls. Squish now tries to make use of the Microsoft UI Automation framework for accessing GUI controls before falling back to generic WindowsControl-based automation. The names of objects shown in the Spy are now much more descriptive. MFC menu items now expose an id property. It's now possible to create screenshot verification points for web objects in embedded web controls. Squish packages are now "universal builds", both 32-bit as well as 64-bit applications can be tested with a single Squish package. The injectMacWrapper example script no longer requires modifications to be usable. The selectOption() can now be used with WebView controls embedded into OS X applications. Support for recording and replaying multi-touch gestures. Added support for building with Xcode 5.1 and iOS 7.1. The selectOption() can now be used with WebView controls embedded into iOS apps. Added support for recording and replaying multi-touch gestures. An experimental alternative hooking method with greatly improved performance has been added. Native.java.lang.Class.forName("package.Class") can now be used to access all native classes without any further steps. The tapMenuItem() function now correctly raises an exception when the menu item is not found. Fixed recording of keyboard input on WebView HTML objects for Android 4.0 ("Ice Cream Sandwich"). It is now possible to specify TCP device strings when calling startApplication to launch Android apps. Added support for applications using a renamed Tk library (only on Linux). Added support for Perl/Tk applications (only on Linux). The Ant integration now supports running multiple test cases of a given test suite. The Jenkins integration plugin now supports setting squishserver (Section 19.4.2) options. The Jenkins integration now allows running multiple Squish test cases in a single build step. The settings entered for a Jenkins Squish build step are now verified more diligently. The algorithm used by Squish/Web for generating object names is now documented. The Search feature of the HTML help has been reworked to yield better results. Also, the results are grouped by topic. The instructions for creating partial builds (for Qt test) have been simplified. A newly added Embedded SDK provides support for QNX (using the qcc compiler frontend), Android, VxWorks and a simplified configuration for embedded Linux. Customers with existing support contracts for cross-compilation builds will be receive a free upgrade for their target platform. Make the code compile with Qt 5.3 (still in beta). The squishidl (Section 19.4.3) utility is now shipped with Squish binary packages to simplify doing partial builds. Squish source builds can now make use of an existing squishidl binary (useful for embedded builds). The configure program now queries an optional Qt namespace name as well as a library infix from qmake (when using qmake). The configure program can optionally query qmake for compiler and linker flags to use. An install target has been added to simplify distribution to remote & embedded systems (currently unsupported on Mac OS X).
http://doc.froglogic.com/squish/5.1/rel-510.html
CC-MAIN-2014-52
en
refinedweb
Address-of Operator: & The unary address-of operator (&) takes the address of its operand. The operand of the address-of operator can be either a function designator or an l-value that designates an object that is not a bit field and is not declared with the register storage-class specifier. The address-of operator can only be applied to variables with fundamental, structure, class, or union types that are declared at the file-scope level, or to subscripted array references. In these expressions, a constant expression that does not include the address-of operator can be added to or subtracted from the address-of expression. When applied to functions or l-values, the result of the expression is a pointer type (an r-value) derived from the type of the operand. For example, if the operand is of type char, the result of the expression is of type pointer to char. The address-of operator, applied to const or volatile objects, evaluates to const type * or volatile type *, where type is the type of the original object. When the address-of operator is applied to a qualified name, the result depends on whether the qualified-name specifies a static member. If so, the result is a pointer to the type specified in the declaration of the member. If the member is not static, the result is a pointer to the member name of the class indicated by qualified-class-name. (See Primary Expressions for more about qualified-class-name.) The following code fragment shows how the result differs, depending on whether the member is static: In this example, the expression &PTM::fValue yields type float * instead of type float PTM::* because fValue is a static member. The address of an overloaded function can be taken only when it is clear which version of the function is being referenced. See Address of Overloaded Functions for information about how to obtain the address of a particular overloaded function. Applying the address-of operator to a reference type gives the same result as applying the operator to the object to which the reference is bound. For example: // expre_Address_Of_Operator2.cpp // compile with: /EHsc #include <iostream> using namespace std; int main() { double d; // Define an object of type double. double& rd = d; // Define a reference to the object. // Obtain and compare their addresses if( &d == &rd ) cout << "&d equals &rd" << endl; } &d equals &rd The following example uses the address-of operator to pass a pointer argument to a function: // expre_Address_Of_Operator3.cpp // compile with: /EHsc // Demonstrate address-of operator & #include <iostream> using namespace std; // Function argument is pointer to type int int square( int *n ) { return (*n) * (*n); } int main() { int mynum = 5; cout << square( &mynum ) << endl; // pass address of int }
http://msdn.microsoft.com/en-US/library/64sa8b1e(v=vs.80).aspx
CC-MAIN-2014-52
en
refinedweb
Reporters are MATLAB® objects that generate formatted content when added to a MATLAB Report Generator™ Report object. MATLAB Report Generator provides reporters for generating common report components, such as title pages, tables of contents, chapters, subsections, figures, and MATLAB variables values. You can customize the content and appearance of these reporters. You can also create your own reporters. For a list of built-in Report API objects, enter this MATLAB command: help mlreportgen.report In addition to reporters, MATLAB Report Generator provides another set of objects for generating report content. These objects are Document Object Model (DOM) objects. They implement a model of a document used by HTML, Word, and other document creation software. The model defines a document as a hierarchy of objects commonly found in documents, such as text strings, paragraphs, images, and tables. The DOM API contains software objects that generate these basic document objects. For a list of the DOM objects, enter this MATLAB command: help mlreportgen.dom Reporters, by contrast, create high-level document structures, such as title pages, tables of contents and chapters, that occur in many, but not all types of documents. The advantage of reporters is that a single reporter can create content that would require many DOM objects. However, a report generator program typically requires both DOM and reporter objects. For example, a chapter reporter generates the title and page layout of a report chapter, but not its content. The DOM API provides text, paragraph, table, list, image, and other objects that you can use to create reporter content. The following MATLAB program illustrates using both reporters and DOM objects to create a PDF report. The program uses a DOM Text object to add a block of text to the chapter. All other objects in this example ( TitlePage, TableOfContents, and Chapter) are reporter objects. rpt = mlreportgen.report.Report('myreport','pdf'); append(rpt,mlreportgen.report.TitlePage('Title','My Report',... 'Author','Myself')) append(rpt,mlreportgen.report.TableOfContents) ch = mlreportgen.report.Chapter('Title','Sample Text'); append(ch,mlreportgen.dom.Text... ('Here is sample text using a DOM Text object.')) append(rpt,ch) close(rpt) rptview(rpt) A reporter typically includes the following elements: Template documents that define the appearance, fixed content, and holes for dynamic content generated by the reporter. A reporter typically provides a set of templates files, one for each supported output type: Word, PDF, and HTML. Each template file contains a library of templates used by the reporter to format its content. For example, the Report API TitlePage reporter uses a template named TitlePage to format a title page. The TitlePage template is stored in the template libraries of its template files. You can modify this template to rearrange or add content to a title page. For information, see Templates. Properties that specify the dynamic content generated by the reporter. These properties correspond to holes in the reporter template. A reporter fills the template holes with the values of the corresponding properties. MATLAB class that defines the reporter properties and methods you use to create and manipulate the reporter. Reporter class names begin with the prefix, mlreportgen.report. For example, the title page reporter is mlreportgen.report.TitlePage. You can omit the prefix in a MATLAB script or function by inserting this statement at the beginning of the script or function: import mlreportgen.report.* import mlreportgen.dom.*to use short DOM class names. Constructor method that creates a reporter object as an instance of the reporter class. The name of the constructor is the same as the name of the class. DOM object that contains the content generated by the report. This object is referred to as the implementation of the reporter. Each reporter has a getImpl method that creates the implementation object, which is typically a DOM DocumentPart object. To generate content in a report program, follow these steps: Create an Instance of the Reporter Set the Properties of an Existing Reporter Add the Reporter to a Report The example program described in these steps creates a simple document that includes only a title page. However, the steps demonstrate the tasks to create a full report. The full program listing is shown after the step descriptions. Create a Report object ( mlreportgen.report.Report) to contain the content generated by the report. The report object uses a DOM Document object to hold content generated by reporters added to the report. This code imports the Report API package, which enables the code to use short class names. Then, it creates a PDF report object ( rpt). import mlreportgen.report.* rpt = Report('myReport','pdf'); Create an instance of the reporter class, that is, instantiate the reporter, using its constructor. The constructor can also set the properties of the reporter object it creates. For example, this code creates a title page reporter ( tp) and sets its Title and Author properties. tp = TitlePage('Title','My Report','Author','John Smith'); To set reporter properties after a program has created a reporter, the program can use MATLAB dot notation. For example, this code sets the Subtitle and PubDate properties of a TitlePage reporter ( tp). tp.Subtitle = 'on My Project'; tp.PubDate = date; To generate content using a reporter, a report program must add the reporter to the report object, using the append method of the report object. The append method works by invoking the getImpl method of that reporter. The getImpl method creates the implementation of the reporter. Then, the append method adds the implementation to the DOM Document object that serves as the implementation of the report object. You can also use the append method to add DOM objects to the report. You cannot, however, add another DOM Document to a report. For example, this code adds the title page reporter ( tp) to the report ( rpt). append(rpt,tp) When a report program has finished adding content to a report, it must close the report, using the close method of the report object. Closing a report writes the report content to a document file of the type, such as PDF, specified by the constructor of the report object. close(rpt) This code is the complete program for the report, which includes only a title page. import mlreportgen.report.* rpt = Report('myReport','pdf'); tp = TitlePage('Title','My Report',... 'Author','John Smith'); tp.Subtitle = 'on My Project'; tp.PubDate = date; append(rpt,tp) close(rpt) rptview(rpt) mlreportgen.dom.Text | mlreportgen.report.Report | mlreportgen.report.TableOfContents | mlreportgen.report.TitlePage
https://au.mathworks.com/help/rptgen/ug/what-is-a-reporter.html
CC-MAIN-2021-39
en
refinedweb
sasl_getsecret_t (3) - Linux Man Pages sasl_getsecret_t: The SASL callback for secrets (passwords) NAMEsasl_getsecret_t - The SASL callback for secrets (passwords) SYNOPSIS #include <sasl/sasl.h> int sasl_getsecret_t(sasl_conn_t *conn, void *context, int id, sasl_secret_t ** psecret); DESCRIPTION sasl_getsecret_t is used to retrieve the secret from the application. A sasl_secret_t should be allocated to length sizeof(sasl_secret_t)+<length of secret>. It has two fields len which is the length of the secret in bytes and data which contains the secret itself (does not need to be null terminated). RETURN VALUE SASL callback functions should return SASL return codes. See sasl.h for a complete list. SASL_OK indicates success. CONFORMING TORFC 4422 SEE ALSOsasl(3), sasl_callbacks(3), sasl_errors(3)
https://www.systutorials.com/docs/linux/man/3-sasl_getsecret_t/
CC-MAIN-2021-39
en
refinedweb
Unity SDK¶ New changes since May 10th, 2021 Installation¶ Using Unity Package Manager¶ - Go to Window > Package Manager. Click "+" button, then select "Add package from git URL..." - Enter Git URL: - Click "ADD" Click to import the example project in order to test the built-in demonstration. Using legacy .unitypackage:¶ - Download the latest Colyseus Unity SDK - Import the Colyseus_Plugin.unitypackagecontents into your project. The Colyseus_Plugin.unitypackage contains an example project under Assets/Colyseus/Example you can use as a reference. Setup¶ Here we'll be going over the steps to get your Unity client up and running and connected to a Colyseus server. Topics covered include: - Running the server locally - Server settings - Connecting to a server - Connecting to a room - Communicating with a room, and the room's state. The topics should be enough for you to set up a basic client on your own, however, you are welcome to use and modify the included example code to suit your needs. Running the server locally¶ To run the demonstration server locally, run the following commands in your terminal: cd Server npm install npm start The built-in demonstration comes with a single room handler, containing a suggested way of handling entities and players. Feel free to change all of it to fit your needs! Creating a Colyseus Settings Object:¶ - Right-click anywhere in the Project folder, select "Create", select "Colyseus", and click "Generate ColyseusSettings Scriptable Object" - Fill in the fields as necessary. - Server Address - The address to your Colyseus server. - Server Port - The port to your Colyseus server. - Use secure protocol - Check this if requests and messages to your server should use the "https" and "wss" protocols. - Default headers - You can add an unlimited number of default headers for non web socket requests to your server. - The default headers are used by the ColyseusRequestclass. - An example header could have a "Name"of "Content-Type"and a "Value"of "application/json" Colyseus Manager:¶ - You will need to create your own Manager script that inherits from ColyseusManageror use and modify the provided ExampleManager. public class ExampleManager : ColyseusManager<ExampleManager> - Make an in-scene manager object to host your custom Manager script. - Provide your Manager with a reference to your Colyseus Settings object in the scene inspector. Client:¶ - Call the InitializeClient()method of your Manager to create a ColyseusClientobject which is stored in the clientvariable of ColyseusManager. This will be used to create/join rooms and form a connection with the server. ExampleManager.Instance.InitializeClient(); - If your Manager has additional classes that need reference to your ColyseusClient, you can override InitializeClientand make those connections in there. //In ExampleManager.cs public override void InitializeClient() { base.InitializeClient(); //Pass the newly created Client reference to our RoomController _roomController.SetClient(client); } - If you wish to have multiple ColyseusClientreferences in your manager, or if you want to provide an alternate endpoint/ ColyseusSettingsobject for your ColyseusClientyou can skip the call to base.InitializeClient(). - Within your overridden InitializeClient()function, you can now either pass an endpoint to any additional new ColyseusClients that you create or you can create a new ColyseusClientwith a ColyseusSettingsobject and a boolto indicate if it should use websocket protocol rather than http when creating a connection. If you create a new Clientwith a stringendpoint, it'll create a ColyseusSettingsobject in it's constructor and infer the protocol from the endpoint. public override void InitializeClient() { chatClient = new ColyseusClient(chatSettings, true); //Endpoint will be chatClient.WebSocketEndpoint deathmatchClient = new ColyseusClient(deathmatchSettings, false); //Endpoint will be deathmatchSettings.WebRequestEndpoint guildClient = new ColyseusClient(guildHostURLEndpoint); //Create the guildClient with only a string endpoint } - You can get available rooms on the server by calling GetAvailableRoomsof ColyseusClient: return await GetAvailableRooms<ColyseusRoomAvailable>(roomName, headers); Connecting to a Room:¶ - There are several ways to create and/or join a room. - You can create a room by calling the Createmethod of ColyseusClientwhich will automatically create an instance of the room on the server and join it: ExampleRoomState room = await client.Create<ExampleRoomState>(roomName); - You can join a specific room by calling JoinById: ExampleRoomState room = await client.JoinById<ExampleRoomState>(roomId); - You can call the JoinOrCreatemethod of ColyseusClientwhich will matchmake into an available room, if able to, or will create a new instance of the room and then join it on the server: ExampleRoomState room = await client.JoinOrCreate<ExampleRoomState>(roomName); Room Options:¶ - When creating a new room you have the ability to pass in a dictionary of room options, such as a minimum number of players required to start a game or the name of the custom logic file to run on your server. - Options are of type objectand are keyed by the type string: Dictionary<string, object> roomOptions = new Dictionary<string, object> { ["YOUR_ROOM_OPTION_1"] = "option 1", ["YOUR_ROOM_OPTION_2"] = "option 2" }; ExampleRoomState room = await ExampleManager.Instance.JoinOrCreate<ExampleRoomState>(roomName, roomOptions); Room Events:¶ ColyseusRoom has various events that you will want to subscribe to: OnJoin¶ - Gets called after the client has successfully connected to the room. OnLeave¶ Updated as of 0.14.7 In order to handle custom websocket closure codes, the delegate functions now pass around the int closure code rather than the WebSocketCloseCode value. - Gets called after the client has been disconnected from the room. - Has an intparameter with the reason for the disconnection. room.OnLeave += OnLeaveRoom; where OnLeaveRoom functions as so: private void OnLeaveRoom(int code) { WebSocketCloseCode closeCode = WebSocketHelpers.ParseCloseCodeEnum(code); LSLog.Log(string.Format("ROOM: ON LEAVE =- Reason: {0} ({1})", closeCode, code)); } OnStateChange¶ - Any time the room's state changes, including the initial state, this event will get fired. room.OnStateChange += OnStateChangeHandler; private static void OnStateChangeHandler(ExampleRoomState state, bool isFirstState) { // Do something with the state } OnError¶ - When a room related error occurs on the server it will be reported with this event. - Has parameters for an error code and an error message. Room Messages:¶ You have the ability to listen for or to send custom messages from/to a room instance on the server. OnMessage¶ - To add a listener you call OnMessagepassing in the type and the action to be taken when that message is received by the client. - Messages are useful for events that occur in the room on the server. (Take a look at our tech demos for use case examples of using OnMessage) room.OnMessage<ExampleNetworkedUser>("onUserJoin", currentNetworkedUser => { _currentNetworkedUser = currentNetworkedUser; }); Send¶ - To send a custom message to the room on the server use the Sendmethod of ColyseusRoom - Specify the typeand an optional messageparameters to send to your room. room.Send("createEntity", new EntityCreationMessage() { creationId = creationId, attributes = attributes }); Room State:¶ See how to generate your RoomStatefrom State Handling - Each room holds its own state. The mutations of the state are synchronized automatically to all connected clients. - In regards to room state synchronization: - When the user successfully joins the room, they receive the full state from the server. - At every patchRate, binary patches of the state are sent to every client (default is 50ms) onStateChangeis called on the client-side after every patch received from the server. - Each serialization method has its own particular way to handle incoming state patches. ColyseusRoomStateis the base room state you will want your room state to inherit from. - Take a look at our tech demos for implementation examples of synchronizable data in a room's state such as networked entities, networked users, or room attributes. (Shooting Gallery Tech Demo) public class ExampleRoomState : Schema { [Type(0, "map", typeof(MapSchema<ExampleNetworkedEntity>))] public MapSchema<ExampleNetworkedEntity> networkedEntities = new MapSchema<ExampleNetworkedEntity>(); [Type(1, "map", typeof(MapSchema<ExampleNetworkedUser>))] public MapSchema<ExampleNetworkedUser> networkedUsers = new MapSchema<ExampleNetworkedUser>(); [Type(2, "map", typeof(MapSchema<string>), "string")] public MapSchema<string> attributes = new MapSchema<string>(); } Debugging¶ If you set a breakpoint in your application while the WebSocket connection is open, the connection will be closed automatically after 3 seconds due to inactivity. To prevent the WebSocket connection from dropping, use pingInterval: 0 during development: import { Server, RedisPresence } from "colyseus"; const gameServer = new Server({ // ... pingInterval: 0 // HERE }); Make sure to have a pingInterval higher than 0 on production. The default pingInterval value is 3000.
https://docs.colyseus.io/fr/colyseus/getting-started/unity3d-client/
CC-MAIN-2021-39
en
refinedweb
Form1: Text: VB270 SQL Client Tool Back color: blue Go to project [menu] à Add a reference Click on the .NET tab Double click on the Execute button Above formal class write Imports System.Data Imports System.Data.SqlClient // Code for button click execute ‘Execute (button_Click) Try Dim cn as new SqlConnection (“userid=sa; password = 123(changes from system to system); database = master; data source = sekhar”) (server name changes from system 2 system) Dim stmt as String Stmt = Mid (txtstmt.Text, 1, txtStmt.Text.IdexOf (“ “)) If stmt.ToUpper = ‘SELECT then Dim da as new sqlDataAdapter (txtstmt.Text, cn) Dim ds as new DataSet Da.Fill (ds, “tmpTable”) dgvResults.DataSource = ds.Tables (0)or (temptable) lb1 Result.Text = ds.Tables (0).Rows.Count & “Rows selected” Else Dim cmd as new SqlCommand cmd.Connection = cn cmd.CommandType = CommandType.Text cmd.CommandText = txtStmt.Text cn.Open () cmd.Execute NonQuery () lb1Result.Text = Stmt.ToUpper & “Statement Executed” End if Catch ex as Exception Lb1Result.text = “Error:” & ex.Message Finally lstHistory.Items.Add (txtStmt.Text) End try Code for clear button ‘Clear (button_Click) txtStmt.Clear () lb1Result.Text = String.Empty dgvResults.DataSource = Nothing txtStmt.Focus () // Code for clear History button ‘ClearHistory (button_Click) ListHistory.Items.Clear () For close button_click ‘Close (button_click) End Double click on List Box Control ‘(ListBox)listHistory – selected Index changed txtStmt.Text = ListHistory.SelectedItem For Form: Accept Button: btnExecute (if pressed enter) Cancel Button: btnClose (if pressed esc) Execute: Working with Add.Net disconnected model: Performing navigations using disconnected model: Binding Manager Base: It is a class which provides the members for supporting the data navigations and manipulations. Properties to support navigation: Position: - It is used to specify the current ROW referred to by the Binding Manager base Variable. For Ex: bmb.position = 0 - The above statement refers to the first row present at the data member bound with the BMB variable. Count: Returns the number of rows present at the data member of the data set. Binding Content: It is used to assign the data member of the data set to the BMB variable. We have some tables called categories; it is in the SQL server north wind database. In public class Form1 ‘Form declarations . . . . Dim cn as SqlConnection Dim da as SqlDataAdapter Dim ds as DataSet Dim bmb as BindingManagerBase In public class sub form1 . . . ‘Form_load Cn = New SqlConnection (“user id = sa; password = ___ 123; database = northwind; data source = sekhar(server name differs)”) Da = New SqlDataAdapter (“Select * from categories”, cn) Ds = New DataSet Da.Fill (ds, “Categories”) Bmb = Binding Context (ds.tables (“Categories”)) Bmb.Position = 0 Code under ‘Form declerations Public sub ShowCategory (By Val index as integer) txtCid.Text = ds.Tables (“categories”).Rows(index) (“Category Id”) txtCname.Text = ds.Tables (“categories”).Rows(index) (“Category Name”) txtDesc.Text = ds.Tables (“categories”).Rows(index) (“Description”) For First button1_clcik ‘First (button_click) Bmb.Position = 0 ShowCategory (bmb.position) // Code for previous button ‘Prev (button_click) Bmb.position - = 1 ShowCategory (bmb.position) “Code for next button_click ‘Next (button_click) Bmb.position+ = 1 ShowCategory (bmb.position) “Code for Last button_click ‘Last (button_click) Bmb.position = bmb.count -1 ShowCategory (bmb.position) Observation Attaching Database for SQL Server 2005: Download “SQL 2000 Sample Db.msi” from the internet Install the above file. Note: The above step will create a folder in C:drive with the name “SQL Server 2000 sample Databases” and copies the sample Databases to the folder. Open SQL Server Management Studio Start Run SQL Wb Connect to the “Databases” present at the object Explorer [F8] and click on “Attach” - Click on Add and select Northwind.mdf file from the created folder. - Click on OK. Various Methods to search the data using ADD.Net Disconnected Model: Contains: It accepts the primary key value and returns true if the record is existing else returns false Syntax: DataSetName.DataTable.Rows.Contains(PrimaryKeyColumnValue) Find: It is used to return a data row if the record exists by accepting the PrimaryKeyColumnValue. Note: If the record is not existing for the provided value then this method returns nothing where “nothing” is a keyword. Let us have a table (Code for search (button_click)) Imports System.Data Imports System.Data.SqlClient Above search button_click in Form2 ‘Form declarations . . . . Dim cn as SqlConnection Dim ds as SqlDataAdapter Dim ds as DataSet Double click on form code for form_lead Cn = New Sql Connection (“user Id = sa; password = 123; database = north wind; data source = Sekhar”) da = New Sql Data Adapter (“Select * from products”, cn) ds = New DataSet da.Fill (ds, “products”) Here we are in search of pid so we have à to put primary constructor to the column. ds.Tables (“Products”).Constraints.Add (“ProductId_Pk”, ds.Tables (“Products”).Columns (Product Id), true) Code for search button_click ‘Dim pid as Integer Pid = Val (txtProductId.Text) If ds.Tables (“Products”).Rows.Contains(pid) = True Then Dim row as Integer Row = ds.Tables (“Products”).Rows.Find (pid) txtProductName.Text = row (“Product Name”) txtQuantity.Text = row (“Quantity per Unit”) txtPrice.Text = row (“UnitPrice”) Else MessageBox.Show (“NoProductFound”, Me.Text) txtProductName.Clear () txtQuantityName.Clear () txtPrice.Clear () txtProductId.Focus () // Code for close button_click End Execute Observation: Find: It accepts a primary key column value and returns the row index value if the record exists else the method returns -1. Note: In order to use the above method, it is mandatory that the data should be sorted based on the primary key column value. Syntax: DataViewName.Find (PrimaryKeyColumnValue) - When the form is loaded all the products must be shown and if you enter a specific product Id that must be highlighted. - These are the requirements. - Project [menu] à Add New Item à WINForm Sample design Code for Form3 Above the Imports System.Data Imports System.Data.Sqlclient In Form3 Dim cn as SqlConnection Dim da as SqlDataAdapter Dim ds as DataSet Dim dv as DataView Dim tmpindex as Integer In Form_Load Cn = New SqlConnection (“Userid = sa; password = 123; database = north wind; data source = sekhar”) da = New Sql Data Adapter (“Select from products”, cn) ds = New DataSet da.Fill (ds, “products”) ds.Tables (“Products”).Constraints.Add (“ProductId_Pk”, ds.Tables (“Products”).Columns (Product Id), true) dv = New DataView (ds.Tables (“Products”)) dv.Sort = “Product Id” dgv.ProductsData.DataSource = dv // Code for “Advanced Search” (button_click) dvProductsData.Rows (tmpIndex).Selected = False If ds.Tables (“Products”).Rows.Contains (txtProductId.Text) = true Then Dim index as Integer Index = dv.Find (txtProductId.Text) dgvProductsData.Rows(index).Selected = true To highlight tmpIndex = index the item (row) Else MessageBox.Show (“NoProductFound”, Me.Text) txtProduct Id.Focus () End if // Code for close End Execute Row Filter: It is used to filter the data based on the condition. Syntax: DataViewName.RowFilter = condition; Take new Form - Place flow Layout panel on form and set the property Dock = Top - Place GroupBox on Form and set the property Dock = Top - Place a label, textbox, button1, Button2 in the Groupbox For Label: Text = ProductName TextBox1: Id = txtProductName Button1: Text = Reset ID = btn Reset Button2: Text = close ID = btn Close - Place DataGridViewControl below Groupbox and set Dock = Fill For form text = Product Details In Form4_Load (above) Imports System.Data Imports System.Data.SqlClient In public class Form4 ‘Form Declarations Dim cn as SqlConnection; Dim da as DataAdapter; Dim ds as DataSet Dim dv as DataView // Form_Load ‘Form_Load FlowLayoutpanel1.Controls.Clear () For i as Integer = 65 to 90 Dim ll .as New LinkLabel ll.Text = chr (i) --- To return character or integer value Add Handler ll.click, address of ll_click à to get to the link of the label by click event. FlowLayoutPanel l.Controls.Add (ll) Cn = New SqlConnection (“userid=sa, password=123, database = northwind, data source = “sekhar”) Da = New SqlDataAdapter (“Select * from products”, cn) Ds = New DataSet Ds.Fill (ds, “Products) Dv = New DataView (ds.Tables (“Products”)) DataGridView1.DataSource = dv In form declarations Private sub ll_click (by Val sender as object, By Val e as Event Args) Dim as New LinkLabel See Dynamic Event Handling l = CType (Sender, LinkLabel) Program back dv.RowFilter = “ProductName like ““ &1.Text&”%” // Double click on TextBox ‘txtProductName.Text changed dv.RowFilter = String.Format (“ProductName like “{0}%””, txtProductName.Text) // Double click on Reset For close btn ‘Reset (button1_click) End dv.RowFilter = “ ” It is used to search the data based on the Non Primary Key column i.e; based on a condition. Note: Select the method of the dataset will return a collection of data rows if the record exists. Syntax: DataSetName.DataTable.Select (condition In code write Imports System.data Imports System.Data.SqlClient In Form5 code ‘Form declarations Dim cn as SqlConnection Dim da as SqlDataAdapter Dim ds as DataSet For Form5_Load Cn = New SqlConnection (“userid=sa, password=123, database = northwind, data source = “sekhar”) Da = New SqlDataAdapter (“Select * from products”, cn) Ds = New DataSet Ds.Fill (ds, “Products) For show Products (button1_click) ‘Show products (button_click) ListProducts.Items.Clear () Dim rows () as DataRow rows = ds.Tables (“Product”).Select (“CategoryId =” & txtCategoryId.Text) If rows.Length > 0 Then For i as Integer = 0 to rows.length -1 ListProducts.Items.Add (rows(i) (“ProductName”) Else MessageBox.Show (“No Data found”, Me.Text) End If For close btn End Execute () Consuming the .NET assembly from Windows Forms application - Select Windows Form template from VS.Net - Design the form as per the requirement Add the reference of the assembly project [menu] Add reference Bank270Transactions.dll Write code as per Real. Imports Bank270Transactions For button1_click ‘Transfer (button_click) Dim obj as New transactions If obj.Transfer (txtfromacno.Text, txtToAcno.Text; txtAmount.Text) = True Then MessageBoxShow (“Amount Transferred”, Me.Text) Else MessageBoxShow (“error while transferring amount”, Me.Text) End if When we execute and hit transfer error occurs since we used delay signed assembly and it allows us to use but not to debug or execute. So we have to add a verification entry. Go to .Net Command Prompt Sn – vr give complete path of assembly. For an in-depth knowledge, click on below - VB.Net Interview Questions - Windows Forms Based Application in vb.net - Working with typed Dataset in vb.net
https://tekslate.com/working-form-based-applications-vb-net
CC-MAIN-2021-39
en
refinedweb
Tcl_DiscardResult (3) - Linux Man Pages Tcl_DiscardResult: save and restore an interpreter's state NAME Tcl_SaveInterpState, Tcl_RestoreInterpState, Tcl_DiscardInterpState, Tcl_SaveResult, Tcl_RestoreResult, Tcl_DiscardResult - save and restore an interpreter's state SYNOPSIS #include <tcl.h> Tcl_InterpState Tcl_SaveInterpState(interp, status) int Tcl_RestoreInterpState(interp, state) Tcl_DiscardInterpState(state) Tcl_SaveResult(interp, savedPtr) Tcl_RestoreResult(interp, savedPtr) Tcl_DiscardResult(savedPtr) ARGUMENTS - - Tcl_Interp *interp (in) Interpreter for which state should be saved. - - int status (in) Return code value to save as part of interpreter state. - - Tcl_InterpState state (in) Saved state token to be restored or discarded. - - Tcl_SavedResult *savedPtr (in) Pointer to location where interpreter result should be saved or restored. DESCRIPTION These routines allows a C procedure to take a snapshot of the current state of an interpreter so that it can be restored after a call to Tcl_Eval or some other routine that modifies the interpreter state. There are two triplets of routines meant to work together. The first triplet stores the snapshot of interpreter state in an opaque token returned by Tcl_SaveInterpState. That token value may then be passed back to one of Tcl_RestoreInterpState or Tcl_DiscardInterpState, depending on whether the interp state is to be restored. So long as one of the latter two routines is called, Tcl will take care of memory management. The second triplet stores the snapshot of only the interpreter result (not its complete state) in memory allocated by the caller. These routines are passed a pointer to a Tcl_SavedResult structure that is used to store enough information to restore the interpreter result. This structure can be allocated on the stack of the calling procedure. These routines do not save the state of any error information in the interpreter (e.g. the -errorcode or -errorinfo return options, when an error is in progress). Because the routines Tcl_SaveInterpState, Tcl_RestoreInterpState, and Tcl_DiscardInterpState perform a superset of the functions provided by the other routines, any new code should only make use of the more powerful routines. The older, weaker routines Tcl_SaveResult, Tcl_RestoreResult, and Tcl_DiscardResult continue to exist only for the sake of existing programs that may already be using them. Tcl_SaveInterpState takes a snapshot of those portions of interpreter state that make up the full result of script evaluation. This include the interpreter result, the return code (passed in as the status argument, and any return options, including -errorinfo and -errorcode when an error is in progress. This snapshot is returned as an opaque token of type Tcl_InterpState. The call to Tcl_SaveInterpState does not itself change the state of the interpreter. Unlike Tcl_SaveResult, it does not reset the interpreter. Tcl_RestoreInterpState accepts a Tcl_InterpState token previously returned by Tcl_SaveInterpState and restores the state of the interp to the state held in that snapshot. The return value of Tcl_RestoreInterpState is the status value originally passed to Tcl_SaveInterpState when the snapshot token was created. Tcl_DiscardInterpState is called to release a Tcl_InterpState token previously returned by Tcl_SaveInterpState when that snapshot is not to be restored to an interp. The Tcl_InterpState token returned by Tcl_SaveInterpState must eventually be passed to either Tcl_RestoreInterpState or Tcl_DiscardInterpState to avoid a memory leak. Once the Tcl_InterpState token is passed to one of them, the token is no longer valid and should not be used anymore.. KEYWORDSresult, state, interp
https://www.systutorials.com/docs/linux/man/docs/linux/man/3-Tcl_DiscardResult/
CC-MAIN-2021-39
en
refinedweb
This blog was written by Bethany Griggs, with additional contributions from the Node.js Technical Steering Committee. We are excited to announce the release of Node.js 16 today! Highlights include the update of the V8 JavaScript engine to 9.0, prebuilt Apple Silicon binaries, and additional stable APIs. You can download the latest release from, or use Node Version Manager on UNIX to install with nvm install 16. The Node.js blog post containing the changelog is available at. Initially, Node.js 16 will replace Node.js 15 as our ‘Current’ release line. As per the release schedule, Node.js 16 will be the ‘Current’ release for the next 6 months and then promoted to Long-term Support (LTS) in October 2021. Once promoted to long-term support the release will be designated the codename ‘Gallium’. As a reminder — Node.js 12 will remain in long-term support until April 2022, and Node.js 14 will remain in long-term support until April 2023. Node.js 10 will go End-of-Life at the end of this month (April 2021). More details on our release plan/schedule can be found in the Node.js Release Working Group repository. V8 upgraded to V8 9.0 As always a new version of the V8 JavaScript engine brings performance tweaks and improvements as well as keeping Node.js up to date with JavaScript language features. In Node.js v16.0.0, the V8 engine is updated to V8 9.0 — up from V8 8.6 in Node.js 15. This update brings the ECMAScript RegExp Match Indices, which provide the start and end indices of the captured string. The indices array is available via the .indices property on match objects when the regular expression has the /d flag. > const matchObj = /(Java)(Script)/d.exec('JavaScript');undefined> matchObj.indices[ [ 0, 10 ], [ 0, 4 ], [ 4, 10 ], groups: undefined ]> matchObj.indices[0]; // Match[ 0, 10 ]> matchObj.indices[1]; // First capture group[ 0, 4 ]> matchObj.indices[2]; // Second capture group[ 4, 10 ] For more information about the new features and updates in V8 check out the V8 blog:. Stable Timers Promises API The Timers Promises API provides an alternative set of timer functions that return Promise objects, removing the need to use util.promisify(). import { setTimeout } from 'timers/promises';async function run() { await setTimeout(5000); console.log('Hello, World!');}run(); Added in Node.js v15.0.0 by James Snell (), in this release, they graduate from experimental status to stable. Other recent features The nature of our release process means that new features are released in the ‘Current’ release line approximately every two weeks. For this reason, many recent additions have already been made available in the most recent Node.js 15 releases, but are still relatively new to the runtime. Some of the recently released features in Node.js 15, which will also be available in Node.js 16, include: - Experimental implementation of the standard Web Crypto API - npm 7 (v7.10.0 in Node.js v16.0.0) - Node-API version 8 - Stable AbortControllerimplementation based on the AbortController Web API - Stable Source Maps v3 - Web platform atob ( buffer.atob(data)) and btoa ( buffer.btoa(data)) implementations for compatibility with legacy web platform APIs New compiler and platform minimums Node.js provides pre-built binaries for several different platforms. For each major release, the minimum toolchains are assessed and raised where appropriate. Node.js v16.0.0 will be the first release where we ship prebuilt binaries for Apple Silicon. While we’ll be providing separate tarballs for the Intel ( darwin-x64) and ARM ( darwin-arm64) architectures the macOS installer ( .pkg) will be shipped as a ‘fat’ (multi-architecture) binary. The production of these binaries was made possible thanks to the generosity of MacStadium donating the necessary hardware to the project. On our Linux-based platforms, the minimum GCC level for building Node.js 16 will be GCC 8.3. Details about the supported toolchains and compilers are documented in the Node.js BUILDING.md file. Deprecations As a new major release, it’s also the time where we introduce new runtime deprecations. The Node.js project aims to minimize the disruption to the ecosystem for any breaking changes. The project uses a tool named CITGM (Canary in the Goldmine), to test the impact of any breaking changes (including deprecations) on a large number of the popular ecosystem modules to provide additional insight before landing these changes. Notable deprecations in Node.js 16 include the runtime deprecation of access to process.binding() for a number of the core modules, such as process.binding(‘http_parser’). A new major release is a sum of the efforts of all of the project contributors and Node.js collaborators, so we’d like to use this opportunity to say a big thank you. In particular, we’d like to thank the Node.js Build Working Group for ensuring we have the infrastructure ready to create and test releases and making the necessary upgrades to our toolchains for Node.js 16.
https://www.tefter.io/bookmarks/626716/readable
CC-MAIN-2021-39
en
refinedweb
12514 [details] XS project which shows the linker error described in the Description. Attached is a zip file containing a project that succeeds when built with XS 5.9.5 + Xamarin.iOS 8.10.4.46, but fails the linker step when built with XS 5.9.5 (build 17) + Xamarin.iOS 8.99.3.290. My current setup: === Xamarin Studio === Version 5.9.5 (build 17) Installation UUID: bb12c0a1-844d-4ace-bbe9-508629c49e9a Runtime: Mono 4.0.3 ((detached/d6946b4) GTK+ 2.24.23 (Raleigh theme) Package version: 400030020 === Apple Developer Tools === Xcode 7.0 (8190.6) Build 7A176x === Xamarin.iOS === Version: 8.99.3.290 (Business Edition) Hash: 2628f96 Branch: master Build date: 2015-08-09 22:08:44-0400 === Build Information === Release ID: 509050017 Git revision: 7d17e84374f953da1c64d66d75fc651520528e6e Build date: 2015-07-21 20:36:20-04 Xamarin addins: 45b520f604ef71d1ad2cd3756544d45dac93867e === Operating System === Mac OS X 10.10.4 Darwin ws1799.lrscorp.net 14.4.0 Darwin Kernel Version 14.4.0 Thu May 28 11:35:04 PDT 2015 root:xnu-2782.30.5~1/RELEASE_X86_64 x86_64 This linker error is still a problem with Xamarin.iOS 8.99.4.220. === Xamarin Studio === Version 5.9.5 (build 18) Installation UUID: bb12c0a1-844d-4ace-bbe9-508629c49e9a Runtime: Mono 4.2.0 (explicit/a224653) GTK+ 2.24.23 (Raleigh theme) Package version: 402000179 === Apple Developer Tools === Xcode 7.0 (8208.9) Build 7A192o === Xamarin.iOS === Version: 8.99.4.220 (Business Edition) Hash: 52034fb Branch: master Build date: 2015-08-26 23:50:57-0400 === Build Information === Release ID: 509050018 Git revision: e9148b1cfc781f8e7751f88540c6d65cca5be410 Build date: 2015-08-24 11:44:21-04 Xamarin addins: 3b908d565411f1a7425b67926ede4359e7000172 === Operating System === Mac OS X 10.10.5 Darwin ws1799.lrscorp.net 14.5.0 Darwin Kernel Version 14.5.0 Wed Jul 29 02:26:53 PDT 2015 root:xnu-2782.40.9~1/RELEASE_X86_64 x86_64 This is the build error: > Linking SDK only for assembly /Users/rolf/Downloads/TestApp/TestApp/bin/iPhoneSimulator/Debug//TestApp.exe into /Users/rolf/Downloads/TestApp/TestApp/obj/iPhoneSimulator/Debug/mtouch-cache/PreBuild > MTOUCH: error MT2001: Could not link assemblies. Reason: Can't not find the nested type '<<.ctor>b__2c>d__34' in 'Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache/Timer/<>c__DisplayClass2f full build log: This issue persists in Xamarin.iOS 9.0.0.32. We were able to work around the problem by recompiling the 3.5.1 MvvmCross DownloadCache source with Mono, which generates slightly different types. ➜ (from nuget) monop -p -r:Cirrious.MvvmCross.Plugins.DownloadCache.dll.orig|grep Timer Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer+<>c__DisplayClass2f Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer+<>c__DisplayClass2f+<<.ctor>b__2c>d__34 Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer+<>c__DisplayClass2f+<>c__DisplayClass32 Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+TimerCallback ➜ (built with mono) monop -p -r:Cirrious.MvvmCross.Plugins.DownloadCache.dll|grep Timer Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer+<Timer>c__AnonStorey5 Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer+<Timer>c__AnonStorey5+<Timer>c__async3 Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+Timer+<Timer>c__AnonStorey5+<Timer>c__async3+<Timer>c__AnonStorey4 Cirrious.MvvmCross.Plugins.DownloadCache.MvxFileDownloadCache+TimerCallback Any news on this issue? When can we expect a fix? The issue seems to be with the .mdb file (copied from VS to the Mac). A release builds works fine. *** Bug 33964 has been marked as a duplicate of this bug. *** *** Bug 34063 has been marked as a duplicate of this bug. *** Created attachment 12970 [details] Test case, minimal Just on the small chance it might be useful at some point over the course of this bug's life, the following class is sufficient to reproduce the problem (when it is compiled by Microsoft's C# 5 (VS 2013) `csc.exe` compiler): > public class Class1 > { > public Class1() > { > Action x = async () => { }; > } > } The attached test case includes a `csc`-compiled version of this class in `UnifiedSingleViewIphone1/lib/PortableClassLibrary1.dll` ## Steps to reproduce > $ xbuild /t:Build /p:Platform="iPhone" /p:Configuration="Release" PortableClassLibrary1.sln (You could build on Windows instead if you wanted, but since `PortableClassLibrary1.dll` is pre-compiled, it is sufficient to build the solution on Mac.) ## Regression status: regression in Xamarin.iOS 9.0 BAD: Xamarin.iOS 9.0.1.18 (xcode7-c5: d230615) GOOD: Xamarin.iOS 8.10.5.26 (6757279) The Xamarin developers are creating a follow-up build to fix this issue that will be released within the next few days. (Small additional side note to users seeing this issue with libraries other than MvvmCross: the workaround from comment 4 is only possible if you have the source code of the library that causes the problem. There are almost certainly some closed source NuGet packages and Components that are affected by this issue. The workaround from comment 4 will not be possible with those libraries. Apart from disabling the linker entirely (which will not be suitable for App Store submissions) or downgrading, no general workarounds are known at this time.) Is there a release date for 9.0.1.20? I can't see it on the Beta or Stable channels. The latest version available is 9.0.1.18 Is there any additional information about a possible work around. My application is broken in iOS 9, but I cannot build my app with the linker turned on to submit a fixed version. Please give an ETA of the release or a workaround. This is a major bug. Upgrading to VS 2015 works fine. The catch is if the solution has any shared projects, it may complain about the MSBuild/v14.0.0/8.1/Microsoft.Windows.UI.Xaml.CSharp.targets not found. If that's the case, then just have to copy from v12.0.0/8.1 over and it should work as normal. ## Draft development build with a fix available via contact@xamarin.com The Xamarin.iOS team has now created a draft development build that reverts the change that caused this problem. For anyone who would like access to this draft build, please send an email to contact@xamarin.com and refer to Bug 33124. This draft build is under review by the engineering and QA teams to assess whether it is suitable for publication on the Stable updater channel. If all goes according to plan, it will be available on the Stable channel before the end of the week. Thanks Brendan. I got a copy of the build earlier, and I am not able to build with the linker turned on. For anyone who tries the new build and hits problems, please follow-up with the Support Team via email. You can use contact+xamarinios9.0@xamarin.com or one of the email addresses listed on. Thanks in advance! For bookkeeping, I will note that Xamarin.iOS 9.0.1.20 (that includes the fix for this bug) has now been released to the Stable channel. See comment 21 for any further follow-up on this bug. Thanks!
https://bugzilla.xamarin.com/33/33124/bug.html
CC-MAIN-2021-39
en
refinedweb
Introduction This article will focus on explaining the factory pattern in C#, which is one of the most used patterns in the object oriented programming world. People say that design patterns originally evolved in the civil construction field and were later adopted in the software development field. In simple design, patterns are defined as the solutions to the recurring problems. These are the solutions or program templates that were adopted and successfully implemented by the people for some specific problems. These standard program templates or designs are being adopted by developers to overcome similar problems. A good design pattern makes the code clean and mandates the other programmers to follow it. In this article I will be including the C# language code samples for demonstration in the form of class diagrams. Factory Pattern Gang of Four patterns is the famous design patterns that are being widely followed today. Factory pattern is one among them. These patterns are divided into three sub groups as Creational pattern, Structural Pattern and Behavioral pattern. The factory falls under the creational pattern category. The factory pattern can be sub categorized into two. - Factory Method - Abstract Factory The factory pattern is applicable in scenarios where you don’t want the clients to decide on which of your concrete classes to be used rather than making your code decide which of your concrete class objects to be returned based on the parameters provided by the client. What is Achieved by Using Factory Pattern? - The object creation can be separated from the clients deciding on what objects to be created. - The object creation of the concrete classes can be controlled through values such as: - Parameters passed by the client functions. - Configuration value stored in the .net framework configuration file. - Configuration value stored in the database. - Factory allows you to add new concrete classes and methods without breaking or modifying the existing code. Factory Method In the factory method, define an interface for multiple factory classes and handover the object creation task to those factory classes based. The respective factory class should be loaded then to create the concrete objects. Below is the class diagram of the C# application implementing the factory method pattern. Create an interface named ISnackFactory. namespace FactoryMethod { interface ISnackFactory { ISnack CreateSnack(); } } Create two factory classes IcecreamFactory and ChocolateFactory for creating the objects Icecream and Chocolate respectively. Both of the concrete factories should implement from the ISnackFactory interface. namespace FactoryMethod { class IcecreamFactory : ISnackFactory { public ISnack CreateSnack() { return new Icecream(); } } } namespace FactoryMethod { class ChocolateFactory : ISnackFactory { public ISnack CreateSnack() { return new Chocolate(); } } } Create the interface called ISnack, which should be inherited by both the concrete classes. namespace FactoryMethod { interface ISnack { bool IsRefrigirationRequired { get; } void Eat(); } } Below are the implementations for the concrete classes Icecream and Chocolate. namespace FactoryMethod { class Chocolate : ISnack { public bool IsRefrigirationRequired { get { return false; } } public void Eat() { Console.WriteLine(string.Format("Refrigiration Required? {0}", IsRefrigirationRequired)); Console.WriteLine("Chocolate is sweet and yummy"); } } } namespace FactoryMethod { class Icecream : ISnack { public bool IsRefrigirationRequired { get { return true; } } public void Eat() { Console.WriteLine(string.Format("Refrigiration Required? {0}", IsRefrigirationRequired)); Console.WriteLine("Icecream is cool and soft"); } } } Finally add the code for the client, which does the call to the factories and fetches the object of the concrete class. namespace FactoryMethod { class Program { static void Main(string[] args) { ISnackFactory snackFactory = LoadFactory("icecream"); ISnack snack = snackFactory.CreateSnack(); snack.Eat(); } private static ISnackFactory LoadFactory(string snack) { switch (snack) { case "icecream": return new IcecreamFactory(); default: return new ChocolateFactory(); break; } } } } Figure 1 As the diagram shows, below are the advantages and drawbacks. - Since the objects are created by the respective factories the reference of the concrete classes are eliminated. - The factories can still be inherited to make more specific objects. - This also mandates us to create one factory just for creating the object of one concrete class. Abstract Factory Abstract factory pattern is abstracted further that the object is created from a family or related classes without specifying the concrete class. The client wouldn’t know which factory will return which object from the family. Abstract factory can have multiple factory methods each creating different types of concrete objects belonging to the same family. Considering the code example in the previous section’s C# code, with the abstract factory implementation, the Chocolate factory would know how to create a dark chocolate and milk chocolate. For each concrete class representing a type, belonging to a family would sign the contracts to an interface. Conclusion I hope this article provides an in-depth explanation of the factory patterns and acts as a guide to the C# programmers who are striving to implement it. Please make use of the comments section to punch in your valuable
https://www.codeguru.com/csharp/guide-to-implement-the-factory-pattern-in-c/
CC-MAIN-2021-39
en
refinedweb
For reference, the following is a comprehensive list of all expressions allowed in guards: - comparison operators ( ==, !=, ===, !==, >, >=, <, <=) strictly boolean operators ( and, or, not) - arithmetic binary operators ( +, -, *, /) - arithmetic unary operators ( +, -) - binary concatenation operator ( <>) inand not inoperators (as long as the right-hand side is a list or a range) the following “type-check” functions (all documented in the Kernelmodule): the following guard-friendly functions (all documented in the Kernelmodule): the following handful of Erlang bitwise operations, if imported from the Bitwisemodule: Macros constructed out of any combination of the above guards are also valid guards - for example, Integer.is_even/1. See the section “Defining custom guard expressions” Kernel.defguard/1and Kernel.defguardp/1. A custom guard is always defined based on existing guards. Other constructs are for, with, try/rescue/catch/else, and the Kernel. Our macro would look like this: defmodule MyInteger do defmacro is_even(number) do quote do is_integer(unquote(number)) and rem(unquote(number), 2) == 0 end end. Here’s an example: defmodule MyInteger do defguard is_even(value) when is_integer(value) and rem(value, 2) == 0 end For most cases, the two forms are exactly the same. However, there exists a subtle difference in the case of failing guards, as discussed in the section above. In case of a boolean expression guard, a failed element means the whole guard fails. In case of multiple guards it means the next one will be evaluated. The difference can be highlighted with an example: def multiguard(value) when map_size(value) < 1 when tuple_size(value) < 1 do :guard_passed end def multiguard(_value) do :guard_failed end def boolean(value) when map_size(value) < 1 or tuple_size(value) < 1 do :guard_passed end def boolean(value) do :guard_failed end multiguard(%{}) #=> :guard_passed multiguard({}) #=> :guard_passed boolean(%{}) #=> :guard_passed boolean({}) #=> :guard_failed For cases where guards do not rely on the failing guard behavior the two forms are exactly the same semantically but there are cases where multiple guard clauses may be more aesthetically pleasing.
https://hexdocs.pm/elixir/guards.html
CC-MAIN-2018-43
en
refinedweb
Distributing Human Resources among Software Development Projects 1 - Monica Beasley - 2 years ago - Views: Transcription o be assigned o developmen proecs. The esimaion calculaes he opimal disribuion from he economical poin of view (i.e.: ha which less coss). To accomplish his, we buil an economical model o disribue human resources among proecs in one organisaion. The equaions and algorihms used o do his are presened in he paper. We also briefly presen a ool which does hese esimaions.. Inroducion In he las few years, an increasing demand for personnel qualified in Informaion Technologies has been observed, which grealy exceeds he offer: in 998 here was in Europe 500,000 free posiions in IT, wih an expeced increase of up o 2.4 million people for he year 2004 []. Some repors from he European Commission also confirm his endency [2][3], which is reviewed by daily newspapers almos every monh. The siuaion in he U.S.A. is quie similar: he yearly number of visas for foreigners qualified in Informaion Technologies has been increased o he poin of 300,000 ones, alhough big sofware organizaions have solicied a greaer number of licenses. Wih his siuaion, sofware organizaions mus do an adequae planning of heir human resources among he differen proecs ha hey are carrying ou. An addiional difficuly is o fix he number of developmen proecs o be acceped by a sofware organizaion, when many imes i is known ha he organizaion has no (and will no have) people enough o execue all he proecs in ime and budge. However, he reecion of a sofware proec will cause probably he loss of ha cusomer for he organizaion, and maybe a negaive impac on oher poenial cusomers. Therefore, i is imporan o accep proecs, bu i is also a basic aciviy o disribue he human resources in an adequae way among all of hem, and his one is he main goal of his paper. We propose o make an economical model of he porfolio of sofware developmen proecs, in order o esimae he opimal quaniy of human resources o be devoed o each proec, during every day of he considered period. We have successfully esed he mehod wih several proecs which use differen life cycle models, alhough we have reasons o believe ha i can be easily adapable o sill-non-sudied life cycles. This mehod is a very advanced evoluion of he work presened las year in his same forum [4]. The paper is organized as follows: Secion 2 explains he model we use o represen a porfolio of developmen proecs from an economical poin of view, which includes several equaions. In Secion 3, an algorihm o calculae he disribuion is presened, as This work is par of he MPM and MATIS proecs. MPM is developed wih Aos ODS, S.A. and parially suppored by he Miniserio de Ciencia y Tecnología, Programa de Tecnologías de la Información y las Comunicaciones (IT ); MATIS is parially suppored by he European Union and CICYT (D97-608TIC). 2 well as CRED, a ool we have developed o do esimaions. Secion 4 exposes our conclusions and fuure and immediae work lines. 2. Economical model of a porfolio of developmen proecs One of he primary goals of he sofware organizaion is o obain economical benefis from he developmens. Independenly of he life cycle seleced for a given proec, his one can be considered as he union of several subproecs. Someimes, he consecuion of a subproec will be a mus o sar anoher one (in he waerfall life cycle model, he requiremen analysis comes before he design phase), bu oher imes, he life cycle seleced allows o develop in parallel differen pars of he sysem. Depending on he conrac beween he cusomer and he supplier organizaions, maybe some of hose subproecs mus be delivered according o a previously signed scheduling. or example, a sysem developed using he Unified Process can be delivered in several releases, each one wih some incremen of funcionaliy wih respec o he previous one. These successive releases consiue parial resuls whose dae of deliver, prize and possible sancions by delay may be covenaned in he developmen conrac. 2.. Maximizaion equaion The sofware organizaion mus ake ino accoun all hese facors o do an adequae resource assignmen o each proec and subproec, wih he primary goal of maximize is economical benefis. This can be represened by he following equaion: Max B = I C = i= ( Ii Ci) Eq. In Eq.., B represens he benefis produced by he developmen proecs in he porfolio; I and C respecively are he oal incomes and coss, whereas Ii and Ci represen he incomes and coss of he i-h proec. As every proec is an aggregaion of subproecs, Eq. can be rewrien in his way: Max B i = i= = ( Ii Ci) Eq. 2 In Eq. 2, i is he number of subproecs if he i-h proec, and Ii and Ci are, respecively, he incomes and coss of he -h subproec of he i-h proec (hereinafer, he i subproec). Depending so on he conrac as on he seleced life cycle, someimes Ii will be zero, since no all subproecs will be delivered o he cusomer and hey will no produce any income. In he oher side, Ii is ypically covenaned in he developmen conrac, since he cusomer organizaion needs o know he proec budge before is assignmen o he developmen organizaion. Coss of each subproec are differen: each one akes more or less ime han anoher one, and he people in charge of hem have differen levels and salaries (analyss, programmers, ess engineers...). Anoher influencing facor on he coss of a subproec (and, herefore, on he coss of he complee proec) is he exisence of delays in he dae of delivering and of sancions by delays. Wih his assumpion, coss of he i proec can be represened in his manner: Ci = Ri + Qi Eq. 3 3 Ri are he coss of he resources devoed o he subproec, whereas Qi is he quaniy o be paid by sancions due o delays in he i subproec. We suppose ha a subproec needs only one ype of resource (analyss, for example): oherwise, i could be divided ino more subproecs. Wih his consideraion, Ri can be expressed in his way: Ri = ( Ti h) = Eq. 4 In Eq. 4, Ti is he number of hours of he -h resource needed o execue he i subproec; is he number of differen resources (analyss, programmers, es engineers, ec.); h represens he cos of one hour of resource of he ype. The second variable of Eq. 3, Qi, depends on he scheduled dae of delivering (or scheduled duraion) for he i proec and on he real dae (or real duraion). A a firs sigh, Qi can be expressed in his way: Qi = Si ( Ei Pi) Eq. 5 In Eq. 5, Si is he covenaned sancion for he i subproec (obviously, if such subproec will no be delivered, i will no have any sancion, alhough i could have some influence on nex subproecs). Ei isherealnumberofdaysusedofinishhei proec, whereas Pi is he scheduled duraion, also in days, which depends on he oal number of hours of effor required by he i proec (Ti ). = Pi was esimaed by he sofware organizaion using some esimaion mehod and is covenaned in he conrac. The value of Ei depends on he number of hours devoed o he subproec. In a given subproec, we will reach he value of Ei when he sum of he hours devoed each day o he i proec will be equal o he number of required hours of he -h resource, Ti. In oher words: Ei = m m / e ik = Ti Eq. 6 k = = = In Eq. 6, e ik is he number of hours of he -h resource devoed o he i proec in he k-h day. A his poin, we have characerized all he variables we need o rewrie Eq. : Max B = I C = = i= i i Ii ( Ti h) Si ( Ei Pi) i= = = ( Ii Ci) = i= = ( Ii Ci) = i i= = [ Ii Ri Qi] = Eq. 7 In Eq. 7, he only unknown quaniy is Ei, since he res have previously assigned values Consrains To esimae he opimal disribuion of resources, we mus idenify now he possible consrains o be applied o Eq. 7. 4 The firs consrain represens he fac ha he sum of he hours of a given resource devoed during a concree day o all subproecs canno be greaer han he available number of hours of ha kind of resource in ha day (A, A 2 ): e + e e A [, ], e 2 + e e 2 A 2 [, ],... The subindex is used o group resources by is ype (i.e.: analyss, programmers...). Eq. 8 summarizes he previous equaions: i= eik AK,, k Eq. 8 The nex consrain means ha he ime devoed o a subproec mus be equal o he ime required by i: i = eik = Ti, k, Eq. 9 The following simple consrains deermine he minimal values of he variables ha inervene in Eq. 7: Ii 0 hi 0 Si 0 T e i A ik K 0 0 0,, k, k Eq. 0o Complee model (for reference) Wih all he equaions saed in he previous subsecions, he porfolio of proecs can be modelled wih he following equaion: 5 i Max B = Ii i subec o : eik AK,, k i= eik = Ti, k, = Ii 0 h 0 Si 0 Ti 0, eik 0, k AK 0, k = = = ( Ti h) Si ( Ei Pi) The meaning of each variable in he lef side is he following: = number of proecs in he porfolio i = number of subproecs in he i-h proec Ii = incomes by he i subproec =number of differen ypes of resources Ti = number of hours required of resource ype by he i subproec h = cos of each hour of resource devoed o he i subproec Si =sancionobepaidbyeachdayofdelayinhedeliverofhei subproec Ei = real number of days used o finish he i proec Pi is he scheduled duraion (in days) of he i subproec e = resources devoed he k-h day o he i subproec i A K = available resources of he ype in he k-h day I is imporan o remember he following definiion of Ei (Eq. 6): Ei = m m / e ik = Ti. This equaion is he basis o esimae he opimal disribuion of resources. k = = = 3. Resources esimaion All we need o esimae he resources o be assigned o every developmen proec in he porfolio is o resolve he equaion shown in Secion 2.3. As no all he consrains are lineal (see Eq. 6), he simplex mehod canno be used. Some of he candidae mehods o resolve his kind of equaions are Geneic algorihms or Simulaed annealing. However, as he number of unknown values is very lile, we can find a very good approximaion o he opimal soluion esing all he possible combinaions. 6 3.. Algoryhm A coarse version of he algorihm could be he following: max =-MAXDOUBLE for k=0 o maxduraion for i= o do for = o for e=0 o A[,k] e[i,,k,]=e if Benefi()>max hen max=benefi() saveparameers(i,,k,e) endif end end end end igure. irs version of he algoryhm. Obviously, he code in igure can be very opimized hrough he applicaion of he consrains and some oher rules. or example: We will assign resources (assignmen of values o he e arrayinigure) when: o o The i proec needs hem In he k-h day, here are available resources of he ype which is being esed. We will calculae he benefi (Eq. 7) when he curren combinaion which is being esed implies he finalizaion of all he proecs in he porfolio. The e array can be a vecor (wih he consequen memory saving): only some ransformaion rules are required CRED: a ool for esimaing resources disribuion In order o do he auomaic esimaion of resources, we have developed CRED, a ool which helps in he calculus of he resources disribuion. I implemens an opimised version of he algorihm previous. igure 2 shows one he CRED s screens, in he momen of doing an esimaion. igure 2. CRED during he calculus of a disribuion. 7 CRED allows he racking of he proecs, o change he assignaion, ec. In his manner, several kind of repors, saisics and graphics can be obained, in order o analyze effors, resources, ec., and compare hem agains a baseline. igure 3 shows he screen used o change he emporary availabiliy of resources. igure 3. Changing resources availabiliy. igure 4 shows a view of he class diagrams used o build he ool: resources needed by a subproec are deermined by he associaion wih he Resourceecessiy class. Porfolio 0..* -mproecs Proec -m Subproecs..* Subproec -mrequired Resourceecessiy -mresource Resource..* igure 4. Concepual diagram Conex of CRED CRED is now being adaped o allow is full inegraion in MATIS [5], an inegral environmen for managing mainenance proecs. MATIS uses XMI o exchange informaion among he ools ha compose i. In his conex, CRED should be capable of sending is informaion o MATIS in a sandardized forma based on XML, which is one of our curren works. 8 4. Conclusions and fuure work This paper has presened a mehod and a ool o esimae he disribuion of resources o be assigned o developmen proecs. The esimaion aemps o assure an opimal disribuion from he economical poin of view. Boh he mehod and he ool mus be modified in order o ake ino accoun: Dependencies (i.e.: a subproec canno begin before he end of anoher one) Indirec coss (i.e.: he organizaion canno accep a proec because i will no have available resources) 5. References [] Bourke, T.M. (999). Seven maor ICT companies oin he European Commission o work owards closing he skills gap in Europe (Press Release). Available also a (ov., 7, 2000): hp:// [2] European Commission (999). The compeiiveness of European enerprises in he face of globalisaion. Available a (ov, 7, 2000): hp://europa.eu.in/comm/research/pdf/com98-78en.pdf [3] European Commission (2000). Employmen in Europe Available a (ov., 7, 2000): hp://europa.eu.in/comm/employmen_social/empl&esf/docs/empleurope2000_en.pdf [4] Polo, M., Piaini, M. and Ruiz,. (2000). Planning he non-planneable mainenance. Proec Conrol, The Human acor: Proceedings of he ESCOM-SCOPE 2000 Combined Conference. Munich, Germany. [5] Ruiz,., Piaini,. and Polo, M. (200). An Inegraed Environmen for Managing Sofware Mainenance Proecs. In van Bon (Ed.): World Class IT Service Managemen Guide, 2 nd ediion. Addison Wesley.) Reporting to Management CHAPTER 31 Reporing o Managemen Inroducion The success or oherwise of any business underaking depends primarily on earning revenue ha would generae sufficien resources for sound growh. To achieve his objec Capacity Planning and Performance Benchmark Reference Guide v. 1.8 Environmenal Sysems Research Insiue, Inc., 380 New York S., Redlands, CA 92373-8100 USA TEL 909-793-2853 FAX 909-307-3014 Capaciy Planning and Performance Benchmark Reference Guide v. 1.8 Prepared by:
http://docplayer.net/17876001-Distributing-human-resources-among-software-development-projects-1.html
CC-MAIN-2018-43
en
refinedweb
Read a card register #include <hw/dcmd_sim_sdmmc.h> #define DCMD_SDMMC_CARD_REGISTER __DIOTF(_DCMD_CAM, _SIM_SDMMC + 4, struct _sdmmc_card_register) The command reads a card register. The SDMMC_CARD_REGISTER structure is defined as follows: typedef struct _sdmmc_card_register { uint32_t action; uint32_t type; uint32_t address; uint32_t length; uint32_t rsvd[2]; /* uint8_t data[ length ]; variable length data */ } SDMMC_CARD_REGISTER; The members include: It's set to one of the following on return: Set the action and type members. The filled-in structure.
http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.neutrino.devctl/topic/sdmmc/dcmd_sdmmc_card_register.html
CC-MAIN-2018-43
en
refinedweb
I keep getting "SyntaxError: Unexpected token else. I've looked around on the forums and I still can't figure out what's wrong. Please help. Thanks. var compare = function(choice1, choice2) { if (choice1 === choice2) { return("The result is a tie!"); } else if (choice1 === "rock") { if (choice2 === "scissors") { return ("rock wins"); }} else { return ("paper wins"); } else if (choice1 === "paper") { if (choice2 === "rock"){ return("paper wins"); }} else{ return("sciscors win"); } }
https://discuss.codecademy.com/t/trouble-with-rock-paper-scissors-pt-7/71334
CC-MAIN-2018-43
en
refinedweb
Question: I wanted to bring this challenge to the attention of the stackoverflow community. The original problem and answers are here. BTW, if you did not follow it before, you should try to read Eric's blog, it is pure wisdom. Summary: Write a function that takes a non-null IEnumerable and returns a string with the following characteristics: - If the sequence is empty!) As you can see even our very own Jon Skeet (yes, it is well known that he can be in two places at the same time) has posted a solution but his (IMHO) is not the most elegant although probably you can not beat its performance. What do you think? There are pretty good options there. I really like one of the solutions that involves the select and aggregate methods (from Fernando Nicolet). Linq is very powerful and dedicating some time to challenges like this make you learn a lot. I twisted it a bit so it is a bit more performant and clear (by using Count and avoiding Reverse): public static string CommaQuibbling(IEnumerable<string> items) { int last = items.Count() - 1; Func<int, string> getSeparator = (i) => i == 0 ? string.Empty : (i == last ? " and " : ", "); string answer = string.Empty; return "{" + items.Select((s, i) => new { Index = i, Value = s }) .Aggregate(answer, (s, a) => s + getSeparator(a.Index) + a.Value) + "}"; } Solution:1 How about this approach? Purely cumulative - no back-tracking, and only iterates once. For raw performance, I'm not sure you'll do better with LINQ etc, regardless of how "pretty" a LINQ answer might be. using System; using System.Collections.Generic; using System.Text; static class Program { public static string CommaQuibbling(IEnumerable<string> items) { StringBuilder sb = new StringBuilder('{'); using (var iter = items.GetEnumerator()) { if (iter.MoveNext()) { // first item can be appended directly sb.Append(iter.Current); if (iter.MoveNext()) { // more than one; only add each // term when we know there is another string lastItem = iter.Current; while (iter.MoveNext()) { // middle term; use ", " sb.Append(", ").Append(lastItem); lastItem = iter.Current; } // add the final term; since we are on at least the // second term, always use " and " sb.Append(" and ").Append(lastItem); } } } return sb.Append('}').ToString(); } static void Main() { Console.WriteLine(CommaQuibbling(new string[] { })); Console.WriteLine(CommaQuibbling(new string[] { "ABC" })); Console.WriteLine(CommaQuibbling(new string[] { "ABC", "DEF" })); Console.WriteLine(CommaQuibbling(new string[] { "ABC", "DEF", "G", "H" })); } } Solution:2 Inefficient, but I think clear. public static string CommaQuibbling(IEnumerable<string> items) { List<String> list = new List<String>(items); if (list.Count == 0) { return "{}"; } if (list.Count == 1) { return "{" + list[0] + "}"; } String[] initial = list.GetRange(0, list.Count - 1).ToArray(); return "{" + String.Join(", ", initial) + " and " + list[list.Count - 1] + "}"; } If I was maintaining the code, I'd prefer this to more clever versions. Solution:3 If I was doing a lot with streams which required first/last information, I'd have thid extension: [Flags] public enum StreamPosition { First = 1, Last = 2 } public static IEnumerable<R> MapWithPositions<T, R> (this IEnumerable<T> stream, Func<StreamPosition, T, R> map) { using (var enumerator = stream.GetEnumerator ()) { if (!enumerator.MoveNext ()) yield break ; var cur = enumerator.Current ; var flags = StreamPosition.First ; while (true) { if (!enumerator.MoveNext ()) flags |= StreamPosition.Last ; yield return map (flags, cur) ; if ((flags & StreamPosition.Last) != 0) yield break ; cur = enumerator.Current ; flags = 0 ; } } } Then the simplest (not the quickest, that would need a couple more handy extension methods) solution will be: public static string Quibble (IEnumerable<string> strings) { return "{" + String.Join ("", strings.MapWithPositions ((pos, item) => ( (pos & StreamPosition.First) != 0 ? "" : pos == StreamPosition.Last ? " and " : ", ") + item)) + "}" ; } Solution:4 Here as a Python one liner >>> f=lambda s:"{%s}"%", ".join(s)[::-1].replace(',','dna ',1)[::-1] >>> f([]) '{}' >>> f(["ABC"]) '{ABC}' >>> f(["ABC","DEF"]) '{ABC and DEF}' >>> f(["ABC","DEF","G","H"]) '{ABC, DEF, G and H}' This version might be easier to understand >>> f=lambda s:"{%s}"%" and ".join(s).replace(' and',',',len(s)-2) >>> f([]) '{}' >>> f(["ABC"]) '{ABC}' >>> f(["ABC","DEF"]) '{ABC and DEF}' >>> f(["ABC","DEF","G","H"]) '{ABC, DEF, G and H}' Solution:5 Here's a simple F# solution, that only does one forward iteration: let CommaQuibble items = let sb = System.Text.StringBuilder("{") // pp is 2 previous, p is previous let pp,p = items |> Seq.fold (fun (pp:string option,p) s -> if pp <> None then sb.Append(pp.Value).Append(", ") |> ignore (p, Some(s))) (None,None) if pp <> None then sb.Append(pp.Value).Append(" and ") |> ignore if p <> None then sb.Append(p.Value) |> ignore sb.Append("}").ToString() (EDIT: Turns out this is very similar to Skeet's.) The test code: let Test l = printfn "%s" (CommaQuibble l) Test [] Test ["ABC"] Test ["ABC";"DEF"] Test ["ABC";"DEF";"G"] Test ["ABC";"DEF";"G";"H"] Test ["ABC";null;"G";"H"] Solution:6 I'm a fan of the serial comma: I eat, shoot, and leave. I continually need a solution to this problem and have solved it in 3 languages (though not C#). I would adapt the following solution (in Lua, does not wrap answer in curly braces) by writing a concat method that works on any IEnumerable: function commafy(t, andword) andword = andword or 'and' local n = #t -- number of elements in the numeration if n == 1 then return t[1] elseif n == 2 then return concat { t[1], ' ', andword, ' ', t[2] } else local last = t[n] t[n] = andword .. ' ' .. t[n] local answer = concat(t, ', ') t[n] = last return answer end end Solution:7 This isn't brilliantly readable, but it scales well up to tens of millions of strings. I'm developing on an old Pentium 4 workstation and it does 1,000,000 strings of average length 8 in about 350ms. public static string CreateLippertString(IEnumerable<string> strings) { char[] combinedString; char[] commaSeparator = new char[] { ',', ' ' }; char[] andSeparator = new char[] { ' ', 'A', 'N', 'D', ' ' }; int totalLength = 2; //'{' and '}' int numEntries = 0; int currentEntry = 0; int currentPosition = 0; int secondToLast; int last; int commaLength= commaSeparator.Length; int andLength = andSeparator.Length; int cbComma = commaLength * sizeof(char); int cbAnd = andLength * sizeof(char); //calculate the sum of the lengths of the strings foreach (string s in strings) { totalLength += s.Length; ++numEntries; } //add to the total length the length of the constant characters if (numEntries >= 2) totalLength += 5; // " AND " if (numEntries > 2) totalLength += (2 * (numEntries - 2)); // ", " between items //setup some meta-variables to help later secondToLast = numEntries - 2; last = numEntries - 1; //allocate the memory for the combined string combinedString = new char[totalLength]; //set the first character to { combinedString[0] = '{'; currentPosition = 1; if (numEntries > 0) { //now copy each string into its place foreach (string s in strings) { Buffer.BlockCopy(s.ToCharArray(), 0, combinedString, currentPosition * sizeof(char), s.Length * sizeof(char)); currentPosition += s.Length; if (currentEntry == secondToLast) { Buffer.BlockCopy(andSeparator, 0, combinedString, currentPosition * sizeof(char), cbAnd); currentPosition += andLength; } else if (currentEntry == last) { combinedString[currentPosition] = '}'; //set the last character to '}' break; //don't bother making that last call to the enumerator } else if (currentEntry < secondToLast) { Buffer.BlockCopy(commaSeparator, 0, combinedString, currentPosition * sizeof(char), cbComma); currentPosition += commaLength; } ++currentEntry; } } else { //set the last character to '}' combinedString[1] = '}'; } return new string(combinedString); } Solution:8 Another variant - separating punctuation and iteration logic for the sake of code clarity. And still thinking about perfomrance. Works as requested with pure IEnumerable/string/ and strings in the list cannot be null. public static string Concat(IEnumerable<string> strings) { return "{" + strings.reduce("", (acc, prev, cur, next) => acc.Append(punctuation(prev, cur, next)).Append(cur)) + "}"; } private static string punctuation(string prev, string cur, string next) { if (null == prev || null == cur) return ""; if (null == next) return " and "; return ", "; } private static string reduce(this IEnumerable<string> strings, string acc, Func<StringBuilder, string, string, string, StringBuilder> func) { if (null == strings) return ""; var accumulatorBuilder = new StringBuilder(acc); string cur = null; string prev = null; foreach (var next in strings) { func(accumulatorBuilder, prev, cur, next); prev = cur; cur = next; } func(accumulatorBuilder, prev, cur, null); return accumulatorBuilder.ToString(); } F# surely looks much better: let rec reduce list = match list with | [] -> "" | head::curr::[] -> head + " and " + curr | head::curr::tail -> head + ", " + curr :: tail |> reduce | head::[] -> head let concat list = "{" + (list |> reduce ) + "}" Solution:9 Disclaimer: I used this as an excuse to play around with new technologies, so my solutions don't really live up to the Eric's original demands for clarity and maintainability. Naive Enumerator Solution (I concede that the foreach variant of this is superior, as it doesn't require manually messing about with the enumerator.) public static string NaiveConcatenate(IEnumerable<string> sequence) { StringBuilder sb = new StringBuilder(); sb.Append('{'); IEnumerator<string> enumerator = sequence.GetEnumerator(); if (enumerator.MoveNext()) { string a = enumerator.Current; if (!enumerator.MoveNext()) { sb.Append(a); } else { string b = enumerator.Current; while (enumerator.MoveNext()) { sb.Append(a); sb.Append(", "); a = b; b = enumerator.Current; } sb.AppendFormat("{0} and {1}", a, b); } } sb.Append('}'); return sb.ToString(); } Solution using LINQ public static string ConcatenateWithLinq(IEnumerable<string> sequence) { return (from item in sequence select item) .Aggregate( new {sb = new StringBuilder("{"), a = (string) null, b = (string) null}, (s, x) => { if (s.a != null) { s.sb.Append(s.a); s.sb.Append(", "); } return new {s.sb, a = s.b, b = x}; }, (s) => { if (s.b != null) if (s.a != null) s.sb.AppendFormat("{0} and {1}", s.a, s.b); else s.sb.Append(s.b); s.sb.Append("}"); return s.sb.ToString(); }); } Solution with TPL This solution uses a producer-consumer queue to feed the input sequence to the processor, whilst keeping at least two elements buffered in the queue. Once the producer has reached the end of the input sequence, the last two elements can be processed with special treatment. In hindsight there is no reason to have the consumer operate asynchronously, which would eliminate the need for a concurrent queue, but as I said previously, I was just using this as an excuse to play around with new technologies :-) public static string ConcatenateWithTpl(IEnumerable<string> sequence) { var queue = new ConcurrentQueue<string>(); bool stop = false; var consumer = Future.Create( () => { var sb = new StringBuilder("{"); while (!stop || queue.Count > 2) { string s; if (queue.Count > 2 && queue.TryDequeue(out s)) sb.AppendFormat("{0}, ", s); } return sb; }); // Producer foreach (var item in sequence) queue.Enqueue(item); stop = true; StringBuilder result = consumer.Value; string a; string b; if (queue.TryDequeue(out a)) if (queue.TryDequeue(out b)) result.AppendFormat("{0} and {1}", a, b); else result.Append(a); result.Append("}"); return result.ToString(); } Unit tests elided for brevity. Solution:10 Late entry: public static string CommaQuibbling(IEnumerable<string> items) { string[] parts = items.ToArray(); StringBuilder result = new StringBuilder('{'); for (int i = 0; i < parts.Length; i++) { if (i > 0) result.Append(i == parts.Length - 1 ? " and " : ", "); result.Append(parts[i]); } return result.Append('}').ToString(); } Solution:11 public static string CommaQuibbling(IEnumerable<string> items) { int count = items.Count(); string answer = string.Empty; return "{" + (count==0) ? "" : ( items[0] + (count == 1 ? "" : items.Range(1,count-1). Aggregate(answer, (s,a)=> s += ", " + a) + items.Range(count-1,1). Aggregate(answer, (s,a)=> s += " AND " + a) ))+ "}"; } It is implemented as, if count == 0 , then return empty, if count == 1 , then return only element, if count > 1 , then take two ranges, first 2nd element to 2nd last element last element Solution:12 Here's mine, but I realize it's pretty much like Marc's, some minor differences in the order of things, and I added unit-tests as well. using System; using NUnit.Framework; using NUnit.Framework.Extensions; using System.Collections.Generic; using System.Text; using NUnit.Framework.SyntaxHelpers; namespace StringChallengeProject { [TestFixture] public class StringChallenge { [RowTest] [Row(new String[] { }, "{}")] [Row(new[] { "ABC" }, "{ABC}")] [Row(new[] { "ABC", "DEF" }, "{ABC and DEF}")] [Row(new[] { "ABC", "DEF", "G", "H" }, "{ABC, DEF, G and H}")] public void Test(String[] input, String expectedOutput) { Assert.That(FormatString(input), Is.EqualTo(expectedOutput)); } //codesnippet:93458590-3182-11de-8c30-0800200c9a66 public static String FormatString(IEnumerable<String> input) { if (input == null) return "{}"; using (var iterator = input.GetEnumerator()) { // Guard-clause for empty source if (!iterator.MoveNext()) return "{}"; // Take care of first value var output = new StringBuilder(); output.Append('{').Append(iterator.Current); // Grab next if (iterator.MoveNext()) { // Grab the next value, but don't process it // we don't know whether to use comma or "and" // until we've grabbed the next after it as well String nextValue = iterator.Current; while (iterator.MoveNext()) { output.Append(", "); output.Append(nextValue); nextValue = iterator.Current; } output.Append(" and "); output.Append(nextValue); } output.Append('}'); return output.ToString(); } } } } Solution:13 How about skipping complicated aggregation code and just cleaning up the string after you build it? public static string CommaQuibbling(IEnumerable<string> items) { var aggregate = items.Aggregate<string, StringBuilder>( new StringBuilder(), (b,s) => b.AppendFormat(", {0}", s)); var trimmed = Regex.Replace(aggregate.ToString(), "^, ", string.Empty); return string.Format( "{{{0}}}", Regex.Replace(trimmed, ", (?<last>[^,]*)$", @" and ${last}")); } UPDATED: This won't work with strings with commas, as pointed out in the comments. I tried some other variations, but without definite rules about what the strings can contain, I'm going to have real problems matching any possible last item with a regular expression, which makes this a nice lesson for me on their limitations. Solution:14 I quite liked Jon's answer, but that's because it's much like how I approached the problem. Rather than specifically coding in the two variables, I implemented them inside of a FIFO queue. It's strange because I just assumed that there would be 15 posts that all did exactly the same thing, but it looks like we were the only two to do it that way. Oh, looking at these answers, Marc Gravell's answer is quite close to the approach we used as well, but he's using two 'loops', rather than holding on to values. But all those answers with LINQ and regex and joining arrays just seem like crazy-talk! :-) Solution:15 I don't think that using a good old array is a restriction. Here is my version using an array and an extension method: public static string CommaQuibbling(IEnumerable<string> list) { string[] array = list.ToArray(); if (array.Length == 0) return string.Empty.PutCurlyBraces(); if (array.Length == 1) return array[0].PutCurlyBraces(); string allExceptLast = string.Join(", ", array, 0, array.Length - 1); string theLast = array[array.Length - 1]; return string.Format("{0} and {1}", allExceptLast, theLast) .PutCurlyBraces(); } public static string PutCurlyBraces(this string str) { return "{" + str + "}"; } I am using an array because of the string.Join method and because if the possibility of accessing the last element via an index. The extension method is here because of DRY. I think that the performance penalities come from the list.ToArray() and string.Join calls, but all in one I hope that piece of code is pleasent to read and maintain. Solution:16 I think Linq provides fairly readable code. This version handles a million "ABC" in .89 seconds: using System.Collections.Generic; using System.Linq; namespace CommaQuibbling { internal class Translator { public string Translate(IEnumerable<string> items) { return "{" + Join(items) + "}"; } private static string Join(IEnumerable<string> items) { var leadingItems = LeadingItemsFrom(items); var lastItem = LastItemFrom(items); return JoinLeading(leadingItems) + lastItem; } private static IEnumerable<string> LeadingItemsFrom(IEnumerable<string> items) { return items.Reverse().Skip(1).Reverse(); } private static string LastItemFrom(IEnumerable<string> items) { return items.LastOrDefault(); } private static string JoinLeading(IEnumerable<string> items) { if (items.Any() == false) return ""; return string.Join(", ", items.ToArray()) + " and "; } } } Solution:17 You can use a foreach, without LINQ, delegates, closures, lists or arrays, and still have understandable code. Use a bool and a string, like so: public static string CommaQuibbling(IEnumerable items) { StringBuilder sb = new StringBuilder("{"); bool empty = true; string prev = null; foreach (string s in items) { if (prev!=null) { if (!empty) sb.Append(", "); else empty = false; sb.Append(prev); } prev = s; } if (prev!=null) { if (!empty) sb.Append(" and "); sb.Append(prev); } return sb.Append('}').ToString(); } Solution:18 public static string CommaQuibbling(IEnumerable<string> items) { var itemArray = items.ToArray(); var commaSeparated = String.Join(", ", itemArray, 0, Math.Max(itemArray.Length - 1, 0)); if (commaSeparated.Length > 0) commaSeparated += " and "; return "{" + commaSeparated + itemArray.LastOrDefault() + "}"; } Solution:19 Here's my submission. Modified the signature a bit to make it more generic. Using .NET 4 features ( String.Join() using IEnumerable<T>), otherwise works with .NET 3.5. Goal was to use LINQ with drastically simplified logic. static string CommaQuibbling<T>(IEnumerable<T> items) { int count = items.Count(); var quibbled = items.Select((Item, index) => new { Item, Group = (count - index - 2) > 0}) .GroupBy(item => item.Group, item => item.Item) .Select(g => g.Key ? String.Join(", ", g) : String.Join(" and ", g)); return "{" + String.Join(", ", quibbled) + "}"; } Solution:20 There's a couple non-C# answers, and the original post did ask for answers in any language, so I thought I'd show another way to do it that none of the C# programmers seems to have touched upon: a DSL! (defun quibble-comma (words) (format nil "~{~#[~;~a~;~a and ~a~:;~@{~a~#[~; and ~:;, ~]~}~]~}" words)) The astute will note that Common Lisp doesn't really have an IEnumerable<T> built-in, and hence FORMAT here will only work on a proper list. But if you made an IEnumerable, you certainly could extend FORMAT to work on that, as well. (Does Clojure have this?) Also, anyone reading this who has taste (including Lisp programmers!) will probably be offended by the literal "~{~#[~;~a~;~a and ~a~:;~@{~a~#[~; and ~:;, ~]~}~]~}" there. I won't claim that FORMAT implements a good DSL, but I do believe that it is tremendously useful to have some powerful DSL for putting strings together. Regex is a powerful DSL for tearing strings apart, and string.Format is a DSL (kind of) for putting strings together but it's stupidly weak. I think everybody writes these kind of things all the time. Why the heck isn't there some built-in universal tasteful DSL for this yet? I think the closest we have is "Perl", maybe. Solution:21 Just for fun, using the new Zip extension method from C# 4.0: private static string CommaQuibbling(IEnumerable<string> list) { IEnumerable<string> separators = GetSeparators(list.Count()); var finalList = list.Zip(separators, (w, s) => w + s); return string.Concat("{", string.Join(string.Empty, finalList), "}"); } private static IEnumerable<string> GetSeparators(int itemCount) { while (itemCount-- > 2) yield return ", "; if (itemCount == 1) yield return " and "; yield return string.Empty; } Solution:22 return String.Concat( "{", input.Length > 2 ? String.Concat( String.Join(", ", input.Take(input.Length - 1)), " and ", input.Last()) : String.Join(" and ", input), "}"); Solution:23 I have tried using foreach. Please let me know your opinions. private static string CommaQuibble(IEnumerable<string> input) { var val = string.Concat(input.Process( p => p, p => string.Format(" and {0}", p), p => string.Format(", {0}", p))); return string.Format("{{{0}}}", val); } public static IEnumerable<T> Process<T>(this IEnumerable<T> input, Func<T, T> firstItemFunc, Func<T, T> lastItemFunc, Func<T, T> otherItemFunc) { //break on empty sequence if (!input.Any()) yield break; //return first elem var first = input.First(); yield return firstItemFunc(first); //break if there was only one elem var rest = input.Skip(1); if (!rest.Any()) yield break; //start looping the rest of the elements T prevItem = first; bool isFirstIteration = true; foreach (var item in rest) { if (isFirstIteration) isFirstIteration = false; else { yield return otherItemFunc(prevItem); } prevItem = item; } //last element yield return lastItemFunc(prevItem); } Solution:24 Here are a couple of solutions and testing code written in Perl based on the replies at. #!/usr/bin/perl use 5.14.0; use warnings; use strict; use Test::More qw{no_plan}; sub comma_quibbling1 { my (@words) = @_; return "" unless @words; return $words[0] if @words == 1; return join(", ", @words[0 .. $#words - 1]) . " and $words[-1]"; } sub comma_quibbling2 { return "" unless @_; my $last = pop @_; return $last unless @_; return join(", ", @_) . " and $last"; } is comma_quibbling1(qw{}), "", "1-0"; is comma_quibbling1(qw{one}), "one", "1-1"; is comma_quibbling1(qw{one two}), "one and two", "1-2"; is comma_quibbling1(qw{one two three}), "one, two and three", "1-3"; is comma_quibbling1(qw{one two three four}), "one, two, three and four", "1-4"; is comma_quibbling2(qw{}), "", "2-0"; is comma_quibbling2(qw{one}), "one", "2-1"; is comma_quibbling2(qw{one two}), "one and two", "2-2"; is comma_quibbling2(qw{one two three}), "one, two and three", "2-3"; is comma_quibbling2(qw{one two three four}), "one, two, three and four", "2-4"; Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com EmoticonEmoticon
http://www.toontricks.com/2018/05/tutorial-eric-lipperts-challenge-acomma.html
CC-MAIN-2018-43
en
refinedweb
by Sam Selikoff November 1, 2016 Last month I hosted Ember NYC's project night, and the audience and I built a sticky chatbox component together. My goal wasn't to end up with a prefabricated solution for everyone to use; instead, I wanted to work through the problem as a group, discussing our thought process and opinions as we went along. I built this component last summer as part of a client prototype and found it an interesting and fun challenge. It looks like this: Normally, when a scrollable <div> gets new content, its scrollbar is unaffected. You can see on the left that to keep reading, the user must scroll the chatbox each time a new message comes in. The sticky chatbox on the right is different. When the user is scrolled to the bottom, new messages appear and "push" the chat log up, so the user doesn't have to keep scrolling. But, if the user scrolls back up to read through older messages, the chat box doesn't snap to the bottom. The scrollbar stays in place, so the user isn't interrupted while reading through the log. This is the behavior found in most modern chat apps like Slack and Twitch. When writing complex components, I like to start by identifying the various states in which the component can exist. Identifying state can be tricky; sometimes I find it helpful to try to explain how the interface should behave as if I were talking to a non-technical business or product person. How might we talk about this component together? When the user is scrolled to the bottom, new messages should show up. If they scroll up to read old messages, the chat should stay still. From this plain-English description, the states of the component jump out at us: Of course, we can think of other states in which the component could exist -- for example, if the user was scrolled to the top -- but those states aren't relevant here, since they don't affect behavior. The only states that affect behavior are the two we've listed. Given these possible states, I gave my component an isScrolledToBottom boolean property that I could use to adjust the component's scrolling behavior. I then needed to update this property every time the state of the component changed. How might I achieve this? The first thing that came to mind was an addon I had used in previous projects: DockYard's Ember In Viewport. This addon lets you render a component to the screen that fires an action whenever that component enters or exits the viewport. Sounds like just what I needed. If I rendered this component at the end of the chat list, I'd then be able to know whenever the user reached the bottom, and set the state accordingly. If they started scrolling up to read old messages, the component would leave the viewport, and I'd be able to use another action to update the state. So, I wrote a simple {{in-viewport}} component using the mixin from the addon. You can see the full implementation of that component in the Twiddle below. I then used it in my component's template: <!-- chat-box.hbs --> <ul> {{#each messages as |message|}} <li>{{message.text}}</li> {{/each}} {{in-viewport did-enter=(action (mut isScrolledToBottom) true) did-exit=(action (mut isScrolledToBottom) false)}} </ul> All that remained was to write the component's behavior. If the user was scrolled to the bottom, the component's <ul> should scroll down each time a new message was rendered. The scrolling should happen after the new message was appended to the DOM — sounds like a perfect use case for the didRender hook: // chat-box.js import Ember from 'ember'; export default Ember.Component.extend({ didRender() { this._super(...arguments); if (this.get('isScrolledToBottom')) { this.$('ul')[0].scrollTop = this.$('ul')[0].scrollHeight; } } }); Et voilà! Our chat box lets the user read through the backlog, and then autoscrolls when they're all caught up. To my delight, several members of the group from the project night suggested a completely different strategy. The idea was simple: check the state of the scrollbar the moment a new message arrives. If the scrollbar was at the bottom, autoscroll the chatbox; otherwise, leave it alone. We still needed to store the state of the scrollbar, so we kept the isScrolledToBottom property; but now, we needed to set this property whenever the component was about to re-render. It took a bit of experimentation. We started out by trying to calculate the scroll position at the beginning of the didRender hook. The problem here is that in didRender, the chatbox had already been updated -- so even if the user had been scrolled to the bottom, the fact that the new message had already been appended meant they no longer were. Eventually we realized that we needed to calculate the scroll position just before the new message was added to the DOM. We pulled up the guides for a component's re-render lifecycle hooks: Both willUpdate and willRender seemed like good candidates. Looking at the documentation for each, we found that willRender is called on both initial render and re-renders, while willUpdate is only called on re-renders. Since we only cared about new messages, we went with willUpdate. After a little more experimentation, we were able to write a formula to calculate the state of the scrollbar. We then used this formula to set the component's state in willUpdate: import Ember from 'ember'; export default Ember.Component.extend({ willUpdate() { this._super(...arguments); let box = this.$('ul')[0]; let isScrolledToBottom = box.scrollTop + box.clientHeight === box.scrollHeight; this.set('isScrolledToBottom', isScrolledToBottom); }, didRender() { this._super(...arguments); if (this.get('isScrolledToBottom')) { this.$('ul')[0].scrollTop = this.$('ul')[0].scrollHeight; } } }); Now the state would be correct even after the new messages were appended, so the code in didRender worked just as before. Cool! Here's the Twiddle: After going through both solutions, Luke Melia pointed out that spying on scroll behavior is quite expensive (which is why the Ember In Viewport addon makes you explicitly opt-in to this behavior). He said that using the first approach could significantly affect performance, especially on mobile. In many cases, then, the willUpdate solution would be the superior choice. For our demo app, the willUpdate solution was sufficient — the only time we used the isScrolledToBottom property was when re-rendering the list. If you open the Twiddle, however, you'll notice that the state of our component can "lie": If you scroll the chatbox after a new message has been rendered, you'll notice that the isScrolledToBottom property won't change right away; in fact, it won't update to reflect the "true" state of the scrollbar until the next message arrives. If we were to add additional behavior to this component that relied on isScrolledToBottom being accurate, we could run into some issues. How might this happen? You could imagine updating the interface to show an indicator that new messages had arrived. You'd want that indicator to clear once the user had read through all the messages. In this case, there could be a long time between when the user had caught up and when the next message arrived, so the interface could fall "out of sync" with the actual state of the user's behavior. This is just one example of something that could affect your decision. Different approaches often favor competing goals, like performance versus accuracy. It's up to you to decide which strategy is most appropriate based on the unique priorities and needs of your application. Building the sticky chatbox as a group helped us all see the problem with a bit more clarity. We learned: willRenderand willUpdatehooks are a great place to take measurements or perform visual calculations on a component's DOM before Ember re-renders it. didRenderhook is useful if you need to update a component's DOM in response to a re-render, for example after the component receives new attrs. So reference the API docs often, keep pairing, and if you're in New York be sure to join us at Ember NYC's next Project Night!
https://embermap.com/notes/63-building-a-sticky-chatbox
CC-MAIN-2018-43
en
refinedweb
Avoiding .Net — which, of course, contradicts the rules of COM. So while RCWs indeed mostly follow the rules of COM reference counting, they obviously do not do follow the rules in their entirety. Once I spotted this difference, it was easy to find an explanation of this very topic by Ian Griffiths, which is worth quoting [reformatted by me]: […] And by the way, the reference counting is kind of similarish to COM, in that, as you point out, things get addrefed when they are passed to you. But they’re actually not the same. Consider this C# class that implements a COM interface:public class Foo : ISomeComInterface { public void Spong(ISomeOtherComInterface bar) { bar.Quux(); } } Suppose that Spong is the only member of ISomeComInterface. (Other than the basic IUnknown members, obviously.) This Spong method is passed another COM interface as a parameter. And let’s suppose that some non-.NET client is going to call this Spong method on our .NET object via COM interop. The reference counting rules for COM are not the same as those for the RCW in this case. For COM, the rule here is that the interface is AddRefed for you before it gets passed in, and is Released for you after you return. In other words, you are not required to do any AddRefing or Releasing on a COM object passed to you in this way *unless* you want to keep hold of a reference to it after the call returns. In that case you would AddRef it. Compare this with the RCW reference count. As with COM, the RCW’s reference count will be incremented for you when the parameter is passed in. But unlike in COM, it won’t be decremented for you automatically when you return. You could sum up the difference like this: - COM assumes you won’t be holding onto the object reference when the method returns - The RCW assumes you *will* be holding onto the object reference when the method returns. So if you don’t plan to keep hold of the object reference, then the method should really look like this:public void Spong(ISomeOtherComInterface bar) { bar.Quux(); Marshal.ReleaseComObject(bar); } According to the COM rules of reference counting, this would be a programming error. But with RCWs, it’s how you tell the system you’re not holding onto the object after the method returns. Pretty counter-intuitive… Plus, I am not aware of any official documentation on this topic.
https://jpassing.com/2009/03/
CC-MAIN-2018-43
en
refinedweb
Documentation General Use Resetting the render path Adding Another Write Node Profile Promoting Write Knobs Render Farm Integration Convert Shotgun write nodes to standard Nuke write nodes Enabling the convert menu options Using the API to Convert Bootstrap the Shotgun Pipeline Toolkit engine using init.py 1. Pre-flight submission script 2. Shotgun authentication 3. The init.py script Deadline-specific steps Technical Details get_write_nodes() get_node_name() get_node_profile_name() get_node_render_path() get_node_render_files() get_node_render_template() get_node_publish_template() get_node_proxy_render_path() get_node_proxy_render_files() get_node_proxy_render_template() get_node_proxy_publish_template() get_node_published_file_type() generate_node_thumbnail() reset_node_render_path() is_node_render_path_locked() convert_to_write_nodes() convert_from_write_nodes() process_placeholder_nodes() Installation, Updates and Development Configuration Options Release Notes History This app contains a custom Write Node gizmo for Nuke, abstracting away the file system paths from the user, allowing them to focus on just naming the particular output. Shotgun takes care of the rest! This app is typically used in conjunction with the Publish app and if you install the publish app for nuke, you most likely want to install this one too! Documentation The Nuke Write Node App provides a custom Shotgun Write node which makes it easy to standardise the location where images are rendered to. It can be configured for each environment. In addition to the path, the configuration will also determine the render format to be used. General Use In order to use the Shotgun Write Node, save your script as a Toolkit work file first and then create a new node via the Nuke menu. This will create a node which looks similar to a normal write node: Rather than entering a path by hand, you just specify an output name and Toolkit will then compute the rest of the path automatically. You can see the computed path in the UI and open up the location on disk by clicking the Show in File System button. The location where the renders are written to depends on the Toolkit configuration. The renders will be versioned and the version number will always follow the current nuke script version which will be incremented automatically when you publish using Multi Publish. Resetting the render path The Write Node will cache the current path so that it is still valid if the file is opened outside a Toolkit Work Area. Occasionally, this can mean that the path becomes out of sync and 'locked'. If the render path is locked then renders created with this Write Node cannot be published. To reset a render path, either version-up the scene using the Work-files app's 'Version Up Scene' command or select the Write node individually and in the properties, click Reset Path: Adding Another Write Node Profile The Shotgun Write Node wraps Nuke's built-in write node, so any format supported by Nuke can be used with the app and additional nodes can be added via configuration. The simplest way to start is to set up a simple Nuke write node with the parameters you want. For the example, let's imagine you are doing 16-bit tifs with LZW compression. If you look at your Nuke script in a text editor, the write node will look something like this: ... Write { file /Users/ryanmayeda/Desktop/test.%04d.tif file_type tiff datatype "16 bit" compression LZW checkHashOnRead false name Write1 xpos -145 ypos -61 } ... The text will tell you what the parameter names and values you need are. In this case it's datatype and compression. Next, go into your environment configuration (for example: /path/to/pipeline/config/env/shot_step.yml) and find the area where the tk-nuke-writenode app is configured. Add another Write Node, with these two parameters in the settings: ... tk-nuke-writenode: location: {name: tk-nuke-writenode, type: app_store, version: v0.1.6} template_script_work: nuke_shot_work ... write_nodes: - file_type: exr ... - file_type: dpx ... - file_type: tiff name: Mono Tif publish_template: nuke_shot_render_pub_mono_tif render_template: nuke_shot_render_mono_tif proxy_publish_template: null proxy_render_template: null settings: {datatype: 16 bit, compression: LZW} tank_type: Rendered Image tile_color: [] promote_write_knobs: [] ... The updated configuration will then result in the additional Shotgun Write Node appearing in Nuke: Note: Be sure to add any new templates (e.g. nuke_shot_render_mono_tif) to your templates.yml file which can be found in your project's configuration ( <configuration root>/config/core/templates.yml). Another example, showing how to add a Shotgun Write Node that outputs to JPEG with 0.5 compression and a 4:2:2 sub-sampling is shown below. This profile also makes use of the "promote_write_knobs" option to promote the jpeg quality knob to the gizmo's user interface. This allows the profile to set the default value for quality, but also provide the user the slider to alter that setting themselves: ... tk-nuke-writenode: ... write_nodes: - file_type: jpeg name: Compressed JPEG publish_template: nuke_shot_render_pub_jpeg render_template: nuke_shot_render_jpeg proxy_publish_template: null proxy_render_template: null settings: {_jpeg_quality: 0.5, _jpeg_sub_sampling: "4:2:2"} tank_type: Rendered Image tile_color: [] promote_write_knobs: [_jpeg_quality] ... Promoting Write Knobs As shown in the profile example above, knobs from the encapsulated write node can be promoted to become visible in the Shotgun Write Node's properties panel. The promoted write knobs are defined as part of a profile and are identified by knob name. Multiple knobs may be promoted. Render Farm Integration It's common for studios to use a render farm that runs job management tools such as Deadline, which typically launch Nuke directly when rendering. Because these tools do not launch Nuke in a Shotgun-aware way (e.g., via Desktop or the tank command), the Shotgun write node does not have the information it needs to run. We offer a couple options to get around this limitation. Convert Shotgun write nodes to standard Nuke write nodes A simple solution is to convert the Shotgun write nodes to regular Nuke write nodes before sending the script to be rendered. There are two options 1. you can enable and use the convert menu options, 2. you can use the API convert methods on the app. Enabling the convert menu options There is a configuration option called show_convert_actions that can be added to the app's settings in the environment yml files. When you add the setting show_convert_actions: True, the Convert SG Write Nodes to Write Nodes... and Convert Write Nodes back to SG format... menu options become available. However if you have any Shotgun Write node profiles defined that promote write knobs, then these menu options will be hidden even if the show_convert_actions is set to True. This is because at present the convert back functionality does not support promoted knobs. Using the API to Convert There is a convert_to_write_nodes() method that performs this conversion available on the tk-nuke-writenode app. To convert all Shotgun write nodes in a script to regular Nuke write nodes, run the following code inside Nuke: import sgtk eng = sgtk.platform.current_engine() app = eng.apps["tk-nuke-writenode"] if app: app.convert_to_write_nodes() This will remove the Shotgun write nodes from the scene, so our suggested workflow is that you make a copy of the script to be rendered, perform the conversions on the copy, and submit the copy to the farm. The scene no longer has any Toolkit references and thus Toolkit is not required when the nuke script opened on the render farm. Note: There is a corresponding convert_from_write_nodes() method available, but to ensure data integrity, we recommend that it only be used for debugging and not as part of your pipeline. Bootstrap the Shotgun Pipeline Toolkit engine using init.py Nuke will run any init.py scripts found in its plugin path. This option consists of adding code to init.py that will perform a minimal bootstrap of the tk-nuke engine, so that Shotgun write nodes behave as expected on the render farm. There are a few steps to this workflow: First, a “pre-flight” submission script that runs in a Shotgun-aware Nuke session gets data that will be used to set the environment for your farm job. Next, additional environment variables used to authenticate the Shotgun session on the render farm are set by render farm administrators. Finally, an init.py with the Shotgun bootstrap code is placed in a location where the Nuke session on the render farm will detect and run it, bootstrapping the tk-nuke engine within the session, and allowing the Shotgun write nodes to function properly. 1. Pre-flight submission script This approach assumes that artists are submitting farm jobs within a Shotgun-aware session of Nuke. At submission time, the following code should run. It pulls environment information like Toolkit context, Pipeline Configuration URI, Toolkit Core API location, etc. from the current Nuke session to populate a dictionary that will be passed to the render job, where it will be used to set environment variables. # Populating environment variables from running Nuke: # current_engine = sgtk.platform.current_engine() launcher = sgtk.platform.create_engine_launcher( current_engine.sgtk, current_engine.context, current_engine.name ) # Get a dictionary with the following keys: # SHOTGUN_SITE: The Shotgun site url # SHOTGUN_ENTITY_TYPE: The Shotgun Entity type, e.g. Shot # SHOTGUN_ENTITY_ID: The Shotgun Entity id, e.g. 1234 environment = launcher.get_standard_plugin_environment() # Get the current pipeline config descriptor environment["SHOTGUN_CONFIG_URI"] = os.path.join(current_engine.sgtk.configuration_descriptor.get_uri(),"config") # Get the current tk-core installation path environment["SHOTGUN_SGTK_MODULE_PATH"] = sgtk.get_sgtk_module_path() Once you’ve gathered this information, you can pass it to your render submission tool. This process will vary depending on the render farm management system you’re using. Consult your farm management system documentation for more information on how to write render submission scripts. 2. Shotgun authentication The bootstrap API’s ToolkitManager requires a script user in order to initialize. In our example, we’re assuming that your site name, script user, and script key exist as environment variables on the farm machine. Typically this is managed by the render farm administrator. Here are the environment variable names our code is expecting, with sample values: SHOTGUN_SITE = “” SHOTGUN_FARM_SCRIPT_USER = “sg_api_user” SHOTGUN_FARM_SCRIPT_KEY = “xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx” For more information on authentication, see our developer documentation. A note on securing your script user: It’s good practice to lock down the script user you use on the farm so that it doesn’t have admin-level permissions. You can learn more about API user permissions here. 3. The init.py script At this point, Toolkit environment data is being passed from the render submission tool, and authentication data is in environment variables on the render farm machine. The final piece to bootstrapping Toolkit within your render job is to place the following example init.py code in Nuke’s plugin path, so that Nuke will launch it at startup time. (See the Foundry’s documentation on startup scripts for more details.) # This script shows how a Toolkit as a plugin approach could be used to bootstrap # Toolkit in Nuke on the render farm. # import sys import os # If your render nodes can access the same tk-core install location as # artist workstations, retrieve its path from the environment and ensure # it is in the PYTHONPATH TK_CORE_PATH = os.environ["SHOTGUN_SGTK_MODULE_PATH"] if TK_CORE_PATH not in sys.path: sys.path.append(TK_CORE_PATH) # If your render nodes don’t have access to the Toolkit Core API in the same filesystem location as artist workstations, you have to make sure that it is available in the PYTHONPATH, so that render nodes can import it. An easy way # to install tk-core in a centralized location is with pip. You can read more # about it here: # import sgtk # Authenticate using a pre-defined script user. sa = sgtk.authentication.ShotgunAuthenticator() # Here we retrieve credentials from environment variables, assuming a script user # will be used when rendering. This should be typically be handled by your render # farm administrators. SG_SITE_URL = os.environ["SHOTGUN_SITE"] SG_SCRIPT_USER = os.environ["SHOTGUN_FARM_SCRIPT_USER"] SG_SCRIPT_KEY = os.environ["SHOTGUN_FARM_SCRIPT_KEY"] user = sa.create_script_user( api_script=SG_SCRIPT_USER, api_key=SG_SCRIPT_KEY, host=SG_SITE_URL ) # Start up a Toolkit Manager with our script user mgr = sgtk.bootstrap.ToolkitManager(sg_user=user) # Set the base pipeline configuration from the environment variable: mgr.base_configuration = os.environ["SHOTGUN_CONFIG_URI"] # Disable Shotgun lookup to ensure that we are getting the Pipeline # Configuration defined in SHOTGUN_CONFIG_URI, and not a dev or override # Pipeline Configuration defined in Shotgun. mgr.do_shotgun_config_lookup = False # Set a plugin id to indicate to the bootstrap that we are starting # up a standard Nuke integration mgr.plugin_id = "basic.nuke" # Retrieve the Toolkit context from environment variables: # SHOTGUN_SITE: The Shotgun site url # SHOTGUN_ENTITY_TYPE: The Shotgun Entity type, e.g. Shot # SHOTGUN_ENTITY_ID: The Shotgun Entity id, e.g. 1234 sg_entity = mgr.get_entity_from_environment() # Now start up the Nuke engine for a given Shotgun Entity nuke_engine = mgr.bootstrap_engine("tk-nuke", entity=sg_entity) You may need to extend this if your configuration is more complex than this example or if you are passing a Python script to the command line using the -t flag instead of a nuke ( .nk) script. Deadline-specific steps Deadline can copy Nuke scripts to a temporary location when rendering. This will cause problems with Toolkit as the files will no longer be in a disk location that it recognizes. To disable this behavior and load the scripts from their original location: - In Deadline, navigate to Tools > Configure Plugin (In the super user mode) - Disable the 'Enable Path Mapping' option Technical Details The following API methods are available on the App: get_write_nodes() Return a list of all Shotgun Write Nodes in the current scene. list app.get_write_nodes() Parameters & Return Value - Returns: list- a list of Toolkit Write nodes found in the scene Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> nodes = app.get_write_nodes() get_node_name() Return the name of the specified Write Node. string get_node_name( node node) Parameters & Return Value nodenode - the Write Node to query - Returns: string- the name of the node. Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> nodes = app.get_write_nodes() >>> app.get_node_name(nodes[0]) get_node_profile_name() Get the name of the configuration profile used by the specified Write node. string get_node_profile_name( node node) Parameters & Return Value nodenode - the Write Node to query - Returns: string- the profile name for this Write Node as defined by the configuration Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> nodes = app.get_write_nodes() >>> app.get_node_profile_name(nodes[0]) get_node_render_path() Get the path that the specified Write node will render images to. string get_node_render_path( node node) Parameters & Return Value nodenode - the Write Node to query - Returns: string- the render path for this node Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> nodes = app.get_write_nodes() >>> app.get_node_render_path(nodes[0]) get_node_render_files() Get a list of all image files that have been rendered for the specified Write Node. list get_node_render_files( node node) Parameters & Return Value nodenode - the Write Node to query - Returns: list- a list of the image files rendered by this Write node. Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> nodes = app.get_write_nodes() >>> app.get_node_render_files(nodes[0]) get_node_render_template() Get the template that determines where rendered images will be written to for the specified Write Node as defined in the configuration. template get_node_render_template( node node) Parameters & Return Value nodenode - the Write Node to query - Returns: template- the render template this node is configured to use. Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> nodes = app.get_write_nodes() >>> app.get_node_render_template(nodes[0]) get_node_publish_template() Get the template that determines where rendered images will be published to for the specified Write Node as defined in the configuration. template get_node_publish_template( node node) Parameters & Return Value nodenode - the Write Node to query - Returns: template- the publish template this node is configured to use. Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> nodes = app.get_write_nodes() >>> app.get_node_publish_template(nodes[0]) get_node_proxy_render_path() Get the path that the specified Write node will render proxy images to. string get_node_proxy_render_path( node node) Parameters & Return Value nodenode - the Write Node to query - Returns: string- the proxy render path for this node Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> nodes = app.get_write_nodes() >>> app.get_node_proxy_render_path(nodes[0]) get_node_proxy_render_files() Get a list of all proxy image files that have been rendered for the specified Write Node. list get_node_proxy_render_files( node node) Parameters & Return Value nodenode - the Write Node to query - Returns: list- a list of the proxy image files rendered by this Write node. Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> nodes = app.get_write_nodes() >>> app.get_node_proxy_render_files(nodes[0]) get_node_proxy_render_template() Get the template that determines where proxy rendered images will be written to for the specified Write Node as defined in the configuration. If there is no proxy render template configured for the specified node then this will return the regular render template instead. template get_node_proxy_render_template( node node) Parameters & Return Value nodenode - the Write Node to query - Returns: template- the proxy render template this node is configured to use. Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> nodes = app.get_write_nodes() >>> app.get_node_proxy_render_template(nodes[0]) get_node_proxy_publish_template() Get the template that determines where proxy rendered images will be published to for the specified Write Node as defined in the configuration. If there is no proxy publish template configured for the specified node then this will return the regular publish template instead. template get_node_proxy_publish_template( node node) Parameters & Return Value nodenode - the Write Node to query - Returns: template- the proxy publish template this node is configured to use. Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> nodes = app.get_write_nodes() >>> app.get_node_proxy_publish_template(nodes[0]) get_node_published_file_type() Get the Published File Type to be used when Published files are created for images rendered by the specified Write node as defined in the configuration. string get_node_published_file_type( node node) Parameters & Return Value nodenode - the Write Node to query - Returns: string- the Published File Type this node is configured to use Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> nodes = app.get_write_nodes() >>> app.get_node_published_file_type(nodes[0]) generate_node_thumbnail() Generate a thumbnail for the specified Write Node. This will render a frame from the middle of the sequence with a maximum size of 800x800px to a temp file (.png). It is the responsibility of the caller to clean up this file when it is no longer needed. string generate_node_thumbnail( node node) Parameters & Return Value nodenode - the Write Node to query - Returns: string- the path to the rendered thumbnail image on disk Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> nodes = app.get_write_nodes() >>> app.generate_node_thumbnail(nodes[0]) reset_node_render_path() Reset the render path for the specified Write Node to match the current script. None reset_node_render_path( node node) Parameters & Return Value nodenode - the Write Node to query - Returns: None- no value is returned Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> nodes = app.get_write_nodes() >>> app.reset_node_render_path(nodes[0]) is_node_render_path_locked() Determine if the render path for the specified Write node is locked or not. bool is_node_render_path_locked( node node) Parameters & Return Value nodenode - the Write Node to query - Returns: bool- True if the render path is locked, otherwise False Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> nodes = app.get_write_nodes() >>> app.is_node_render_path_locked(nodes[0]) convert_to_write_nodes() Convert all Shotgun write nodes found in the current Script to regular Nuke Write nodes. Additional toolkit information will be stored on user knobs named 'tk_*' None convert_to_write_nodes() Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> app.convert_to_write_nodes() convert_from_write_nodes() Convert all regular Nuke Write nodes that have previously been converted from Shotgun Write nodes, back into Shotgun Write nodes. None convert_from_write_nodes() Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> app.convert_from_write_nodes() process_placeholder_nodes() Convert any placeholder nodes into full Shotgun Write Nodes. This is primarily used to convert placeholder nodes created by the Hiero Toolkit script exporter when a script is first opened in Nuke. None process_placeholder_nodes() Example >>> import sgtk >>> eng = sgtk.platform.current_engine() >>> app = eng.apps["tk-nuke-writenode"] >>> app.process_placeholder_nodes()-writen.37 or higher to use this. - You need Engine version v0.2.3 or higher to use this. Configuration Below is a summary of all the configuration settings used. These settings need to be defined in the environment file where you want to enable this App or Engine. template_script_work Type: template Description: A reference to a template which locates a nuke script work file on disk. This is used to drive the version and optionally the name of renders. write_nodes Type: list Description: A list of dictionaries in which you define the Shotgun Write nodes that are supported in this configuration. Each dictionary entry needs to have the following keys: 'name' - a descriptive name for this node. 'file_type' - the file type to use for the renders (exr, cin, dpx etc). This will be passed to the Nuke write node when rendering. 'settings' - configuration settings for the given file type, as a dictionary. This too will be passed to the write node when rendering. Next, you need two entries named 'render_template' and 'publish_template' - these control the locations where data is written to at various stages of the workflow. These templates need to include the 'version' field and can optionally include the fields 'name', 'width', 'height' (which reflect the image resolution of the render) and 'output' (which differenciates different write nodes). If you are doing stereo rendering and want to use Nuke's %V flag, include an 'eye' field. This will be replaced by %V in the paths when the Shotgun Write node computes them. Finally, you need the templates 'proxy_render_template' and 'proxy_publish_template' these have the same requirements as the regular render and pubish templates but are used when rendering in proxy mode. If these are missing (set to null) then the regular templates will be used instead. show_convert_actions Type: bool Description: Setting this to True will add actions for converting to and from Shotgun write nodes. The actions will be displayed as options in the Shotgun -> Context menu.4.1 2018-Sep-14 A cleaning up of code in relation to the convert menu item feature added previously. v1.4.0 2018-Sep-14 Adds a new configurable option that exposes the API's convert methods as UI actions in the Shotgun Menu. Details: -. v1.3.2 2018-Jun-04 Addresses problems that arise from opening files in Nuke via the "recent files" menu option, plus a promoted write knobs fix. Details: When opening files from the "recent files" menu in Nuke, if the writenode app isn't configured in the current environment, the app would fail to properly set profiles on the gizmos on read. In addition, a flaw in the logic related to initializing promoted write knob values on launch was discovered and fixed. v1.3.1 2018-Jan-19 Reverts a portion of the v1.3.0 release related to resolving a ValueError raised by Nuke in some situation on file open. Details: The portion of the 1.3.0 release that resolved a PythonObject ValueError from Nuke in some situation on file read has been reverted. The change required to fix that issue had unforeseen consequences for some clients when rendering in a farm context. v1.3.0 2018-Jan-15 Fixes preservation of promoted knob settings between sessions, and adds a new public method to the API. Details: - Promoted write knobs maintain their settings across Nuke sessions. - Addressing the "a pythonObject is not attached to a node" ValueError exception using a generator. - Adds a new public method to the API for creating node instances programmatically. v1.2.0 Adds a Python tab that's the equivalent of the encapsulated Write node's. v1.1.6 Re-caches the work file template on context change. v1.1.5 Resolves bug related to node profile refreshing during context change. This resulted in the profile menu items in Nuke not being properly rebuilt. v1.1.4 Fixes render path regression. Handles menu rebuilding on context change. v1.1.3 Resolves a ValueError raised by Nuke's colorspaces.py module in Nuke 10.0. Details: Recomputing the render paths after restoring cached knob settings caused the exception to be raised. Rather than recompute the paths, we're now removing the the file and proxy knob settings from the cache before applying it. This means we don't need to do the recompute and therefore we no longer see the error. v1.1.2 Updates output name during paste operations. Details: When copy/pasting a node that has the "use node name" option checked on to force the output name to match the node name, the output name will now be updated to match the new name of the pasted node. v1.1.1 Resolves regression related to file and proxy paths. Details: The file and proxy paths of the encapsulated write node lost their connections to the top-level Gizmo's cached paths. This resulted in paths with baked-in frame numbers instead of typical %04d tokens. v1.1.0 Adds support for Nuke Studio context changes. v1.0.19 Bug fix for promoted write nobs not retaining their user-set values after relaunch. Details: Promoted write knobs would revert to their preset's default value for that knob when the file was reloaded. These knobs will now be treated as other top-level tk-nuke-writenode knobs that are user controlled such that the user-set values are no longer lost. v1.0.18.17.16.15 Stopped un-handled exception from being raised when computing the render path in an un-saved script v1.0.14 Fix for incorrect proxy path setting when converting to Write nodes Details: If no render_proxy_template is specified in the settings, this fix ensures that when converting to Nuke Write nodes, the proxy value is empty. Previously it was being set to the render_template path. v1.0.13 Bug fixes Details: - Fixed problem where the conversion from Nuke write nodes back to Shotgun write nodes wasn't updating the profile drop-down menu correctly. This made it look like the wrong profile was being used. - Fixed an issue that stopped the 'Reset Path' button and warning being visible when the render path no longer matches the cached render path! This made it difficult to reset the render path if for some reason it had become locked. v1.0.12 Fixed render path update when it contains width or height. Details - There was a bug that was stopping the render path from updating the width and height keys when the render size was changed for a script. This has now been fixed. v1.0.11 Fixed various initialisation issues. Details - Found and fixed a few edge cases that would throw 'PythonObject is not attached to a node' errors in Nuke, resulting in the app not being loaded and the script being left in an error'd state. v1.0.10 Top level knobs/settings are no longer reset on file open! Details: - Previously specifying a setting for a profile that is represented by a top-level knob on the Shotgun Write node would result in that knob being reset every time the script is loaded. This has bnow been fixed so that the knob will only get set at node creation or when the profile on the node is changed. v1.0.9 add date fields YYYY, DD, MM for use in render path templates Details: allows users to use the template fields YYYY, MM, DD in their render path template. These fields will be ignored in the validation that checks to see if the render path has changed since date fields kind of imply that the path will change. v1.0.8 File type & settings are now retained after node creation Details: Previously this information would be lost after the script was first saved and never re-populated from the profile in the environment. This is now fixed so that the file type and settings are always updated to match the current profile! v1.0.7 User experience improvements Details: - Removed the 'Shotgun Write' prefix for the node creation commands to make it easier to create nodes from the tab shortcut menu - Added a tile_color setting to the profile which can be used to set the background colour in the node graph for a node based on the profile v1.0.6 Renamed 'channel' to 'output' as channel conflicts with other Nuke terminology Details: - The channel key in templates is still supported for backwards compatibility but the new 'output' key can now be used instead. - The behaviour remains unchanged. v1.0.5 Added debug functions to the app to forcibly enable/disable path evaluation. v1.0.4 Performance improvements during rendering and playback v1.0.3 Fixed issue when rendering in proxy mode v1.0.2 Updated to require core v0.14.37 v1.0.1 Fixed issue when rendering in proxy mode v1.0.0 Numerous changes, improvements and fixes Details: - Added ability to switch between profiles without having to delete/recreate the node - New conversion methods on the app to convert to/from regular Nuke Write nodes - Added support for separate proxy render & publish templates - Addition of copy-path button - Improvements to the behaviour of the channel knob including addition of the option to drive it from the node name - Nuke scripts are no longer left in a modified state immediately after opening - Fixed issues with threading evaluation during rendering that could cause frames not to render - Added missing knobs from regular Write nodes (e.g. Reading) - Removed the counter-intuitive auto-node-naming feature - new Shotgun Write nodes are now just named ShotgunWrite# - Disabling a Shotgun Write node now behaves as expected - Fixed issues stopping cacheing from working effectively in Nuke 7+ - Relaxed template field requirements to allow additional fields to be used from the work file template in the render path - Improved warning messages/handling when something is wrong v0.1.11 Added support for ShotgunWriteNodePlaceholder Details: Any ShotgunWriteNodePlaceholders will be converted to Shotgun Write Nodes using the config setting for passed in name and channel name. v0.1.10 Channel knob is now hidden if the render template doesn't contain a channel key v0.1.9 Spaces in Shotgun Write Node names are now replaced with underscores to conform to the Nuke standard v0.1.8 Removed unnecessary template validation from app that was forcing 'name' to be required in the render and publish templates v0.1.7 Fixed a bug preventing the jump to file system to work with UNC paths on Windows. v0.1.6 Addition of get_node_published_file_type method to app v0.1.5 Renames and support for the new name Sgtk. v0.1.4 - Tank Write Nodes now use the current render & publish templates specified in the configuration. Previously they were using the templates specified when the node was first created. - The local node render resolution is now used when constructing the render path rather than the global resolution. v0.1.3 Added interface to app to check if a write node's render path is currently locked v0.1.2 First release to the Tank Store. 22 Comments think it should be: Hi Henry, You'll notice with the latest docs that this has been addressed - the command you highlighted is also only for core versions earlier than v0.13. With v0.13 and the new 'tank' command, the process of installing and updating apps has changed. See the documentation above for further details. Thanks The write node doesn't take proxy into account. We have scripts setup as 4K, with proxy on the read nodes at 2K. For internal reviews we don't need the 4K so we render out the proxies. The Tank write node output path doesn't take this into account so it will output into a 4K folder. Are there any plans to add the support for proxy mode? Also we find there's a couple of options which are missing like "read file" which compers use quite often on regular write nodes. Hi Benoit, Thanks for the feedback - I've submitted a bug for the proxy issue and a feature request for the missing options. How urgent are these issues for you? Thanks Alan Hi Alan I have tried adding another arite node profile, i tried a simple Jpg, and i could not get it to work. 1st i couldn't find in /path/to/pipeline/config/env/shot.yml any entry like "tk-nuke-writenode:" i was able to find this entry /path/to/pipeline/config/env/shot_step.yml file. Here i follow the instructions on how to create 16-bit tiffs with LZW compression, and even that did not work. Would it be possible to get a little more info on how to do this Thanks Bruno Hi Bruno, My apologies as it looks like our documentation was slightly out of date and was missing some settings that we recently added to the write nodes. This then stops the app from loading at all! I've updated the example and have also added another example specifically for jpeg support. One other thing to mention is that the examples above use new templates that aren't included in our default configuration. This means that you will need to add the new templates to your templates.yml file and potentially adjust the folder schema if you introduce new folders with the template (although this won't be a problem if you just change the file name/suffix). I've included a link above to the templates documentation if you need it but please ask if you get stuck. Again, if the templates don't exist it will prevent the app from loading! Thanks Alan Oh, and I also changed it to reference the shot\_step.ymlenvironment as that is the standard environment that the Shotgun Write Node app is installed into! Thanks Alan Thank you so much! This worked perfectly. I am trying to setup a write node, and the artist told me they needed targa, 8 bit, RLE. So I setup my node like this: name: TGA, 8 bit publish_template: nuke_shot_render_pub_mono_tga render_template: nuke_shot_render_mono_tga proxy_publish_template: null proxy_render_template: null settings: {datatype: 8 bit, compression: RLE} tank_type: Rendered Image Everything is good except the datatype, it says: Shotgun Error: Invalid setting for file format targa - datatype: 8 bit. This will be ignored. Can you point me out to where the file types are defined and what are the valid settings for each? I could not find this valuable information in the documentation. same question for exr 8 bit integer, it's not accepting my setting, setting it to 16 bit half Hi Kevin, You can only provide settings that are available for the file type you've chosen in a regular Nuke Write node. For tga, it looks like it only allows you to set the compression and for exr, the only two settings available for the data type are '16 bit half' & '32 bit float'. You can check this by creating a regular Write node and playing with the file type - you'll see it will dynamically create the extra knobs that are available for each type in the property editor as you change the type. The settings item in the configuration for a Shotgun Write node should then be a dictionary of Nuke knob name to values (as you've done with the compression for tga's) which you can determine using the method described in the docs above (by saving the nuke script out with the Write node set to the settings you want and then opening it in a text editor). Does that help?\ Thanks Alan yeah that helps a lot. Actually we figured out with him that the settings he was giving me simply do not exist! thank for the help, today is the first day we have the shotgun pipeline toolkit up and running in nuke, and the publish with the rendered files works! woohoo :) That's brilliant - well done :) Let us know if you have any more questions? Thanks Alan I'm not seeing any write icons in the Shotgun node menu for write node profiles with the latest release. From what I can tell, everything in the code seems to be working. I'm running into this bug with version 8.0v3 of Nuke. Mitch Hi Mitch, Do you get any errors in the script editor when the engine is starting? Also, if you go into Work Area Info and look at the active apps, is the write node app listed there? If that all looks ok, can you try enabling debug_logging for the tk-nuke engine in the environment you are running and paste the output - might be worth creating a ticket with this info in as it'll be easier for us to track as well. Thanks Alan Hi Alan, I don't get any errors and the app is definitely loaded and active. I wish I could send you guys the debug info, but it seems that when I launch Nuke from toolkit, the cmd console window that usually launches is never present. I'll create a ticket for this issue. Mitch Hi Alan, For rendering on the farm we first Convert the nodes to regular write nodes. Which works fine. However, when we convert back to Tank Write nodes, it fails to pick up the previously set profile. And it just reverts back to the first profile it finds. Is this a bug? Cheers, David Hi David, Yes, this is most likely a bug - could you submit a ticket to toolkitsupport@shotgunsoftware.com please? I'd be generally cautious about that workflow though as the conversion to regular write nodes and back again may not cope with everything that is possible within a Nuke script (this is almost impossible to test as well!) and there may well be complex cases that it doesn't accommodate. Instead, I'd recommend the conversion to regular write nodes as a throw-away step that is just used for rendering and any future work is continued from the script that contains the Toolkit Write nodes. The conversion back to Toolkit Write nodes is more for debugging purposes. I'd still like to fix this if it is a bug though! Thanks Alan Hi Alan, Hence is there any recommended way to submit the current nuke script to deadline for rendering? Seems it's impossible for Nuke to process the TankWrite node if the script is saved to another location. Hey, Doing some work in NukeStudio, and it seems that if you try to load a .nk file into NukeStudio as a BinItem, it normally reads the write paths from the script 'metadata'. From trial and error, it seems that when you have a standard write node in the script, Nuke writes to the script file a line at the top containing a list of write nodes and their output paths, which NukeStudio then uses to create comp tracks. The Shotgun Write node doesn't do this, so it can't work out where to look for output. I wondered if anyone is looking at this? Daniel, if you're submitting the script to deadline, you can use the convert_to_write_nodes method before the script is sent. Hi Andrew, May I know how you revert those write nodes back to Shotgun Write Nodes afterwards? AFAIK except reloading the Nuke Script, there's no reliable way to do so. It's very inconvenient for artists to do this repeatedly during the shot production. Or do you have other workflow submitting your script to Deadline? Hello! I'm currently trying to add a new write node that outputs a quicktime movie and am hitting the following error when loading Nuke : The Template 'my_template_name' referred to by the setting 'write_nodes' does not validate. The following problems were reported: The mandatory field 'SEQ' is missing Now obviously for an image sequence, adding {SEQ} (which resolves to '%04d') is vital, but with a quicktime movie it's not. Is there anyway that I can tell Shotgun to allow a template that doesn't contain {SEQ}, or alternatively has anyone got an example of a working, quicktime write node? For reference, this is my current write node setup in shot_step.yml : write_nodes: - file_type: mov name: My New Write Node publish_template: my_template_name_for_publish render_template: my_template_name proxy_publish_template: null proxy_render_template: null settings: {meta_codec: AVdn} tank_type: Quicktime tile_color: [] promote_write_knobs: [] And my template paths in templates.yml look as follows : my_template_name: definition: '@shot_root/work/mov/{Shot}_{Step}_v{version_four}.mov' root_name: 'primary' my_template_name_for_publish: definition: '@shot_root/publish/nuke/mov/{Shot}_{Step}_v{version_four}.mov' root_name: 'primary' Thanks!
https://support.shotgunsoftware.com/hc/en-us/articles/219032848?page=1
CC-MAIN-2018-43
en
refinedweb
> David Abrahams wrote: > >> That sounds great. I think we're almost there. Thanks for your work! > > My pleasure! Here is a new version, with the latest changes. I think > that it is now good enough to commit. Regards, Nicodemus. gcc 3.2.2 on Linux only stopped complaining after I inserted a "typename" directive as shown: >// Copyright David Abrahams 2002. Permission to copy, use, >// modify, sell and distribute this software is granted provided this >// copyright notice appears in all copies. This software is provided >// "as is" without express or implied warranty, and with no claim as >// to its suitability for any purpose. >#ifndef REGISTER_PTR_TO_PYTHON_HPP >#define REGISTER_PTR_TO_PYTHON_HPP > >#include <boost/python/pointee.hpp> >#include <boost/python/object.hpp> > >namespace boost { namespace python { > >template <class P> >void register_ptr_to_python(P* = 0) >{ > typedef typename boost::python::pointee<P>::type X; ^^^^^^^^ > objects::class_value_wrapper< > P > , objects::make_ptr_instance< > X > , objects::pointer_holder<P,X> > > > >(); >} > >}} // namespace boost::python > >#endif // REGISTER_PTR_TO_PYTHON_HPP > Regards, Oliver
https://mail.python.org/pipermail/cplusplus-sig/2003-June/004213.html
CC-MAIN-2016-36
en
refinedweb
Md. Marufuzzaman wrote:something like a general approach nothing but using ssl and data encryption cause when you transmit over the wire data could be readable. What do you think on that. static bool prime_6(int a, int B)/>/> { int x; x = 2; // this number will divide number a and b to check if they are prime numbs. if (a < 1000 && b < 1000) //setting an exception { if (a % x != 0 && x < a) // Checking if number a is a prime number. { } if (b % x != 0 && x < B)/>/> // Checking if number b is a prime number. { x++; // Adding +1 to x return is_sexy(a, B)/>/>; // Launching back the function } return (a - b == 6); // Once it's done (if number a and b are prime, test if a - b = 6) } else { return false; // Otherwise false } }.
http://www.codeproject.com/Messages/4434838/Re-alpha-numeric-char.aspx
CC-MAIN-2016-36
en
refinedweb
- II Last updated Jan 1, 2003. C++/CLI introduces many features that are new to a C++ programmer. However, very of few of them if any, are original. Most of them have been available for years in other programming languages e.g., C#, Java or even Borland’s VCL. Here is my take on the usefulness and engineering merits of some of these features. Classes of Classes In an ideal programming language, a class would look exactly as they do in C++. Should an object o1 of class A be allocated on the static memory, the stack or the free store? This is an implementation decision that is made when you instantiate the object. Similarly, whether objects are passed by value, as pointer or by reference is an implementation decision that should be made on a per-object basis. these properties aren’t and shouldn’t be enforced on a per class basis, except for highly-specialized cases. C++/CLI however is different. It forces you to determine at design time, i.e., in the class declaration, where the objects will be stored and how they will be passed. Design-wise, this policy is a step or two backwards, no matter what the C++/CLI rationale tells you. But why is C++/CLI different? Because it’s very difficult to design a programming language that allows objects of the same class to have different runtime properties, e.g., object a1 is garbage collected whereas object a2 is allocated on the stack. Consequently, C++/CLI doesn’t allow to write something like this: string *pstr = new string; string str; string & sr=str; Java designers solved this problem in a brute-force, though consistent manner: all Java objects are GC-bound, whereas primitive types are allocated on the stack. C++/CLI tries (not very successfully) to combine the two models by permitting different storage types for objects. However, you have to decide at design time how the objects of that class will be represented. ref Classes A ref class is a class whose objects must be allocated using gcnew. They are garbage collected so you can’t create them on the stack or the native heap; needless to say, you should never delete them: public ref class Stack { //.. }; Stack ^ps = gcnew Stack;//OK ps->Push(Element); Stack s; //error Stack *p = new Stack; //error The access specifier before the class declaration is called "top-level type visibility". It can be either public, meaning the class is visible from other assemblies, or it can be private, which means that the class is visible only inside its assembly. An assembly is roughly equivalent to an .obj file produced from a C++ translation unit. value Classes A value class is used for small objects that should have value semantics. Candidates for value classes are a complex number class, smart pointers and handles: public value class Point { int x; int y; //.. }; Point p1; interface Classes An interface class is equivalent to an abstract class in C++, with some syntactic sugar and the inevitable verbiage of the "new programming languages": interface class IControl { virtual void Pain(); }; public ref class : IControl, IDataBound //multiple interface inheritance { //..implement the interfaces }; The ref and value specifiers are necessary for .Net’s internal bookkeeping, not because they really add anything to the powerfulness of this language. When a class is declared as ref or value, the .Net machinery generates the necessary runtime metadata and additional code scaffolding for it. Delegates A little bit of history: the litigation between Sun and Microsoft in the late 1990s can be ascribed to a single keyword: delegate. Sun’s original Java specification didn’t have this keyword. Microsoft added it to its Java implementation, thereby breaking source code compatibility with Sun’s Java. This litigation ended in a settlement but the impact was tremendous: In 2000, Microsoft came up with its Java clone, a language called C#. Now Microsoft was free to do as it saw fit, including the reintroduction of delegates. It’s no surprise that C++/CLI has delegates, too. A delegate is an object that encapsulates a callable entity. It’s roughly equivalent to TR1 std::tr1::function class template, which allows you to treat freestanding functions, static member functions, nonstatic member functions etc., uniformly as long as they have the same signature. Yet unlike std::tr1::function, a C++/CLI delegate can have an invocation list of multiple functions, not just one. Oddly enough, delegates are GC-bound objects. Therefore, you must create them using gcnew: public ref class A { public: static void F(int); void G(int); }; delegate void MyF(int); //define a delegate type A ^a= gcnew A; //create a delegate object and add A::F to its invocation list MyF ^d=gcnew MyF(&A::F); d+=gcnew MyF(a, &A::G);//add A::G to invocation list This feature seems cute and harmless, and I’m not going to say that isn’t except that in C++ binders and tr1::function do the same job, in a more efficient and less verbose manner. Furthermore, forcing delegates to be GC-bound seems a questionable design decision. If delegates are such a fundamental feature, it would have been better to make them value objects. Finally, delegates expose the Windows-biased nature of C++/CLI. Namespaces C++/CLI namespaces are an odd beast. They are misleadingly called namespaces, but in reality they are much closer to Java’s packages. So why aren’t they called packages? Historically, C# designers tried to impart the impression they were creating a language that was completely independent of Java, for legal and marketing reasons. Therefore, features borrowed from Java were renamed. C++/CLI borrowed its namespaces from C#, not C++. However, it uses term namespaces not just for historical reasons but probably because it might suggest that C++/CLI and ISO C++ are closer than they truly are. The ECMA standard as usual doesn’t disclose too much information about the syntax and the semantics. However, the scanty examples it does provide will convince you that C++/CLI namespaces are not the namespaces you know. Notice also that in the example cited below (with minor modifications), the file extension is .cpp. I wonder how many ISO compliant C++ compilers will accept this code: //DisplayMessageLibrary.cpp namespace MyLibrary { public ref struct DisplayMessage { static void Display(); }; } //DisplayMessageApp.cpp using <DisplayMessageLibrary.dll> int main() { MyLibrary::DiaplayMessage::Display(); } You’ve never seen something like this in ISO C++, have you? C++/CLI namespaces are used for loading assemblies, or in a more down to earth example, they are the syntactic sugar that hides the onerous Win32 API call: LoadLibrary("DisplayMessageLibrary.dll"); Once more, it’s obvious that while C++/CLI pretends to be a general purpose, platform neutral language, it really is a Windows-only game. Generics When C++ programmers speak of generic programming, they mean template-based programming. However, in the "newer programming languages" generics mean something quite different. C++/CLI Generics are instantiated by the Virtual Execution System (the .Net equivalent of Java’s JVM) at runtime, whereas C++ templates are instantiated at compile-time. Performance-wise, generics are slower than templates by magnitudes. Considering the fat interface of a typical C++/CLI object and the dynamic nature of this language, it’s even slower than you think. Furthermore, since C++/CLI has a unified type system whereby every type is ultimately derived from System::Object (surprise!) generics are ultimately functions and classes that operate on references to System::Object objects, even when you apply them to built-in types such as int. To recover the actual type from a generic class or function, C++/CLI uses runtime type identification and conversions. C++/CLI support constraints -- a feature that may look tempting at first, until you discover that they are enforced at runtime. I will not go through all the gory details here, but the bottom line is that a C++/CLI generic container is a performance killer compared to a homogeneous C++ container such as std::vector<int>. To get a hint of the overhead, think of a homogeneous container in C++ implemented as a collection of pointers whose dynamic types have to be recovered at runtime. While in certain cases this overhead is inevitable (i.e., when you really need a heterogeneous container), most of the time you need a homogeneous container, with no dynamic typing whatsoever. My prediction is that in the next 10 years, we’ll be hearing miraculous reports about a performance breakthrough of the new VES, quantum leap optimizations of generics, and carefully doctored benchmark results that "prove" that C++/CLI code performs better than native C++ code. What about real templates? The ECMA standard says that C++/CLI supports them as well. However, it’s underspecified as ever with respect to template subtleties. For instance, is it possible to provide default parameter types? What about member templates? Are they permitted only in native classes or can a ref class have them too? Answers on postcards please. Conclusions I believe that by now you are convinced as I am that C++/CLI is neither a "set of extensions to C++" (in many aspects it’s actually a subset of C++), nor is it related to C++ more than any other language with semicolons and curly braces. Furthermore, C++/CLI is definitely a Windows-oriented programming language; it’s definitely not a language that a Solaris 10 server or a Nokia mobile phone will be happy to run. What does it have anything to do with C++? When C# was launched six years ago, I asked "why does the world need another proprietary language?" C# has certainly remained a proprietary language. The only thing that has changed is that now we have two proprietary languages from the same vendor. Wouldn’t it have been a better idea to finish the C# overhaul first instead of spawning yet another language that looks surprisingly similar and suffers from the same imperfections? While Microsoft is free to exert its marketing strategies and ruses (including the anointment of a new proprietary language every once in awhile) C++/CLI marks a dangerous move. It tries to hijack a well-designed and reputable programming language that has stood the test of time more than any other language owned by commercial bodies. I’m truly concerned that interested parties might tamper with ISO C++, trying to push spurious and unnecessary C++/CLI features into it in order to make it "more compatible" with their implementation. As far as ECMA is concerned, there’s no nice way to put it: if it has ratified this underspecified and incoherent draft as an International Standard, it says quite a lot about the credibility, reputation and expertise of this body. I wouldn’t buy a used standard from them. In terms of its engineering merits, C++/CLI fails to impress me. I can’t think of a single feature thereof (except perhaps the nullptr keyword) that I would like to see added to ISO C++. On the other hand, I’m quite surprised that it repeats so many of the design mistakes previously seen in Java and C#. Education wise, there’s no arguing that Java and C# have become complex and hard to learn. Yet C++/CLI is even worse. Much worse. In many areas, it has a dual or triple interface that is the result of an unsuccessful attempt to combine C++ concepts with the dynamic (so-called "managed") nature of C#. Think of the notion of ^ pointers as opposed to native * pointers, generics versus templates, finalizers versus destructors -- and this is just the first version of this language! I’m happy and proud to be using ISO C++ -- perhaps more than ever before. In every aspect, I find it a mature, well-balanced and ingeniously designed language. My only hope it remains true to its nature: a general purpose, platform neutral, efficient, multi-paradigm language with the best generic facilities and libraries ever designed. In this respect, the grass isn’t greener over there.
http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=275
CC-MAIN-2016-36
en
refinedweb
#include <SimpleVariableScope.h> Inheritance diagram for SimpleVariableScope:: This is expected to be the main VariableScope mechanism, but I didn't want to collapse things together more because there may be lots of TripleSource objects that don't really want a whole SymbolTable allocated for them. We COULD do a lazy-eval trick and just keep a pointer until some Variable's been asked for... Definition at line 19 of file SimpleVariableScope.h.
http://www.w3.org/2001/06/blindfold/api/classSimpleVariableScope.html
CC-MAIN-2016-36
en
refinedweb
1234567891011121314151617 #include <iostream> using namespace std; int main() { int amount; cout << "Enter amount of dollars. "; cin >> amount; int twenties, tens, fives, singles; amount = 137; 137 / 20 = 6 twenties; the remainder 137 % 20 = 17; 17 / 10 = 1 tens; the remainder 17 % 10 = 7; 7 / 5 = 1 fives; the remainder amount 7 % 5 = 2; cout << "For the amount of ">> amount >> "we need ">> twenties, tens, fives, singles; return 0; }
http://www.cplusplus.com/forum/beginner/111394/
CC-MAIN-2016-36
en
refinedweb
KnockoutJS (KO) is a JavaScript library written by Steve Sanderson who works for Microsoft and is the author of Pro ASP.NET MVC Framework. It is a JavaScript library that helps build apps conforming to the Model View View-Model pattern. KO makes it easier to create rich, desktop-like user interfaces with JavaScript and HTML. KO uses observers to make your user interface automatically stay in sync with an underlying data model, along with a powerful and extensible set of declarative bindings to enable productive development. In short, if you want your website to have a snappy, slick user interface, with less code, then Knockout is certainly a good library to use. This article is published from the DNC .NET Magazine – A Free High Quality Digital Magazine for .NET professionals published once every two months Knockout works along-side any JavaScript library so you can use jQuery and other JavaScript libraries along with KO as your application starts to grow. We first take a look at the features of KO and then dive into a sample that helps explain these. KnockoutJS gives us declarative bindings which means we can easily get access to HTML DOM elements on our page and use it with any web framework including PHP, Ruby on Rails and even with ASP. It’s free, open source with no dependencies and supports all mainstream browsers including IE6+, Firefox 2+, chrome, Opera and Safari. Dependency Tracking comes with KnockoutJS so that you can chain relationships between your model data which lets you transform your data when a part of the relationship change gets updated. KnockoutJS has inbuilt templating as well as the ability to use custom templating such as JQuery templating or your very own custom templating. KnockoutJS comes with a number of built-in bindings and these make life really easy and straight forward, these include bindings for controlling text and appearance, control flow, and form fields. With KnockoutJS you can create your own custom bindings and this really could not be easier, so with KnockoutJS we extend it with using custom bindings and templates and these templates can include JQuery templates or plain old HTML templates. Note that using external templates can slow down the rendering slightly and it’s advisable to try to use inline templating where possible. As developers, we are always looking for something that adds that little-bit-more to our websites to make them snappier, have a nicer user interface, with less code. KnockoutJS gives us the ability to load our data for the page and have KnockoutJS bind the data to the user interface using simple elegant bindings. If the data changes or the user makes a change to the webpage, KnockoutJS has 2-way binding that updates the page and reflect the changes on the user interface. If you have used JQuery, you might be thinking - hang on JQuery does this for me already! Yes you’d be correct, but KnockoutJS can simplify things even further. With KO you can use a number of bindings for a whole range of things and also extend these to create your own easily. Shown below is how you define a custom binding. ko.bindingHandlers.myCustomBinding = { init: function(element, valueAccessor, allBindingsAccessor, viewModel) { //init logic }, update: function(element, valueAccessor, allBindingsAccessor, viewModel) { //update logic }}; Microsoft is shipping KO as a part of the ASP.NET project templates in Visual Studio. So any new project will have KO added to it via a NuGet package reference. If you’re not working on a new MVC 4 application, you can add it into any web application by using NuGet as below:- Or by downloading the .js file from its home on Github , add the knockout .js file to your application and reference it in your webpage and your good to go. When using KnockoutJS within a web application, there are a couple of things you can do to make life easier and these are some best practice points from using it, which I have found useful. If you create a new MVC web application in VS 2012, the solution loads and if you take a look at the contents of the packages.config file, we can see that as part of the solution we are pulling in a number of NuGet packages, one of which is:- Here we are using NuGet to pull in version 2.1.0 of the KnockoutJS library which at the time of writing this article was the latest version, so straight out of the box our solution has a reference to KnockoutJS and we haven’t had to do anything – were off to a great start. Update: At the time of posting this article, the version is 2.2.1. Our demo application is a simple shopping cart order page where we can update the quantity of items in our shopping basket and see a running total being updated. The page has the data passed in from our MVC Controller Action Method and this is then presented to the page. Demo has a ProductController, a set of entities like CartItem, Category, Model, OrderItems and Product, and the Index.cshtml for the UI. Custom scripts are in the JS folder and we have ajaxservice.js, dataservice.shopping.js, index.js and utils.js. To start with, when using KnockoutJS, it’s a good idea to create a namespace and use it within your JavaScript to keep things nice and tidy, same as you would within your c# codebase. To create a namespace in JavaScript is as simple as: var OrdersApp = OrdersApp || {}; Next we need to create our ViewModel. It’s in the index.js file. A typical example of the ViewModel we might use for our shopping cart is as follows:- $(function () { OrdersApp.Product = function () { var self = this; self.id = ko.observable(); self.price = ko.observable(); self.category = ko.observable(); self.description = ko.observable(); }; // The ViewModel OrdersApp.vm = function () { products = ko.observableArray([]), shoppingCart = ko.observableArray([]), addToCart = function (product) { // Stub }, removeFromCart = function (cartItem) { // Stub } }, grandTotal = ko.computed(function () { // Stub }), loadProducts = function () { // Stub }; return { products: products, loadProducts: loadProducts, shoppingCart: shoppingCart, addToCart: addToCart, removeFromCart: removeFromCart, grandTotal: grandTotal }; }(); OrdersApp.vm.loadProducts(); ko.applyBindings(OrdersApp.vm);}); This is gist of the complete ViewModel, but what we have here is a pure JavaScript representation of the model data (i.e. products and a shoppingCart) and actions to be performed. The ko.applyBindings() statement is used to tell KnockoutJS to use the object as the ViewModel for the page. Products are defined as an observableArray and this means that KnockoutJS will track what’s in this array. For example we can push() and pop() items onto the products observableArray and the front end will automatically show us the updated data due to the 2-way binding KnockoutJS has – you can even use the console in your Chrome browser to manipulate the items in your ViewModel and KnockoutJS will take care of the updating of the user interface for you - it’s that simple. In order to display a list of Products, we would be able to use the built in for-each binding and display our products as follows. Details Make: Model: Price: Add Item Note that KO also uses the data-bind tag to specify the type and field to bind to in the ViewModel. The elements inside the foreach data-bind are treated as the ‘row-template’ and repeated for each ‘product’ in the Products collection. As we can see above, each of the ‘Add Item’ buttons invoke the $root.addToCart method. However on addition the Total gets updated automatically. If we look at the markup for rendering the cart total, we will see the following Total ItemsTotal Price 0, click: $root.placeOrder">Place Order As we can see here, Total Items and Total Price is bound to the calculated fields of shoppingCart().length and value returned by the grandTotal() function. In Index.js we will see the grandTotal method is defined as follows. grandTotal = ko.computed(function () {var total = 0;$.each(shoppingCart(), function () { total += this.extPrice();});return total;}) Essentially we have defined it as a KO computed value that’s calculated for all the elements in the shopping cart. KO ‘observes’ for the changes in the number of shoppingCart items and on change, computes the grandTotal. Once the grandTotal value is computed, KO updates the UI because of the binding. If we put a breakpoint in the above function and add an item to the cart, we can see how KO call the computed function automatically. Thus change tracking and two way data binding provide a rich and responsive behavior where changes in one area of the application gets reflected immediately elsewhere. The demo application to show the basic flow using KnockoutJS code can be found on Github here: The complete demo shows a shopping cart webpage as shown above where you add products to your basket and can update the totals and a basket total is calculated, you can also remove items from the basket and the totals are all kept in sync using KnockoutJS. The example code covers use of observables and observable Arrays, computed functions, namespaces in your JavaScript and how to use callbacks. KnockoutJS is perfect for creating a user interface that responds immediately to the user and that includes adding and removing data. You don’t have to wait on server postbacks and hack away with viewstate or use any of the older tricks such as Update Panels and similar ones. You set up your ViewModel, then fill it with your data and then the user interface is updated using the 2-way binding done for by KnockoutJS. It’s quick, responds immediately and makes the user experience a whole lot better than before the introduction of KnockoutJS. KnockoutJS gives us the added benefit of separation of concerns and the fact that it allows for better unit testing of the user interface code – we can now test our JavaScript with a tool such as QUnit allowing us to add into our build server so we can run the user interface unit tests before deployment. Having discussed what KnockoutJS is good at, we should cover what problems you might run into using KnockoutJS - it’s actually something to be careful of within JavaScript. Scoping in JavaScript is a minefield and can really give you sleepless nights if you’re not careful, so a little discipline is required in order to try to avoid falling into some scoping issues when using JavaScript. When using JavaScript, there are thankfully a few JavaScript patterns you can use and one is the Module Pattern which is useful for organizing independent, self-containing pieces of JavaScript – you can read more about the Module Pattern here Be careful when using the ‘this’ keyword in JavaScript as you can run into issues easily with the context of the keyword this depending on how you structure your JavaScript. For a very good tutorial on how to go about structuring your JavaScript that you will use when working with KnockoutJS as well as other great tips I recommend, you take a look at Rob Connery’s Tekpub course here: In this article I covered an introduction to KnockoutJS; although we covered a fair amount it hopefully left you wanting to know more. I have listed some additional learning resources over here bit.ly/dncmag-snapko in the Readme section. In summary KnockoutJS is a fantastic addition to your arsenal as a web-developer when trying to create a slick user interface that responds immediately. You can use it with any web framework and it’s a one file addition to your solution which makes it easy to update if newer versions come out. With KnockoutJS, you get superb tutorials and there are more and more articles and tutorials popping up all the time. There is no reason not to give it a try as its super easy to get going with, just be mindful of scoping. Also try to refactor your JavaScript code at all times. If you find yourself writing a lot of JavaScript when using KnockoutJS, the chances are there is a better cleaner way (normally methods are only a couple of lines). Spend a couple of hours using KnockoutJS and you’ll wonder why you’re not using it on every web project – you can even go back and add it into your old web applications and improve the user experience with ease. Download the entire source code of this article (Github) Gregor Suttie is a developer who has been working on mostly Microsoft technologies for the past 14 years, he is 35 and from near Glasgow, Scotland. You can Follow him on twitter @gsuttie and read his articles at bit.ly/z8oUjM
http://www.dotnetcurry.com/aspnet-mvc/905/shopping-cart-ui-aspnet-mvc-knockoutjs
CC-MAIN-2016-36
en
refinedweb
J. : 7 thoughts on “A puzzle puzzle” A grid of 10^n points has (n + 1)^2 possible aspect ratios (Assuming A:B and B:A are each unique), and the closest ratio will be 1:1 (A square of sides 10^(n-1). public class _Main_ { private java.util.Vector _factors; private java.util.Iterator _factorsI; private double _diff; private int _num; private int _total; private int _theOne; public static final double _GoldenRatio = 1.618034; public _Main_(int total){ _total = total; _diff=100; _factors = new java.util.Vector(); for(int k =1; k<java.lang.Math.sqrt((double)_total);k++){ if(_total%k == 0){ _factors.add(k); } } _num=0; _factorsI = _factors.iterator(); while(_factorsI.hasNext()){ int temp = _factorsI.next(); if(java.lang.Math.abs(_GoldenRatio-(double)(temp/(_total/temp)))<_diff ||java.lang.Math.abs(_GoldenRatio-(double)((_total/temp)/temp))<_diff){ if(java.lang.Math.abs((double)(temp/(_total/temp)))<java.lang.Math.abs((double)((_total/temp)/temp))){ _diff = java.lang.Math.abs(_GoldenRatio-(double)(temp/(_total/temp))); }else{ _diff = java.lang.Math.abs(_GoldenRatio-(double)((_total/temp)/temp)); } _theOne=temp; } _num+=1; } System.out.println("there are "+_num+" arrangements for "+_total+" pieces."); System.out.println(_theOne+"x"+_total/_theOne+" is the closest to the golden ratio"); } public static void main(String[] argvs){ new _Main_(1000); //There are 8 arrangements for 1000 pieces. //25×40 is the closest to the golden ratio } } For the generalization of the first question, it’s ceil((n+1)^2/2). The prime factorization of 10 is 2*5, so the prime factorization of 10^n is 2^n * 5^n. For a 2 dimensional jigsaw puzzle, what we want to do is find the number of ways to partition those 2’s and 5’s among the two different axes, so there are n+1 ways to divide up the 2’s, and n+1 ways to divide up the 5’s. However, a 25×40 jigsaw puzzle has the same aspect ratio as a 40×25 puzzle, so you divide by 2, except a 10×10 puzzle is the same as a 10×10 puzzle, so we take the ceiling, since for even n, we would otherwise end up with a fractional number of aspect ratios. Incidentally, I’m pretty sure most 1000 piece puzzles I’ve done are 25×40, but perhaps I’m misremembering the 1500 piece 30×50 puzzles, which is much closer to phi. For 10^3, 25 x 40 isn’t so bad. I will check the next 1000 piece puzzle I do and report back 🙂 There are exceptions such as circular puzzles or puzzles that throw in a couple small pieces that throw off the grid regularity. Actually, quite a few commercial jigsaw puzzles have non-rectangular arrangements, not-fully-interlocking (or even non-interlocking) pieces, and other nefarious ways of making it more difficult to spot a piece’s correct orientation. Of the 2 dozen or so rectangular jigsaw puzzles in my closet, probably about 1/3 depart from a strictly rectangular grid. A well known non rectangle jigsaw puzzle is the Eternity puzzle I prefer Robbie’s point of view that A:B and B:A are unique. I’ll label the following as conjecture for the ratio closest to the golden ratio for some given n. For odd n >= 3: 40 x 10^(n-3) :: 25 x 10^(n-3) For even n >= 4 125 x 10^(n-4) :: 80 x 10^(n-4) A proof of this might arise from examining a fraction composed of a bunch of 2’s and 5’s. It doesn’t appear possible to make such a fraction whose value is close to 1.011 or 1.036. That is what would be required to produce a different aspect ratio that is closer to phi than the 40×25 or 125×80 (respectively)
http://www.johndcook.com/blog/2014/07/07/a-puzzle-puzzle/
CC-MAIN-2016-36
en
refinedweb
Java JTable Question or Datagrid March 14, 2010 at 2:38 PM Hello Sir How I can Display data in to JTable or grid which is Stored in MS Access Databse with Add,Update,Deletetion of Records.plz Help Me ... View Questions/Answers java netbean March 14, 2010 at 7:32 AM To develop an airline reservation system.The system should be able to provide the following:- Accept passenger information.- Assign an appropriate seat.- Display the boarding pass.Outcomes:1. Apply the object-oriented programming techniques of Java.... View Questions/Answers jsp runtime error March 13, 2010 at 10:51 PM sir, when i am running ur prog...from this website.... did the same as per guidelines...but i got error..!!!!org.apache.jasper.JasperException: Unable to compile class for JSPNote: sun.tools.javac.Main has been... View Questions/Answers Swings Menu Bar March 13, 2010 at 6:10 PM Hello,I created a menu bar using Java Swings...n New Record, Edit Record etc are the menu items.Now, I want to display the appropriate fields according the menu Item selected..below it..i.e.If we selected New Record,then Desired TextFields and new butt... View Questions/Answers session concept March 12, 2010 at 7:45 PM Hello friends, How can we track unexpectedly closing a window when a jsp project running with session concept. And this tracking should update the log in status in data base. Thanks. ... View Questions/Answers jsp March 12, 2010 at 2:43 PM My code is not working please anyone help me to store retrieved data from database in a string variable.Data cannot be stored into a String variable.<%@page contentType="text/html" pageEncoding="windows-1252"%><%@page import="java.sql.*"... View Questions/Answers about java swing March 12, 2010 at 10:12 AM How to send date in data base if i use the combobox like as dd,mm,yyyy.plz replythanx a lot ... View Questions/Answers Multiple session problem March 12, 2010 at 8:55 AM I am working in a Linux based java swing application.Problem:I have optimized JDialog and JPanel for my use .A JPanel can only have one JDialog at a time.But mupltiple JPanel(on different JFrames) can be launched.While switching between JPanels I am facing issue that JDialog... View Questions/Answers FileIO Java Compilation March 11, 2010 at 10:07 PM Expert:ChelseaI want to write a complete java program that catenates the file named "first" with the named "second" and produces a file named "third". If either input file "first" or "second" does not exist in the current working directory, giv... View Questions/Answers Java class method March 11, 2010 at 10:02 PM I have created a method that's supposed to return a basetime value, returning a string value, the problem is that i need to have it return a basetime to use it later. How can i modify this code to have it returning the basetime value im looking for? public String plus(BaseTime that) {... View Questions/Answers resize a panel slowly March 11, 2010 at 9:21 PM How to code to resize a panel slowly with netbeans ... View Questions/Answers jsp March 11, 2010 at 7:39 PM hi i need coding for sending a mail using jsp with file attachment. ... View Questions/Answers jsp March 11, 2010 at 7:26 PM hi friends how to get a table which contain many rows but it should be shown five by five at a time using next and previous button for example like emails in the inbox. ... View Questions/Answers Pager taglib March 11, 2010 at 5:51 PM how to use pager taglib for index provide the example,jar files ... View Questions/Answers jsp image problem March 11, 2010 at 3:25 PM hi everyone,How to display multiple images from mysql database to jsp page.please help me........... ... View Questions/Answers hi March 11, 2010 at 3:19 PM hi sir,my table is this, SQL> desc introducer; Name Null? Type ----------------------------------------- -------- ------------------ INTRODUCERNAME NOT NULL VARCHAR2(20) PHONENO ... View Questions/Answers java code March 11, 2010 at 2:38 PM there is an error like the headerfiles does not exists(poi header files)...when i compiled insertingTextInShape.java.....what should i do for including such headerfiles....plzzzz do replyyy........i downloaded poi package........what to do with this package for including such header files???????????... View Questions/Answers convert .txt file in .csv format March 11, 2010 at 2:37 PM Dear all,I hope you are doing good.I am looking to convert .txt file in .csv format.The contents might have different names and values.e.g.start:id:XXXXname:abcaddress:xyzstart:age:29height:5'9start:... View Questions/Answers Pagination March 11, 2010 at 2:36 PM I want to display only 10 records but my arraylist contains for example about 100 records...My jsp page should contain 10 records along with the pagination below....likeprev 1 2 3 4 5 nextI am using struts1.2 and tomcat 5.5Any help would be appre... View Questions/Answers java http 404 status error March 11, 2010 at 1:55 PM I have tried to call a servlet from html page.. but it is showing http 404 status error..and the description is "The requested resource is not available".I hav tried in all ways.. Plz give me the solution..Thank you.. ... View Questions/Answers j2ee March 11, 2010 at 1:13 PM hi guys can any send me the spring mvc example,thanks in advance ... View Questions/Answers Java Compilation March 11, 2010 at 12:37 PM I want to write a complete java program that catenates the file named "first" with the named "second" and produces a file named "third". If either input file "first" or "second" does not exist in the current working directory, give an appropriate err... View Questions/Answers Java Compilation March 11, 2010 at 12:32 PM How would i write a complete java program which, using a do-while loop, reads integers from the keyboard and sums them until the sum hits or exceeds 30. Then print out the sum.is it like this:import java.util.Scanner;public class Sum{ public static void ma... View Questions/Answers Infix to Prefix March 11, 2010 at 11:31 AM Hello, I needed help on converting from an infix expression to a prefix expression using stacks. ... View Questions/Answers JSP March 11, 2010 at 10:24 AM Hi,Can you please tell me how to load the values in selectbox which are stored in arraylist using struts-taglibsNote:I am neither using form nor bean...I want the arraylist values to be dispalyed in selectbox..thanks in advance ... View Questions/Answers avoid java code in jsp March 11, 2010 at 9:31 AM i want to show the arrayList values in a drop down box in struts the front page is jsp ,i am using struts1.3 ,i want to avoid java code in my jsp ... View Questions/Answers how to get the next option March 11, 2010 at 9:27 AM i was getting values from the database it was bulk so i want to keep the next option how to do in the jsp ... View Questions/Answers jsp problem March 11, 2010 at 12:24 jsp problem March 11, 2010 at 12:10 regarding-update the Jsp page itself March 10, 2010 at 11:09 PM Joined: Aug 17, 2009Messages: 12 [Post New]posted Today 10:39:52 PMQuote Edit [Up]I have two Jsp pages named bestschool.jsp and schoolnames.jsp.I am calling schoolnames.jsp from bestschool.jsp.I want when I click the link on INDUS Sch... View Questions/Answers creating a java bean application March 10, 2010 at 8:07 PM hi,i want to create a java bean using BDK. please tell me all the steps for creating a simple java bean application.i have also a confusion about the properties of java bean.please explain me the properties of java bean in detail with a suitable example.please help me to solv... View Questions/Answers java and oracle March 10, 2010 at 5:46 PM hai i am doing a project in which i have stored some data (think as 50 records) in database. Now i want to retrive the data from the database(20 records at each time but should be different for each time)..........please help me for this.............. ... View Questions/Answers IO File March 10, 2010 at 5:09 PM Write a java program which will read an input file & will produce an output file which will extract errors & warnings. It shall exclude the standard errors & warnings. The standard errors & warnings you can find out from the log file.This is the task ass... View Questions/Answers Synchronization March 10, 2010 at 3:43 PM If in a class a method is synchronised then the accessing thread is going to get the object lock to the whole class or lock is given only on the synchronized method ... View Questions/Answers Multiline graphs in java March 10, 2010 at 3:39 PM How to draw a multiline graph in java, One will b constant straight line and the other is changing ... View Questions/Answers servlet code problem March 10, 2010 at 3:11 PM This is my JSP code index.jsp<html><head><meta http-<title>Sync Data</title></head><body><h1>Sync Data</h1><... View Questions/Answers servlet code problem March 10, 2010 at 2:45 PM This is my JSP code index.jsp<html><head><meta http-<title>Sync Data</title></head><body><h1>Sync Data</h1><... View Questions/Answers swings March 10, 2010 at 2:03 PM i created one button in jframe. i want to call a method when i am pressing the button how can i do this............. ... View Questions/Answers Java Compilation error March 10, 2010 at 1:55 PM hi i m vipul chauhan , i having a problem with this package i download package from the location and i got file commons-fileupload-1.2.1-bin.zip and when i extract it i got file folder lib,site then i to into the lib folder an... View Questions/Answers ftp March 10, 2010 at 1:27 PM Hello Friends, I got image upload coding from rose india at the url doubt is weather it requires ftp for uploading or it sends image only in data format. Help me. Thanks. ... View Questions/Answers Spring MyEclipse Code March 10, 2010 at 11:47 AM How To Configure Spring in MyEclipse ? ... View Questions/Answers java swing March 10, 2010 at 11:39 AM How to set the rang validation on textfield, compare validation textfields , and also if we create a groupbutton like male female, if we want to send the selected item in database , how to send , plz help .........thanx a lot ... View Questions/Answers Missing output from associative array March 10, 2010 at 11:19 AM the following foreach loop does not print all of the keys from the array, any ideas as to why?<?php $Salespeople = array( "Hiroshi Morninaga"=>57, "Judith Stein"=>44, "Jose Martinez"=>26, "... View Questions/Answers java programming March 10, 2010 at 11:14 AM asking for java code, solving three unknowns in three equations..I need the thx ahed ... View Questions/Answers swings March 10, 2010 at 11:00 AM i created one jframe, in that i added ok button. i want to add some images with name in that frame .By selecting the image and press ok button it will display on the jpannel, how can i do this.......... ... View Questions/Answers Remote System OS name in JAVA March 10, 2010 at 10:26 AM I need to print the different os names of the all computers in my local network. I know that with System.getProperty("os.name"), I get the os name of my computer but for the other computers, I have no idea.Could you help me please ?Do you have any idea ? ... View Questions/Answers Multi line graph March 10, 2010 at 9:53 AM Hi,I want a multi line graph on a single chart using jfree in java....Can you please let me know the code...thanks in advance ... View Questions/Answers how to generate the pdf file with scroolbar from jsp age March 10, 2010 at 9:35 AM How to generate the pdf file with scroolbar from jsp.i am not able to see the all the columns in pdf file now .it is very urgent for me plz help ... View Questions/Answers Struts + HTML:Button not workin March 9, 2010 at 11:00 PM Hi,I am new to struts. So pls bare with me if my question is fundamental.I am trying to add 2 Submit buttons in same JSP page.As a start, i want to display a message when my actionclass is called.JSP code --------<td><html:submit property=&quo... View Questions/Answers Servlet error March 9, 2010 at 7:22 PM Can't we place the java files instead of class files in classes folder of our application in webapps..so that there's no need to compile the java files separately and then place the class files in classes folder..?? ... View Questions/Answers Servlet error March 9, 2010 at 7:03 PM I installed Jdk and tomcat successfully and all the examples provided by the tomcat are running successfully...But i'm unable to run my own servlet program..The error i'm facing is...exceptionjavax.servlet.ServletException: Wrapper cannot find servlet class Ex1 or... View Questions/Answers struts March 9, 2010 at 4:46 PM Tag libs in struts ... View Questions/Answers Jsp March 9, 2010 at 4:08 PM Hi,if i declare <%!string="a"%> in declaration & same in <%string="a"%> scriptlets, then can anybody please tell me what is the difference between these two??, why should i make use of declaration tag if i can declare variable in scriptlets tag??... View Questions/Answers swings March 9, 2010 at 2:28 PM i want to display a window while pressing a button, the window contains only ok,cancel buttons ... View Questions/Answers Hi guysam a 4th year student at University of Namibia (doing computer science major and IT minor), i would love to ask anyone to tell me what should i do and what language is best to use in my 4th year project. i want to program an e-mail system for our department.. to conclude, what do ... View Questions/Answers swings March 9, 2010 at 10:39 AM i want to display a texture in a particular polygon......... ... View Questions/Answers How to use find method in Java March 9, 2010 at 10:34 AM Hello,I want to write a class that gets a web page, parses it, and tries to find an object with certain properties. How can I do this?I was thinking, maybe get child objects of the browser class, then somehow parse through the class's objects/sub-objects, and use find().... View Questions/Answers Java - search/find a word in a text file March 9, 2010 at 10:27 AM Hello,I would like to know how to find from a list of lets say 10 but could be more, .txt files, how to search them for a word. The word will be PASS or it can be FAIL.If PASS I want to do nothing, if FAIL, I want to record in 1 master output file, the name of the file that ... View Questions/Answers swings March 9, 2010 at 10:14 AM i want to display a window while pressing a button .In that window i want to add images, by selecting the image it will display on the jpannel.how can i do this........ ... View Questions/Answers how i can add an horizontal scrollbar at my PdfAnnotation ? March 9, 2010 at 9:47 AM How to add the an horizontal scrollbar at my PdfAnnotation ?for exampledocument d=new document(0.document.open();...... ... View Questions/Answers how to add the scrollbar to the pdf page when generating the pdf file from jsp March 9, 2010 at 8:50 AM I am not able to see all the columns when i generated the pdf file from jsp.i have 12 colums so how to add the scrollbar ? ... View Questions/Answers Java with OOP assignment (Eclipse) March 9, 2010 at 4:16 AM "THREE Ts GAME"*Description*A "tic-tac-toe" game is a two player's board game where the game board is made of 3 X 3 grids. The two players, one being assigned with 'O' and the other with 'X', where each if the two players take turns marking the spaces in a... View Questions/Answers spring March 9, 2010 at 12:15 AM Hi sir how compile spring application on jboss5.1.0 server how run spring application on jboss5.1.0 serverthanks ... View Questions/Answers java code March 8, 2010 at 11:17 PM This program generates a list of repeat permutation result. For example, ifgiven a string "abc", the program prints all the possible permutations like:aaaaabaacabaabbabcacaacbaccbaababbac....tot... View Questions/Answers Java Program March 8, 2010 at 10:05 PM Write a Program that display JFileChooser that open a JPG Image and After Loading it Saves that Image using JFileChooser ... View Questions/Answers threads March 8, 2010 at 9:29 PM how one thread will know that another thread is being processing ... View Questions/Answers java code March 8, 2010 at 9:27 PM write a program for immutable class ... View Questions/Answers Java Code March 8, 2010 at 7:20 PM Write a Program using Swings to Display JFileChooser that Display the Naem of Selected File and Also opens that File ... View Questions/Answers java March 8, 2010 at 5:45 PM why we use classpath.? ... View Questions/Answers swings March 8, 2010 at 4:24 PM i want to display a background pattern in mappannel, which we can change the color . i dont have background patterns too.how can i get the background patterns, is there any inbuilt background patterns in java and how can i display it in a shape(polygon).. ... View Questions/Answers java programming March 8, 2010 at 4:01 PM asking for the java code for solving mathematical equation with two unknown .thnx ahead.. ... View Questions/Answers swings March 8, 2010 at 3:35 PM I know how to add an image in a Panel with Netbeans but I want to add a background pattern with Netbeans in a Panel - you just have to provide the pattern and the background is filled with it whatever the dimensions of the Panel. ... View Questions/Answers java run time error March 8, 2010 at 3:23 PM when i compile my program it didnt show any error in my program.but i cant run my program, if i run my program means it will show error like as followingException in thread "main" java.lang.ArrayIndexOutOfBoundsException: 0 at Armstrong.main(Armstrong.java:22)Java ... View Questions/Answers swings March 8, 2010 at 2:56 PM how to add different patterns in java ... View Questions/Answers Tree Grid using JSF March 8, 2010 at 2:40 PM Hi All,I am using Trinidad TLD for JSF.I have implemented simple table, but having problem with Tree grid using same.I have read all document on Trinidad site for Tree grid, but they didn't mention about how we can fill data in tree grid by using Backing Bean.Next I dow... View Questions/Answers swings March 8, 2010 at 2:33 PM hii am using jcolorchooser in my application, in that there are no patterns and it have swatches,hsb,rgb options, just i want to add some different style patterns for back ground appearence. If any inbuilt patterns are there in java please send me the information........... ... View Questions/Answers jsp code problem March 8, 2010 at 12:35 PM hi, I am going to execute the following code which has been given your jsp tutorial.retrive_image.jsp:<%@ page import="java.sql.*" %><%@ page import="java.io.*" %><% // declare a connection by using Connection interface... View Questions/Answers Travelling Sales MAn Using GA March 8, 2010 at 11:50 AM travelling sales man problem using gafirst i have randomly select few cities which will be called population sizeex 10 cities are there then all the possible combination will be 10!,that will be too time consumin so let say 4 wii be population sizeafet that 1 selection: i had... View Questions/Answers dbms March 8, 2010 at 11:20 AM what is the process of creating new user in sql?tell me the process... ... View Questions/Answers java beginners March 8, 2010 at 10:10 AM thanks for the suggestion so I am sending the patteren in place of dots i want blank spaceaaaaaaaaaaaaaaaa a............a a........a a.....a a...a athanks regards ... View Questions/Answers caading March 8, 2010 at 5:22 AM how i create a soft ware wit Java code. can u please give me an example. ... View Questions/Answers Java interval problem March 8, 2010 at 2:26 AM I want to create a problem that finds the common interval of some periods of hours.For example if i have the time period: 10.00-12.00 and i have the time period: 11.00-13.00, i want to find the common interval of these two (which in the example is 11.00-12.00).Which is the best way to do... View Questions/Answers C program March 7, 2010 at 7:54 PM Thanks!!!!!!!!!!Please reply me some programs for DATA STRUCTURES (LINKED LISTS,ETC) ... View Questions/Answers jsp March 7, 2010 at 6:27 PM hi.. how to check the employee status in an organisation using the login and logout concepts in jsp. ... View Questions/Answers jsp March 7, 2010 at 6:24 PM hi.. get me the coding for logout in session while login. ... View Questions/Answers java March 7, 2010 at 3:05 Java March 7, 2010 at 3:03 How to Validate JRadioButton and How to Save data from JRadioButton to MS Access Database March 7, 2010 at 11:21 AM Hello Sir I want Store Corse Type that contains Two JRadioButton I want StoreInformation from JRadioButton to MS Access Database,and I want Select any One JRadioButton At a Time out of Two.plz Give Me Code Sir. ... View Questions/Answers I need your help March 7, 2010 at 10:43 AM For this one I need to create delivery class for a delivery service .however, this class should contain (delivery number which contains eight digits. The first four digits represent the year; the last four represent the area. E.g, the 76th delivery in 2010 has a complete delivery number of 20100076... View Questions/Answers server problem March 7, 2010 at 1:35 AM dear sir please give me best soloution how run hibernate and spring application on jboss and weblogic10.1.0 sever and compilethanks.. ... View Questions/Answers server problem March 7, 2010 at 1:28 AM Dear sir please give me full deatil how intall jboss5.1.0 and how compile application -ejb,hibernate,spring for admestration consoleand how intall jboss5.1.0GA server and i not found ant(compiler)thanks.. ... View Questions/Answers java code March 6, 2010 at 8:28 PM i m doing aproject on tspcan you provide me the entire source code for the problem in javafor 10 cities ... View Questions/Answers loading value into combo box by selecting value from other combo box March 6, 2010 at 6:19 PM I am doing the project "server management in online voting system".I create table "state" it contains state and code.each state table contain districts and code;each district table contain constituency and code.MY problem is that as i select st... View Questions/Answers C program March 6, 2010 at 3:25 PM Thank You .Write some C programs using pointers .Write some C programs using files.Please reply me . ... View Questions/Answers dynamic method dispatch March 6, 2010 at 3:22 PM can you give a good example for dynamic method dispatch (run time polymorphism) ... View Questions/Answers JDBC 4.0 March 6, 2010 at 3:16 PM Dear sir can you give me the connection string of oracle8.0 databaseand i know that we need of set classpath for classes12.zip for oracle 9ibut i dont know for oracle 8.0 whaich jar or zip i need to set?similarly can you give me code to connect microsoft sql server usi... View Questions/Answers core java March 6, 2010 at 3:09 PM why sun soft people introduced wrapper classes?do we have any other means to achieve the functionality of what a Wrapper class provides. ... View Questions/Answers swings March 6, 2010 at 2:14 PM how can i use the inbuilt color palette in java ... View Questions/Answers swings March 6, 2010 at 12:38 PM hii am using jcolorchooser in my application, in that there are no palettes and it have swatches,hsb,rgb options, just i want to add some palettes in my application. If any inbuilt palettes are there in java please send me the information........... ... View Questions/Answers java file upload in struts March 6, 2010 at 12:34 PM i need code for upload and download file using struts in flex.plese help me ... View Questions/Answers
http://www.roseindia.net/answers/questions/235
CC-MAIN-2016-36
en
refinedweb
To define a group, you should put the \defgroup command in a special comment block. The first argument of the command is a label that should uniquely identify the group. You can make an entity a member of a specific group by putting a \ingroup command inside its documentation block. The second argument is the title of the group.> */ /*\@{*/ /*\@}*/ Note that compound entities (like classes, files and namespaces) can be put into multiple groups, but members (like variable, functions, typedefs and enums) can only be a member of one group (this restriction is to avoid ambiguous linking targets). Doxygen will put members into that group where the grouping definition had the highest priority: f.i. \ingroup overrides any automatic A member group is defined by a //@{ ... //@} /*@{*/ ... /*@}*/(); //@} Here Group1 is displayed as a subsection of the "Public Members". And Group2 is a separate section because it contains members with different protection levels (i.e. public and protected). Go to the next section or return to the index.
http://www.star.bnl.gov/public/comp/sofi/doxygen/grouping.html
CC-MAIN-2016-36
en
refinedweb
Analyzing crash dumps can be complicated. Although Visual Studio supports viewing managed crash dumps, you often have to resort to more specialized tools like the SOS debugging extensions or WinDbg. In today’s post, Lee Culver, software developer on the .NET Runtime team, will introduce you to a new managed library that allows you to automate inspection tasks and access even more debugging information. –Immo Today are we excited to announce the beta release of the Microsoft.Diagnostics.Runtime component (called ClrMD for short) through the NuGet Package Manager. ClrMD is a set of advanced APIs for programmatically inspecting a crash dump of a .NET program much in the same way as the SOS Debugging Extensions (SOS). It allows you to write automated crash analysis for your applications and automate many common debugger tasks. We understand that this API won’t be for everyone — hopefully debugging .NET crash dumps is a rare thing for you. However, our .NET Runtime team has had so much success automating complex diagnostics tasks with this API that we wanted to release it publicly. One last, quick note, before we get started:. Getting Started Let’s dive right into an example of what can be done with ClrMD. The API was designed to be as discoverable as possible, so IntelliSense will be your primary guide. As an initial example, we will show you how to collect a set of heap statistics (objects, sizes, and counts) similar to what SOS reports when you run the command !dumpheap –stat. The “root” object of ClrMD to start with is the DataTarget class. A DataTarget represents either a crash dump or a live .NET process. In this example, we will attach to a live process that has the name “HelloWorld.exe” with a timeout of 5 seconds to attempt to attach: int pid = Process.GetProcessesByName("HelloWorld")[0].Id; using (DataTarget dataTarget = DataTarget.AttachToProcess(pid, 5000)) { string dacLocation = dataTarget.ClrVersions[0].TryGetDacLocation(); ClrRuntime runtime = dataTarget.CreateRuntime(dacLocation); // ... } You may wonder what the TryGetDacLocation method does. The CLR is a managed runtime, which means that it provides additional abstractions, such as garbage collection and JIT compilation, over what the operating system provides. The bookkeeping for those abstractions is done via internal data structures that live within the process. Those data structures are specific to the CPU architecture and the CLR version. In order to decouple debuggers from the internal data structures, the CLR provides a data access component (DAC), implemented in mscordacwks.dll. The DAC has a standardized interface and is used by the debugger to obtain information about the state of those abstractions, for example, the managed heap. It is essential to use the DAC that matches the CLR version and the architecture of the process or crash dump you want to inspect. For a given CLR version, the TryGetDacLocation method tries to find a matching DAC on the same machine. If you need to inspect a process for which you do not have a matching CLR installed, you have another option: you can copy the DAC from a machine that has that version of the CLR installed. In that case, you provide the path to the alternate mscordacwks.dll to the CreateRuntime method manually. You can read more about the DAC on MSDN. Note that the DAC is a native DLL and must be loaded into the program that uses ClrMD. If the dump or the live process is 32-bit, you must use the 32-bit version of the DAC, which, in turn, means that your inspection program needs to be 32-bit as well. The same is true for 64-bit processes. Make sure that your program’s platform matches what you are debugging. Analyzing the Heap Once you have attached to the process, you can use the runtime object to inspect the contents of the GC heap:); } This produces output similar to the following: However, the original goal was to output a set of heap statistics. Using the data above, you can use a LINQ query to group the heap by type and sort by total object size: var stats = from o in heap.EnumerateObjects() let t = heap.GetObjectType(o) group o by t into g let size = g.Sum(o => (uint)g.Key.GetSize(o)) orderby size select new { Name = g.Key.Name, Size = size, Count = g.Count() }; foreach (var item in stats) Console.WriteLine("{0,12:n0} {1,12:n0} {2}", item.Size, item.Count, item.Name); This will output data like the following — a collection of statistics about what objects are taking up the most space on the GC heap for your process: 564 11 System.Int32[] 616 2 System.Globalization.CultureData 680 18 System.String[] 728 26 System.RuntimeType 790 7 System.Char[] 5,788 165 System.String 17,252 6 System.Object[] ClrMD Features and Functionality Of course, there’s a lot more to this API than simply printing out heap statistics. You can also walk every managed thread in a process or crash dump and print out a managed callstack. For example, this code prints the managed stack trace for each thread, similar to what the SOS !clrstack command would report (and similar to the output in the Visual Studio stack trace window): foreach (ClrThread thread in runtime.Threads) { Console.WriteLine("ThreadID: {0:X}", thread.OSThreadId); Console.WriteLine("Callstack:"); foreach (ClrStackFrame frame in thread.StackTrace) Console.WriteLine("{0,12:X} {1,12:X} {2}", frame.InstructionPointer, frame.StackPointer, frame.DisplayString); Console.WriteLine(); } This produces output similar to the following: ThreadID: 2D90 Callstack: 0 90F168 HelperMethodFrame 660E3365 90F1DC System.Threading.Thread.Sleep(Int32) C70089 90F1E0 HelloWorld.Program.Main(System.String[]) 0 90F36C GCFrame Each ClrThread object also contains a CurrentException property, which may be null, but if not, contains the last thrown exception on this thread. This exception object contains the full stack trace, message, and type of the exception thrown. ClrMD also provides the following features: - Gets general information about the GC heap: - Whether the GC is workstation or server - The number of logical GC heaps in the process - Data about the bounds of GC segments - Walks the CLR’s handle table (similar to !gchandles in SOS). - Walks the application domains in the process and identifies which modules are loaded into them. - Enumerates threads, callstacks of those threads, the last thrown exception on threads, etc. - Enumerates the object roots of the process (as the GC sees them for our mark-and-sweep algorithm). - Walks the fields of objects. - Gets data about the various heaps that the .NET runtime uses to see where memory is going in the process (see ClrRuntime.EnumerateMemoryRegions in the ClrMD package). All of this functionality can generally be found on the ClrRuntime or the ClrHeap objects, as seen above. IntelliSense can help you explore the various properties and functions when you install the ClrMD package. In addition, you can also use the attached sample code. Please use the comments under this post to let us know if you have any feedback! Join the conversationAdd Comment To answer my own comment – it seems you can self-debug. Seems to work – would be nice to know if it is supported. BTW, there is an error in the sample – I assume it should be: // If we don't have the dac installed, we will use the long-name dac in the same folder. if (string.IsNullOrEmpty(dacLocation)) // ***** without '!' ? ****** dacLocation = version.DacInfo.FileName; @Hrvoje, yep that's an error, sorry about that, it should not have the '!'. 🙁 The self-debug case is not a supported scenario because there's not a sensible way to make it work. For example, you can attempt to inspect your own heap with it, the ClrMD api itself will be allocating objects, which will trigger GCs, which in turn will cause your heap walk to fail when a GC rearranges the heap as you were walking it. This should always be used to inspect another process (or crash dump). How can I call this to dump all objects under a given class or namespace from code? I want to dump all objects under a given dialogue window when that window is supposedly closed and deallocated. This would greatly help in finding objects that have not been garbage collected. I also want to do a memory snapshot of allocated ojbects by full type name and object id and then at a later time compare that to the current memory snapshot. I'd want only the objects in the second snapshot that do not exist in the first one to be printed. This helps for code that should clean up all of its resources when it exits. I've used this in C++ in the past to put in automatic debug only checks for memory leaks (e.g., snapshot, call method A, snapshot, compare snapshots, if snapshots differ, break in debug mode). @Tom ClrType instances have a .Name which contains the namespace as well as the typename. You can use this to separate out the heap by namespace (though I suppose it would be better to provide a Namespace property instead of making you parse out the name…that's not currently in the API). As to your second question about doing heap diffs, the main obstacle to doing this is that the GC relocates objects, and an object can still be alive between two snapshots, but the object got moved…so you don't know the instance is the same. To solve this, we use a heuristic which basically does a diff of the type statistics (100 Foo objects in snapshot 1, 110 Foo objects in snapshot 2, 10 Foo objects difference). In fact, perfview's memory diagnostics already does this today:…/details.aspx (Memory diagnostic in PerfView is actually built on top of ClrMD.) is there any limitation on the kind of process we can attach to? e.g not runnning as admin and more cause i have tried to attach to one of my own processes and got exception Could not attach to pid 514, HRESULT: 0xd00000bb hiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiii i write a dll in project ATL with a function witch function inclusive a input and output parameter is that byte* type STDMETHODIMP CMSDllServer::sum22(BYTE* aa,SHORT len) { return S_OK; } in use during from this function in a windowes application c# doesnot exist no problem and array values returned to truth //out put 1,2,3,4,5 but the same function in a webservice returned to only first index array and exist very big problem byte[] Packet = new byte[5]; //out put 1,0,0,0,0 help me pleaseeeeeeeeeeeeeeeeeeeeeeeeeeeee thanx Seems my original comment never made it through. Our use case seems to be a bit simpler than this dll was intended to be used. We produce mission critical software (high availability and fault tolerance required, low installation count) which sometimes presents a challenge to monitor and diagnose. I view ClrMD as a possibility to implement a miniature adtools-like component that would always be present with the deployment of our software to cover these use cases: monitor the target process and take a "memory object count" if memory usage > X monitor the target process (GUI application) and take a "call stack dump" if the UI thread is blocked for more than 250ms (there was a MS visual studio extension that did the same thing, but produced minidumps – there was a nice video about it on channel9, but I can't seem to find it. Basically, microsoft was using it to detect usability problems in production). However, I would also like to see the following: integrated ability to take usable minidump and full dumps of attached process, something like clrdump (google it, first result – seems the comments delete any of my posts that contains a link without warning…) ability to take a snapshot of native call stacks – 95% of our threads are .net, but there are a couple of native threads we integrate with through C++/CLI and we would really like to see both native and managed callstacks. There should be an easy way to convert the call stack to a readable format with the matching symbols (symbol server support). I know this may not be your primary use case, but it would complete our needed feature set. Hi, Nice ! 🙂 It's really nice to see this kind of things released publicly since all analysis needs (automated or not) are not covered by SOS or even by other extensions like Steve Johnson's SOSEX (). Gaël @Hrvoje You can do this using the IDebug* interfaces provided by DbgEng (full details too long for a comment here, but you can search for IDebugControl::GetStackTrace). ClrMD provides a (mostly) complete wrapper around those interfaces: using Microsoft.Diagnostics.Runtime; … IDebugControl control = (IDebugControl)dataTarget.DebuggerInterface; control.GetStackTrace(…); IDebugClient client = (IDebugClient)dataTarget.DebuggerInterface; client.WriteDumpFile(…); You can use the IDebug* interfaces to fully control your process (go, step, etc), but again…that's more detail than I can put in this comment. The API is still in beta too. =) @li-raz I should have pointed out in the post, attaching to a process requires debugger privileges (in this case, that almost certainly means running the program as administrator). You do not need admin privileges to load a crash dump. Is the clrCmd .net library on a path to be fully supported and part of .NET 4.x or later? We can use beta version code in our development environment but not in our production environment given the production environment has many different long running server processes. Here is the quote from the blog post:. @Tom: The current version of ClrMD is a pre-release, which means its license doesn’t allow usage in production environments. Once we turn ClrMD into a stable release it will allow production use. Is there any plan to provide one for .Net 3.5 framework (CLR 2)? Thanks for this Immo. I've actually spent the past few months writing exactly the same library using the CorDebug, MetaData and Symbol server APIs. I've actually got it all up and running, although I've targeted crash dumps (full and partial) as opposed to a live process. Do you have any thoughts on ClrMD vs CorDebug? Obviously CorDebug is geared towards debugging and happens to support crash dumps as an extra bonus, while ClrMD is focussed on analysis and not debugging. It's great that ClrMD takes away all of the grunt labour I've had to do in order to get CorDebug up and running for crash dumps, like implementing ICorDebugDataTarget (great fun for partial dumps!) and parsing MetaData binary signatures which is a truely painful experience. It's awesome to have an officially supported way of doing this now, but any thoughts on whether CorDebug will continue to support crash dumps? And any future plans for ClrMD, is this just the beginning ofClrMD ? Really excited by this so would love to hear anything you have to say 🙂 ps – is the team hiring? I've got experience in CorDebug, MetaData and the Symbol Server API's 😀 @kevinlo2: It's on our list but we don't have a timeline yet. This is great! I can see this going pretty big in just overall debugging. I still seem to be getting the "unable to attached to pid" error, though. Of course, I've only tried on the calc, notepad, and issexpress processes. Keep up the awesome work! Finally got past an issue attaching to a running process! I'm not seeing any stack traces in the managed threads though. This is awesome stuff – I can't remember the last time I installed a framework and had so many "are you serious???" moments. Very cool – keep up the good work! This is awesome! Looking forward for further samples. I was looking to automate IIS app pool process memory dump / analysis and make it as self-service tool on our shared web farm. This sounds very promising for that. Very cool stuff – looking forward to reading more about this. I'd be interested to see a way of grabbing more information about the objects or even the whole objects themselves. I'm seeing odd behavior when trying to use retrieve native call stacks. When I try to use IDebugControl::GetStackTrace, it appears to not return all the frames in the stack. For example, if I retrieve the stack for thread zero in a sample dump file via IDebugControl::GetStackTrace in ClrMD, I get the following frames: ############ Frames for thread 0 [B128] ############ [0]: 7C82845C ntdll!KiFastSystemCallRet [1]: 77E61C8D kernel32!WaitForSingleObject [2]: 5A364662 w3dt!IPM_MESSAGE_PIPE::operator= [3]: 0100187C w3wp [4]: 01001A27 w3wp [5]: 77E6F23B kernel32!ProcessIdToSessionId If I look at the stack in Visual Studio or WinDbg, or if I retrieve it using IDebugControl::GetStackTrace in a WinDbg extension, I get the following: ############ Frames for thread 0 [B128] ############ [0]: 7C82845C ntdll!KiFastSystemCallRet [1]: 7C827B79 ntdll!ZwWaitForSingleObject <<<< Skipped by ClrMD [2]: 77E61D1E kernel32!WaitForSingleObjectEx <<<< Skipped by ClrMD [3]: 77E61C8D kernel32!WaitForSingleObject [4]: 5A364662 w3dt!WP_CONTEXT::RunMainThreadLoop [5]: 5A366E3F w3dt!UlAtqStartListen <<<< Skipped by ClrMD [6]: 5A3AF42D w3core!W3_SERVER::StartListen <<<< Skipped by ClrMD [7]: 5A3BC335 w3core!UlW3Start <<<< Skipped by ClrMD [8]: 0100187C w3wp!wmain [9]: 01001A27 w3wp!wmainCRTStartup [10]: 77E6F23B kernel32!BaseProcessStart Note that all the frames listed by ClrMD exist in the true call stack, but it has skipped the indicated frames in between. Have you seen this behavior before? The code I'm using looks like this: DataTarget dataTarget = DataTarget.LoadCrashDump(@"c:scratchmydump.dmp"); // Not actually using the ClrRuntime in this snippet, but my actual code is doing this, // so I'm including it in case it matters. ClrInfo clrInfo = dataTarget.ClrVersions[0]; string dacLocation = clrInfo.TryGetDacLocation(); ClrRuntime clrRuntime = dataTarget.CreateRuntime(dacLocation) ; // Retrieve the required debugging interfaces IDebugControl4 control = (IDebugControl4) dataTarget; IDebugSystemObjects3 sysObjs = (IDebugSystemObjects3) dataTarget; sysObjs.SetCurrentThreadId(0); DEBUG_STACK_FRAME[] frames = new DEBUG_STACK_FRAME[100]; uint frameCount = 0; control.GetStackTrace(0, 0, 0, frames, 100, out frameCount); // Note: after the call, frameCount is set to 6, instead of 11, like it should be As you may have noticed from my output above, I'm also having some issues with symbol resolution via IDebugSymbols::GetNameByOffset where occasionally I'm getting incorrect or incomplete names; but, I'm hoping that this is just something I have wrong in the code that sets up the symbol path. Slight typo in the above code sample. These two lines: IDebugControl4 control = (IDebugControl4) dataTarget; IDebugSystemObjects3 sysObjs = (IDebugSystemObjects3) dataTarget; should have read IDebugControl4 control = (IDebugControl4) dataTarget.DebuggerInterface; IDebugSystemObjects3 sysObjs = (IDebugSystemObjects3) dataTarget.DebuggerInterface; Great… Love this. One question. I have a simple program that allocates a List<> and adds instances of a class called PayLoad (see code below). After the first time through the while loop, one Payload instance is allocated and added to the List<PayLoad>. When I dump all the heap allocated objects from my test program's namespace I see: Name: Total Size: Total Number: TestProgram.PayLoad[] 96 2 TestProgram.PayLoad 24 1 Does anybody have any idea where the TestProgram.PayLoad[] instances come from? I'm trying to measure memory usage of my namespace object and this seems to skew it a bit. Thanks. class Program { static void Main(string[] args) { List<PayLoad> payloadList = new List<PayLoad>(); while (true) { Console.WriteLine("Adding new payload"); payloadList.Add(new PayLoad()); Console.ReadLine(); } } } class PayLoad { public int a; public int b; } I tried to run the sample to analyze .dmp file which was taken from a program running on the same machine as the sample. but i keep getting the following exception when trying to create the runtime object: Message: Failure loading DAC: CreateDacInstance failed 0x80131c30 at Microsoft.Diagnostics.Runtime.Desktop.DacLibrary.Init(String dll) at Microsoft.Diagnostics.Runtime.Desktop.DacLibrary..ctor(DbgEngTarget dataTarget, String dll) at Microsoft.Diagnostics.Runtime.DbgEngTarget.CreateRuntime(String dacFilename) at DumpFetch.App..ctor() at DumpFetch.App() Any ideas? I'm analyzing 11 GB dmp file and top size type is "Free". What does "Free" means? Free is a pseudo object that represents a free space (a hole) in the GC heap. They exist when the GC happens but decides not to compact. Having large amounts of fee space is not necessarily bad (especially if they are a few big chunks), since these to get reused. When these free areas get too numerous/small, the GC will compact. Large objects (> 85K), are treated differently, and placed in their own region of memory and currently are not ever compacted (however in V4.5.1 we have added the ability to do this explicitly) Is any way to find using ClrMD that some object is reachable from roots or not reachable from roots? Yes. In fact, this is how PerfView uses ClrMD, but you have to calculate the object graph manually to find that information. There's not a simple function call to do this. The functions which you use to do most of the work are: ClrHeap.GetObjectType – To get the type of the object in the root. ClrType.EnumerateRefsOfObject – To enumerate what objects the current object references. With these functions you build the full heap graph…and any object not in the graph is considered dead. Any object you do reach is considered live. (There are false positives and negatives from this approach, but they are rare. We unfortunately aren't 100% accurate in the root reporting for all versions of the runtime.) Thank you, Lee Culver! It helped me find cause of memory leak. @Alexey: Can you provide me your code for calculating the heap graph? (Mail: toni.wenzel@googlemail.com) I'm currently investigating a memory leak of our own application. It would be interesting how you managed this. THX! What the ClrRoot.Address used for? Points this to the same as ClrRoot.Object? How can I receive following informations: The object which is pinned by the root (I guess ClrRoot.Object) I would like to know which object prevent which object from being collected (GC relocated). What is the ClrType.GetFieldForOffset() "inner" parameter used for? Great work! But when I tried it out on a production dump I don’t get the same answer from the sample code as I got from WinDbg for the command "!dump heap –stat". Example for strings The sample code returns: 16 318 082 199 815 System.String But in WinDbg I get : 21004872 191564 System.String I miss 3 Mb of string objects?! And when I trying to search for “ClaimsPrincipal” objects it’s possible to locate 46 of them with WinDbg but none with ClrMD? Is it something I have missed? Wow, I wish I knew about this weeks ago. This is a fantastic little library and it's making my deep investigations into many millions of objects much more bearable. Thanks kindly 🙂 So, I might be doing something wrong, but I'm having a hard time working with array fields while trying to browse an object. I'm currently using ClrInstanceField.GetFieldValue(parentObjectAddress) which I was hoping would give me the address of the Array, since that is what it does for objects. Instead it seems to be returning something else? It also seems like it thinks the array in every generic List<T> is an Object[] but this would imply that Generic collections don't prevent boxing, which I know to be false. I'm also curious that when I use GetFieldValue on an Object type, the address it gives back seems to work fine with field.Type, but heap.GetObjectType for the same address returns null or sometimes strange values. I only stumbled this way when trying to account for polymorphism while browsing referenced objects deeper than my starting point, since I figured ClrInstanceField.Type would reflect the general type definition, not necessarily the actual type stored in a particular instance (e.g. field definition type: IEnumerable, instance reference: ArrayList). Maybe you could provide some more sample code now that this has been in the wild for a while? Without documentation it has been hard to infer how one might dig deep into an object graph, especially regarding fields that aren't primitive values (structs/arrays/objects/etc.). There are very few deep resources online, though the ScriptCs module and a few other blogs have been helpful, I am encountering plenty of things that require a lot of trial and error, which is costing me more time than I was hoping this tool would save me. I still think the knowledge will benefit me in the long run, but a followup would be nice. Maybe some of those internal automated diagnostics might be safe to share with the public? On a positive note, I've had great success combining some work I did automating against dbgeng and SOS with this library and they appear to be complimenting each other well (since I already have some SOS parsing implemented). I love this tool, but would also like to use an app written with it against some dumps containing unmanaged code from old legacy apps to automate large numbers of repetitive actions. I'm thinking the tool can do it because DebugDiag v2 uses ClrMD and it can open unmanaged dumps. But I can't figure out how to load the required unmanaged-compatible clr10sos from ClrMD-based code. The code seems to required the FindDAC step and, of course, there are no CLR versions in the dump at all. How can I get ClrMd to use the Clr10Sos and let me use the ReadMemory, thread-related, and other convenient debugging commands? Thanks! -Bob I realize now that I didn't put my name with my question, but I've further detailed the question above on StackOverflow. Sadly, I don't think there are many people using this extensively yet, so I'm concerned by the fact that the question is already well below the average number of viewers for a new question. I'm posting the link here both for experts that might see this as well as others who might have the same question: stackoverflow.com/…/how-to-properly-work-with-non-primitive-clrinstancefield-values-using-clrmd Hi, We need to parse dictionary of type <string, List<objects>> using ClrMD. Dictionary are being stored as System.Collections.Hashtable+bucket[] into memory as per our understanding. We have tried to parse dictionary of type<string, List<objects>> by enumerating internal Hashtable+bucket[] objects, but we aren’t successful. We are able to parse dictionary values(List<objects>>) as individual arrays[]. But we aren’t able to correlate these individual arrays[] belongs to which Keys. To know this, we need to parse dictionary of type <string, List<objects>>. Can you please provide us pointers/direction on how to parse dictionary using ClrMD ? Sample of code will be helpful. This is amazing ! This is going to help me automating the debugging of W3WP on certain situations, I can't explain how thrilled I am, previously I would have had to create a memory dump using procdump, then pull up windbg and start issuing commands to gather the desired info, all of this manually and error prone ! Now I can make automatically from my APP, with a LIVE PROCESS !! We do a lot of dump analysis where I work and we started to use your library a lot, it's awesome. I recently made an extension library for ClrMD which allows to use ClrMD with LINQPad; being able to interactively navigate in the dump with ClrMD and LINQPad is great for finding unknown issue. When we spot a particular issue pattern, we take the code we did in LINQPad and put it in an analysis rule in DebugDiag. The project is on GitHub if you want to take a look: github.com/…/ClrMD.Extensions It would be great to hear your thoughts about it, does it fit with your vision about where ClrMD is evolving? My main concern is about my 'ClrObject' class which take care of most of the LINQPad integration, I saw that you created one in the DebugDiag API (which is also available in the ClrMD samples). Do you plan to include the ClrObject class from DebugDiag directly in ClrMD? Do you plan to change ClrMD in a way that I would not be able to create my own ClrObject class? Thanks for your time, Jeff Cyr There is a memory leak when calling the dataTarget.CreateRuntime method. Where can I report this? Hi, I have posted a question about finding root of object using CLR MD at stackoverflow.com/…/trying-to-find-object-roots-using-clr-md Can you please provide any suggestion about it. Hi, maybe someone would be interested, I wrote an application that expose CLRMD library via GUI: github.com/…/LiveProcessInspector Posted a question at stackoverflow.com/…/finding-a-types-instance-data-in-net-heap. Can you help? How do I start a process using the debugger API? I have been able to use CLRMD to attach to live proceesses that are already running and monitor them for exceptions and other debug events. Now I want to be able to launch an app under the debugger API so I can capture debug events that occur during the startup sequence. CLRMD does not expose this functionality so in a C++/CLI dll I wrote code that uses the unmanaged API to launch the process and then use DataTarget::CreateFromDebuggerInterface() to be able to use the CLRMD functionality. (error handling omitted): PDEBUG_CLIENT debugClient = nullptr; hr = DebugCreate( __uuidof( ::IDebugClient ), (void**)&debugClient ); System::Object^ obj = Marshal::GetObjectForIUnknown( ( System::IntPtr )debugClient ); Interop::IDebugClient^ pdc = (Interop::IDebugClient^)obj Interop::DEBUG_CREATE_PROCESS createFlags = (Interop::DEBUG_CREATE_PROCESS)1; // this value seems to work, don't know why; other values failed Interop::DEBUG_ATTACH attachFlags = Interop::DEBUG_ATTACH::INVASIVE_RESUME_PROCESS; hr = pdc->CreateProcessAndAttach( 0, exePath, createFlags, 0, attachFlags ); Interop::IDebugControl^ idc = ( Interop::IDebugControl^ )pdc; hr = idc->WaitForEvent( Interop::DEBUG_WAIT::DEFAULT, 1000 ); After calling WaitForEvent() the debugger should be attached and the process should be running. If I call idc->WaitForEvent( Interop::DEBUG_WAIT::DEFAULT, 1000 ); in a loop it will display the UI and run normally. However, when I try to connect the session to CLRMD I get errors. DataTarget^ target = DataTarget::CreateFromDebuggerInterface( pdc ); This always throws an Microsoft::Diagnostics::Runtime::ClrDiagnosticsException "Failed to get proessor type, HRESULT: 8000ffff"" at Microsoft.Diagnostics.Runtime.DbgEngDataReader.GetArchitecture() in c:workprojectsProjectsProcessMonitorsamplesdotnetsamplesMicrosoft.Diagnostics.RuntimeCLRMDClrMemDiagdbgengdatatarget.cs:line 164 at Microsoft.Diagnostics.Runtime.DataTargetImpl..ctor(IDataReader dataReader, IDebugClient client) in c:workprojectsProjectsProcessMonitorsamplesdotnetsamplesMicrosoft.Diagnostics.RuntimeCLRMDClrMemDiagdatatargetimpl.cs:line 30 at Microsoft.Diagnostics.Runtime.DataTarget.CreateFromDebuggerInterface(IDebugClient client) in c:workprojectsProjectsProcessMonitorsamplesdotnetsamplesMicrosoft.Diagnostics.RuntimeCLRMDClrMemDiagpublic.cs:line 2797 Inside the exception object it reports that the _HResult=0x81250002 I tried calling DataTarget::CreateFromDebuggerInterface() both before and after the target is connected and before and after WaitForEvent() is called – all fail the same way. Any help getting this to work is appreciated. Thanks. Exception thrown: 'Microsoft.Diagnostics.Runtime.ClrDiagnosticsException' in Microsoft.Diagnostics.Runtime.dll Additional information: This runtime is not initialized and contains no data. Any ideas?
https://blogs.msdn.microsoft.com/dotnet/2013/05/01/net-crash-dump-and-live-process-inspection/
CC-MAIN-2016-36
en
refinedweb
All Papervision applications have the following elements in common. They can be thought of as the guts of Papervision. Scene The scene is where objects are placed. It contains the 3D environment and manages all objects rendered in Papervision3D. It extends the DisplayObjectContainer3D class to arrange the display objects. Viewport The viewport displays the rendered scene. It’s a sprite that is painted with a snapshot of all the stuff contained in the view frustum. The viewport sprite inherits all the sprite methods and can be moved, positioned, and have effects added just like any normal sprite in flash. Camera The camera is your eye inside of the 3D scene and defines the view from which a scene will be rendered. Different camera settings will present a scene from different points of view. When rendering, the scene is drawn as if you were looking through the camera lens. Object An object is a combination of vertices, edges, and faces which provide a meaningful form for display, such as a car, avatar, or box. Objects are created in Papervision using primitives, or embedded or imported into Papervision from a modeling application such as Blender, Swift3D, or 3DSMax. Material The material is the texture which is applied to an object. Textures can consist of various formats such as bitmaps, swfs, or video, and interact with light sources creating bump map effects and shaders. Renderer The renderer draws a sequence of faces onto the viewport so that they make visual sense. Typically the renderer is set into a loop using onEnterFrame or Timer methods native to the flash player. Running a Papervision application is like simultaneously recording and projecting your 3D scene as shown in the image below. The scene is the middle man. It’s what the camera films and what the projector projects. You can think of the whole process as a combination of a movie camera and projector which both films and projects your 3D scene simultaneously. The scene is what the camera is pointed at and the objects are the items that the camera films. The materials are what your objects are dressed in and determine how the light interacts with the scene elements. The reels on your projector are your renderer and as your reels turn (using onEnterFrame or Timer methods) they cast the recorded scene onto your screen which is your viewport. The base code distributed with Chapter 1 of the book provides a great example of these six quantities in action. Click on the links below to see a demo, or download the code. Demo: Download: (BaseCodeMotion.zip) YouTube Video: Click more below to see the code example. package { //Flash imports import flash.display.Sprite; import flash.events.Event; import org.papervision3d.cameras.Camera3D; import org.papervision3d.core.proto.MaterialObject3D; import org.papervision3d.materials.WireframeMaterial; import org.papervision3d.objects.primitives.Sphere; import org.papervision3d.render.BasicRenderEngine; import org.papervision3d.scenes.Scene3D; import org.papervision3d.view.Viewport3D; //Define your class public class PapervisionClassMotion extends Sprite { //Define your properties private var viewport:Viewport3D; private var camera:Camera3D; private var scene:Scene3D; private var renderer:BasicRenderEngine; //Define your sphere variable. private var sphere:Sphere; private var sphereMaterial:MaterialObject3D; private var myTheta:Number=0; private var myX:Number=0; private var myY:Number=0; private var myZ:Number=0; public function PapervisionClassMotion() { trace(“Hello World”); //Intiate Papervision initPV3D(); //Create Your Objects createObjects(); //Create Renderer createRenderer(); } //Define your methods //Initialize Papervision private function initPV3D():void { // Create the viewport viewport = new Viewport3D(0, 0, true, false); addChild(viewport); // Create the camera camera = new Camera3D(); // Create the scene scene = new Scene3D(); // Create the renderer renderer = new BasicRenderEngine(); } //Create your objects private function createObjects():void { // Create a material for the sphere sphereMaterial = new WireframeMaterial(0xFFFFFF); // Create use wireframe material, radius 100, default 0,0,0 sphere = new Sphere(sphereMaterial, 100, 10, 10); sphere.x=-300; sphere.y=130; // Add Your Sphere to the Scene scene.addChild(sphere); } //Loop Renderer private function createRenderer():void{ addEventListener(Event.ENTER_FRAME, myLoop); } //Single Loop private function myLoop(evt:Event):void { //Rotate around x-axis sphere.rotationX+=2; //Increase angle myTheta += 2; //plot a circular orbit sin and cos myX = Math.cos(myTheta * Math.PI / 180) * 100-300; myZ = Math.sin(myTheta * Math.PI / 180) * 100; //Set X and Z of spher sphere.x = myX; sphere.z = myZ; //Fattening and scrinking myY = 1+Math.cos(myTheta * Math.PI / 180)*.7; sphere.scaleX=sphere.scaleZ= 1+Math.sin(myTheta * Math.PI / 180)*.7; //Elongate sphere.scaleY =myY; //Render Scene renderer.renderScene(scene, camera, viewport); } } } Mike, This is entirely new to me, so I am sure I am missing something obvious. When I attempt to import the zip into Flex Builder 3 it says it is invalid, when i unzip it and try to import it, it says its not a valid project. What am i doing wrong. I am talking about BaseCodeMotion.zip thank you
https://professionalpapervision.wordpress.com/2008/11/18/guts-of-a-papervision-application/
CC-MAIN-2016-36
en
refinedweb
On Tue, 14 Dec 2010, Reimar D?ffinger wrote: > On Tue, Dec 14, 2010 at 05:24:22PM +0100, Guennadi Liakhovetski wrote: > > +#ifndef SUM8_MACS > > +#define SUM8_MACS(sum, w, p) SUM8(MACS, sum, w, p) > > +#endif > > + > > +#ifndef SUM8_MLSS > > +#define SUM8_MLSS(sum, w, p) SUM8(MLSS, sum, w, p) > > +#endif > > + > > +#ifndef SUM8P2_MACS_MLSS > > +#define SUM8P2_MACS_MLSS(sum, sum2, w, w2, p) SUM8P2(sum, MACS, sum2, MLSS, w, w2, p) > > +#endif > > + > > +#ifndef SUM8P2_MLSS_MLSS > > +#define SUM8P2_MLSS_MLSS(sum, sum2, w, w2, p) SUM8P2(sum, MLSS, sum2, MLSS, w, w2, p) > > +#endif > > Can't you instead do something like (note I usually get the syntax > wrong, plus I don't know if there are issues if the generated > name is a macro as well): > #define SUM8(op, sum, w, p) SUM8_#op(sum, w, p) Sorry, I don't see how this can work here. SUM8(...) is already defined and that's what most architectures will keep using, or maybe I misunderstood you here? > > + union {int64_t x; int32_t u32[2];} u = \ > > + {.x = (sum),}; \ > > Does using a union create faster code that doing it "properly" > with shifts/ors/...? > To might knowledge that this works is just something that > gcc "currently" promises, but is not part of the C standard. Haven't looked, which code is faster, but I don't think there is a way for gcc to change this - a lot of software would break, if gcc starts interpreting a union of a 64-bit integer and an array of two 32-bit ints differently, than now, IMHO. > > + : [hi] "+r" (u.u32[1]), \ > > + [lo] "+r" (u.u32[0]), \ > > + [wp] "+r" (wp), \ > > + [pp] "+r" (pp) \ > > Do all compilers/compiler versions that are in use support > named asm arguments? Do you know any ones, used for SH, where this is not supported? I don't. > I also think as a paranoia measure these should be +&r > (but I admit I never understood that so 100%). hm, don't remember seeing this - early clobber input-output? Does it make sense at all? Early clobber tells the compiler, that it cannot reuse an input register for this output parameter, and this is anyway the case, if it is also an input, isn't it? > > +#if (!defined(CONFIG_FLOAT) || !CONFIG_FLOAT) && (!defined(FRAC_BITS) || FRAC_BITS > 15) > > I don't think it's valid for those not to be defined? It is, that's why I added them. They are only defined in mpegaudiodec_float.c, and this header is included from many other files. Thanks Guennadi --- Guennadi Liakhovetski, Ph.D. Freelance Open-Source Software Developer
http://ffmpeg.org/pipermail/ffmpeg-devel/2010-December/084044.html
CC-MAIN-2016-36
en
refinedweb
malloc - a memory allocator #include <stdlib.h> void *malloc [CX] and set errno to indicate the error.and set errno to indicate the error. The malloc() function shall fail if: - [ENOMEM] - [CX] Insufficient storage space is available.Insufficient storage space is available. None. None. None. None. calloc(), free(), realloc() ,, the requirement to set errno to indicate an error is added. - The [ENOMEM] error condition is added.
http://pubs.opengroup.org/onlinepubs/007904875/functions/malloc.html
CC-MAIN-2016-36
en
refinedweb
Alright, so I'm trying to code a program that will have the user input a collection of names and scores and allow them to calculate the highest and second highest scores. The two main questions I have are as follows: CODE: package homework7; import java.util.Scanner; public class ScoreCalculator { public static void main(String[] args) { int i = 0; Scanner keyboard = new Scanner(System.in); System.out.print("Please enter the number of students: "); int count = keyboard.nextInt(); keyboard.nextLine(); do { i++; System.out.print("Please enter score number " + i + ": "); int score = keyboard.nextInt(); keyboard.nextLine(); } while (i <= count); } } - How can I change it to "Please enter name and score number" in the do loop and still extract only the int value despite the fact that a String is also being entered? - How can I store each int the user inputs into a separate variable within the do loop, and then extract those values and use them at the end to calculate the two highest scores? Please keep it simple - this is for a class and we aren't very in depth yet. That means no arrays, no separate classes, no files or lists.
http://www.javaprogrammingforums.com/%20loops-control-statements/14560-user-input-do-loop-printingthethread.html
CC-MAIN-2016-36
en
refinedweb
On Sat, Apr 24, 2010 at 1:32 AM, P.J. Eby <pje at telecommunity.com> wrote: > > If you don't mind trying a simple test for me, would you patch your > pkg_resources to comment out this loop: > > for pkg in self._get_metadata('namespace_packages.txt'): > if pkg in sys.modules: declare_namespace(pkg) That looks much better. It is roughly half the time (450 ms -> 250 ms). I had a simple test set with a directory containing N empty *.egg-info directory, and the import time was proportional to N, now it does not matter anymore. > This change is not backward compatible with some older packages (from years > ago) that were not declaring their namespace packages correctly, but it has > been announced for some time (with warnings) that such packages will not > work with setuptools 0.7. > > (By the way, in case you're thinking this change would only affect namespace > packages, and you don't have any, what's happening is that the > _get_metadata() call forces a check for the *existence* of > namespace_packages.txt in every .egg-info or .egg/EGG-INFO on your path, > whether the file actually exists or not. In the case of zipped eggs, this > check is just looking in a dictionary; for actual files/directories, this is > a stat call.) Yes, that's exactly what I was seeing in the strace output. Is there a design document or something else decribing how the namespace mechanism works for setuptools ? I would like to support namespace package in my own packaging project, but it is not clear to me what needs to be done on my side of things. David
https://mail.python.org/pipermail/distutils-sig/2010-April/016031.html
CC-MAIN-2016-36
en
refinedweb
Python Programming, news on the Voidspace Python Projects and all things techie. New in unittest: Test Discovery and the load_tests protocol for Python 2.7 and 3.2 A feature that has long been missing from unittest, is automatic test discovery. This alone is a major reason why people move to alternative frameworks like nose and py.test. Test discovery is now in unittest, but it missed version 3.1 of Python (which is now at release candidate) and will be in versions 2.7 & 3.2. Automatic test discovery is where you don't need to provide your own test collection machinery, but have a tool that can automatically find and run all the tests in a project. So long as your tests are compatible with the new test discovery (see below) you can now do: python -m unittest discover This will find all the test files that match the default pattern ('test*.py') and run all the tests they contain. It also has a customization hook called load_tests which enables you to customize which tests are loaded and run. This test discovery is not as sophisticated as the discovery in nose or py.test, but it is a good start and will be sufficient for many projects. The system is as follows... Discovery from the command line takes three optional parameters (plus the -v switch to run tests verbosely) which can be passed in by position or by keyword. The parameters are the directory to start discovery (defaults to the current directory), the pattern for matching test modules (defaults to 'test*.py') and the top-level directory of your project (defaults to whatever the start directory is): python -m unittest discover myproject/tests/ '*test.py' myproject/ python -m unittest discover -s myproject/tests/ -p '*test.py' -t myproject/ python -m unittest discover -v -p '*test.py' All your tests must be importable from the top level directory of your project (they must live in Python packages). The start directory is then recursively searched for files and packages that match the pattern you pass in. Tests are loaded from matching modules, and all tests run. Discovery is implemented in the TestLoader class as the discover method. It delegates to loadTestsFromModule to load all tests after discovering and importing all modules that match the pattern provided. The actual signature is: TestLoader().discover(start_directory, pattern='test*.py', top_level_dir=None) The customization hook is implemented in loadTestsFromModule and is available to all systems that uses the standard loader, not just during discovery. Iff a test module defines a load_tests function then loadTestsFromModule will call this function with loader, tests, None. This should return a suite. An example 'do nothing' implementation of load_tests for a test module would be: return tests One use case would be to exclude certain TestCase subclasses from being used (if they are abstract base classes for other tests) during a test run.: if pattern is None: # if loaded as a module just return the normal tests return tests suite = TestSuite() suite.addTests(tests) # continue discovery from this directory this_dir = os.path.dirname(os.path.abspath(__file__)) suite.addTests(loader.discover(this_dir, pattern)) return suite. Both the load_tests protocol and test discovery are useful new features in unittest. I expect test discovery in particular to mature, but it is definitely already usable. The implementation uses os.relpath; so the current trunk version of unittest.py can only run on Python 2.6 or more recent. At some point I'll backport it to work with Python 2.5 / 2.4. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2009-06-01 13:00:24 | | Categories: Python Tags: testing, unittest New in unittest: Other Minor Changes There are a couple of other minor improvements to unittest that I haven't already mentioned. The TestResult class has two new methods: startTestRun and stopTestRun. These are called, unsurprisingly, at the start and end of the test run. A common way to create a custom test framework is to use a subclass of TextTestRunner which uses a custom TestResult class. You do this by overriding the _makeResult method: def _makeResult(self): return CustomTestResult(self.stream, self.descriptions, self.verbosity) I've documented this. The addition of startTestRun and stopTestRun makes creating custom test result objects (for example which push results to a database whilst the tests are running) easier - and it is my intention to write up better documentation on building test infrastructure with unittest as this whole area is woefully under-documented. Another minor change is that TestSuite now only accesses its tests by iterating over itself. This enables you to do lazy generation of tests where the tests are created when the suite is iterated over; another customization point. Many of the current crop of changes to unittest were done with help and input from Robert Collins and Jonathan Lange. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2009-06-01 11:33:33 | | Categories: Python Tags: testing, unittest New in unittest: Cleaning up resources with addCleanup One of the new features in unittest for Python 2.7 / 3.1 is better support for resource deallocation through cleanup functions. This isn't a new idea, it's something that is already in use in the Bazaar, Twisted and Zope test frameworks. A standard technique for allocating resources (creating temporary files or listening on a socket for example) needed during a test is to allocate them in the setUp method and deallocate them in the tearDown method. Whether the test passes or fails the tearDown method is run; however an exception in the setUp method means that neither the test nor tearDown are run. If you need to allocate multiple resources inside setUp, or need to do the same inside the body of the test, then you need to manually track them and in the event of failure only deallocate the ones that were successfully allocated: try: self.resource1 = create_resource1() try: self.resource2 = create_resource2() except: self.resource2.close() raise except: self.resource1.close() raise (If you do something similar inside the body of the test you use try...finally instead of try...except.) Cleanup functions provide a cleaner approach to resource allocation. Once you have created a resource you call addCleanup with the function that deallocates this. After tearDown, or in event of an exception being raised inside setUp, all the cleanup functions are executed in the reverse order that they were added (LIFO). The above example becomes: self.resource1 = create_resource1() self.addCleanup(self.resource1.close) self.resource2 = create_resource2() self.addCleanup(self.resource2.close) Cleanup functions can themselves call addCleanups if they need to, and if you ever need to execute all the cleanups you can call doCleanups() (for example you want to call it manually at the start of tearDown). This isn't just useful for inside setUp, but is also for tests themselves. In some cases it can be a useful alternative to setUp and tearDown. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2009-06-01 01:06:53 | | Categories: Python Tags: testing, unittest New in unittest in Python 2.7 and 3.1: Better Command Line Features One of the things I've been working on over the last few weeks is the Python unittest module. By virtue of having been included in the standard library for many years unittest is the most widely used Python testing framework. In Python 2.7 and Python 3.1 / 3.2 it has had some much overdue attention, with several new features added and some of them by me. I've already blogged about the new assert methods, in the next few blog entries I'll catalogue the other changes I've been involved in. Note It looks like the command line improvements actually appeared too late to make it into Python 3.1 and will be in Python 3.2 instead - along with test discovery. The normal way of making your test files individually executable with unittest is to include the following code at the end of the file: unittest.main() This enables the tests in the file to be run from the command line: python test_something.py What I didn't know was that you could actually pass the name of a test suite, test class or individual test. You can also pass the '-v' flag to run the tests in verbose mode (printing individual test names to stdout as they are run): python test_something.py -v TestClass python test_something.py -v TestClass.test_method Now that Python standard library modules can be run from the command line with the -m command line option we can do better. With unittest in Python 2.7 and Python 3.1 you will be able to specify a test module, class or individual test at the command line: python -m unittest -v test_something python -m unittest test_something.TestClass.test_method This removes the need for the final two lines in your test modules if you don't want them there. You can use this to run tests from multiple modules: python -m unittest -v test_something test_something_else By default main calls sys.exit once it has finished running tests, which makes it inconvenient to use from the interactive interpreter. It now takes two new optional parameters, one to switch off the automatic exit and one to run the tests in verbose mode: >>> from unittest import main >>> main(module='test_something', exit=False, verbosity=2) If exit is False then main returns an object whose result attribute is the TestResult instance used during the test run. You can introspect this if you want more information about the test run. Like this post? Digg it or Del.icio.us it. Posted by Fuzzyman on 2009-06-01 00:13:16 | | Categories: Python Tags: testing, unittest Archives This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License. Counter...
http://www.voidspace.org.uk/python/weblog/arch_d7_2009_05_30.shtml?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+voidspace+%28The+Voidspace+Techie+Blog%29#e1096
CC-MAIN-2016-36
en
refinedweb
Nabla computes the derivatives by applying the classical differentiation rules at bytecode level. When an instance of a class implementing UnivariateFunction is passed to its differentiate method, Nabla tracks the mathematical operations flow that leads from the t parameter to the return value of the function. At the bytecode instructions level, all operations are elementary ones. Each elementary operation is then changed to compute both the value and the derivatives. Nothing is changed to the control flow instructions (loops, branches, operations scheduling). Analysis and transformation of the bytecode is realized using both the core API and the tree API from the asm bytecode manipulation and analysis framework. The entry point of this differentiation process is the differentiate method of the MethodDifferentiator which is called from the ForwardModeDifferentiator class for processing the f method of the user class. All the changed operations belong to a small subset of the virtual machine instructions. This set contains basic arithmetic operations (addition, subtraction ...), conversion operations (double to int, long to double ...), storage instructions (local variables, functions parameters, instance or class fields ...) and calls to elementary functions defined in the Math, StrictMath, FastMath or similar classes. There is really nothing more! For each one of these basic bytecode instructions, Nabla knows how to map them to a mathematical equation and how to hand these equations to a class that will compute derivatives. Lets consider the DADD bytecode instruction and consider only first derivative for now. This instruction corresponds to the addition of two real numbers and produces a third number which is their sum. Nabla maps the instruction to the equation: and calls the DerivativeStructure class provided by Apache Commons Math to compute both the value and the first derivative:and calls the DerivativeStructure class provided by Apache Commons Math to compute both the value and the first derivative: c=a+b In this example, theIn this example, the (c=a+b, c'=a'+b') DerivativeStructureclass uses only the linearity property of differentiation which implies that the derivative of a sum is the sum of the derivatives. Similar rules exist for all arithmetic instructions. The derivative of all basic functions in the Math, StrictMathand FastMathare known. The rules are also known for any derivation order, they are not limited to first order. In fact, Nabla itself does not know any of these rules, all computations are delegated to DerivativeStructure. Lets consider a more extensive example: public class Linear implements UnivariateFunction { public double value(double t) { double result = 1; for (int i = 0; i < 3; ++i) { result = result * (2 * t + 1); } return result; } } In this example, the only things that need to be changed for differentiating the value method are the t parameter and the result local variable which must be adapted to hold the derivative structure, the two multiplications and the addition. In order to generate the new function, Nabla will convert these elements one at a time, starting from the parameter change and propagating the change using a simple data flow analysis. There is no need to analyze the global structure of the code, and no need to change anything in the loop handling instructions above. The method generated for the example above will be roughly similar to the one that would result from compiling the code below (except that Nabla generates bytecode directly, it does not use source at all): public DerivativeStructure value(DerivativeStructure t) { // source roughly equivalent to conversion of the result initialization DerivativeStructure result = new DerivativeStructure(t.getFreeParameters(), t.getOrder(), 1.0); // this loop handling code is not changed at all for (int i = 0; i < 3; ++i) { // source roughly equivalent to conversion of "2 * ..." DerivativeStructure tmpA = t.multiply(2.0); // source roughly equivalent to conversion of "... + 1" DerivativeStructure tmpB = tmpA.add(1.0); // source roughly equivalent to conversion of "result * ..." DerivativeStructure tmpC = result.multiply(tmpB); // source roughly equivalent to conversion of "result = ..." result = tmpC; } // source equivalent to code generated at method exit return result; } This example shows that the instructions conversions have local scope. Another thing that is shown in the previous example is that DerivativeStructure instances appear everywhere a double that depends on the input parameter appears, like the result variable and all the temporary variables. However, double that do not depend on the input parameter like the 2.0 and 1.0 literal constants remain primitive double values. The code above also shows that the generated code does not depend on the derivation order or number or free parameters. In fact, this information is only carried at runtime by the DerivativeStructure instance provided as an unput parameters, and the intermediate instances created on the fly will automatically share these values (see the construction of the result variable and the call to getFreeParameters and getOrder). As shown in the example above, the original method bytecode contains both immutable parts that must be preserved and parts belonging to what we will call the computation path from the t parameter to the result that must be differentiated. The instructions that belong to the computation path and hence that must be converted are identified by a data flow analysis seeded with the t parameter. The first step of this data flow analysis is to link each data element (either stack cell or local variable, as explained in the virtual machine execution model section) with the instructions that may produce it and the instructions that may consume it. This task is realized by the TrackingInterpreter and TrackingValue classes. As explained in the double converted to DerivativeStructure section of the usage documentation, the signature of the value method is changed. The primitive double t parameter in the original method is changed during the differentiation process to a DerivativeStructure instance in the generated derivative. All instructions that used the original primitive double parameter must be changed to cope with the new DerivativeStructure local variables. In order to do this, the representation of the t parameter in the bytecode is marked as pending conversion from one primitive double to a DerivativeStructure. Once this data element has been marked, the data flow will propagate the mark to other data elements (both variables and operand stack cells) thanks to the following rules: doubledata element produced by a changed instruction must be marked as pending conversion from one primitive doubleto a DerivativeStructure What is the validity of this approach? For straightforward smooth functions, the expanded code really computes both the value of the equation and its exact derivatives. This is a simple application of the differentiation rules. So the accuracy of the derivative will be in par with the accuracy of the initial function. If the initial function is a good model of a physical process, the derivative will be a good evaluation of its evolution. If the initial function is only an approximation of a real physical model, then the derivative will be an approximation too, but an approximation that is consistent with the initial function up to computer accuracy. If the initial function is not smooth, the singular points must be analyzed specially. Some design choices are involved which have an impact on validity. These choices have been made in such a way that in some sense, the result is still as valid, as accurate and as consistent with the initial function as for smooth functions. This point is explained in details in the section about singularities.
https://commons.apache.org/sandbox/commons-nabla/internals.html
CC-MAIN-2016-36
en
refinedweb
? Testing plugins, while it can be a bit of a pita with the api being mostlyasync, is definitely doable. If you are familiar with nose tests framework thecode excerpt below is a way you can use it to run unit tests on pluginreload [pre=#0C1021]# Nosetry: import noseexcept ImportError: nose = None try: times_module_has_been_reloaded += 1except NameError: times_module_has_been_reloaded = 0 #reloaded RUN_TESTS = nose and times_module_has_been_reloaded if RUN_TESTS: target = name nose.run(argv= 'sys.executable', target, '--with-doctest', '-s' ]) print '\nReloads: %s' % times_module_has_been_reloaded You'll sometimes find that you want file fixtures loaded into views forfunctional testing. You can do this 'manually' via view.insert() which is oneway to make file loading synchronous. [pre=#0C1021]fixtures = ] def teardown(): while fixtures: v = fixtures.pop() v.window().focus_view(v) v.window().cmd.close() def load_fixture(f, syntax=u'Packages/Python/Python.tmLanguage'): """ Create a View using `window.new_file` fixture and `manually` load the fixture as window.open_file is asynchronous. v = window.open_file(f) assert v.is_loading() It's impossible to do: while v.is_loading(): time.sleep(0.01) This would just cause S2 to block. You MUST use callbacks or co routines. """ view = sublime.active_window().new_file() edit = view.begin_edit() view.set_scratch(1) view.set_syntax_file(syntax) try: with codecs.open(f, 'r', 'utf8') as fh: view.insert(edit, 0, fh.read()) finally: view.end_edit(edit) fixtures.append(view) return view[/pre] You can actually write a scheduler using sublime.set_timeout and python generator co-routines, using some kind of method to emulate keyboard input. On windows, for testing Sublime Text 1 plugins, I used to use the SendKeys module. castles_made_of_sand: How did you get the nose module to be accessible from within sublime? I can't figure out how to make it accessible to the sublime interpreter.
https://forum.sublimetext.com/t/how-do-you-refactor-your-plugins/6988
CC-MAIN-2016-36
en
refinedweb