GRVY-314 was marked as "Won't Fix". However, I still think that the current behaviour is wrong. I'll try to explain why, breaking the observed problem into two parts.

First part - a Java module has the Groovy facet and contains a Groovy class.

Groovy code (GroovyBean.groovy):

    class GroovyBean {
        String someString;
    }

Java client calling the above Groovy class (JavaClient.java):

    public class JavaClient {
        public static void main(String[] args) {
            GroovyBean b = new GroovyBean();
            b.setSomeString("test"); // should be "green" but is "red"
        }
    }

Observed behavior:
1) Code is red: in the Java code, the method "setSomeString" is marked as unresolved.
2) Now compile the Groovy class, attach the compiled version as a library and delete the original source. Result: code is green.

For 'Groovy properties', the Groovy PSI should include a getter and setter, as described here: (section "Property and field rules")
-tt

It is indeed a bug, though your conclusion that the Groovy PSI has to contain accessors does not necessarily follow.

Hello Eugene,
Hmm, would it not make sense if 'synthetic' accessors were present in the PSI? Then a lot of current plugins that work with getters/setters of a (Java) class would transparently work with Groovy classes (containing such special 'groovy properties').
-tt

Hello Eugene,
Is there a ticket already in JIRA? Then I can add myself as 'watcher'.
-tt

I'm not sure they would like synthetic PSI. What it would certainly cause is a performance penalty.

No, please submit one.

One concrete example would be the Spring facet from Selena. As far as I can see, property resolving is based on com.intellij.psi.util.PropertyUtil. At the moment no properties are returned of course, for the reason mentioned in this thread. :)
-tt
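For readers unfamiliar with Groovy properties: a field declared without an access modifier, like someString in the GroovyBean above, is compiled into a private field plus a public getter and setter. The plain-Java class below is roughly what the Groovy compiler emits for GroovyBean (a sketch for illustration only; the real generated class also contains Groovy metaclass plumbing that is omitted here). This is why the Java call b.setSomeString("test") resolves against the compiled class file even though no setter appears in the Groovy source the IDE analyses.

    // Approximately what the Groovy compiler generates for GroovyBean.groovy
    // (metaclass-related members omitted). The accessor pair exists at the
    // bytecode level even though it is never written in the source.
    public class GroovyBean /* implements groovy.lang.GroovyObject */ {
        private String someString;

        public String getSomeString() {
            return someString;
        }

        public void setSomeString(String someString) {
            this.someString = someString;
        }
    }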
https://intellij-support.jetbrains.com/hc/en-us/community/posts/205999069-Java-interop-1-synthetic-accessors-for-Groovy-bean-properties-
CC-MAIN-2020-24
en
refinedweb
I'm going through some refactoring of old Swing code... in the code there is code like this:

    public class Foo {
        private static final JDialog dialog = new JDialog();
        ...
    }

Then everything uses the dialog instance. I think that's just not cool, and would like to make my life easy by clicking on dialog and choosing a refactoring to make it non-static, but have IDEA cascade the changes... I suspect that's rather tough to do considering what would be involved in doing that... but I've seen incredibly complex things from JetBrains, and was wondering if something existed and I'm just not seeing it.
Thanks
R

ps. this would be better if I can do it in 4.5.x since I trust it, but if Irida is the only choice, that's fine as well.

I don't understand what you want. Couldn't you make it non-static by removing "static"? I assume you want something more complex, so I'll show you a general refactoring series I do when I want to change fields.
1. Encapsulate Fields - encapsulates the field into static accessors (getter & setter) so only those two methods reference the field itself
2. Change what's in the getter & setter methods
3. Inline the getter & setter methods
This is a way of replacing all reads and writes of the variable with some custom code. Maybe this is what you want. I would do it like this:

In article <288568632507611772634576@news.jetbrains.com>, Keith Lea <keith@cs.oswego.edu> wrote:

I guess I didn't explain fully. Note that the variable is also final, so no setter there for sure, unless the instantiation is also removed. What I would like to do is remove the static from the member variable, as well as not instantiate it there. Further, since the rest of the methods which are making use of this variable are also static, the change needs to take that into account. So never mind, because I think I have to do it manually and painfully, one step at a time, until I change all of them and reconstruct the code properly.
Thanks
R

Robert,
I think your refactoring is not so trivial that IDEA can handle it automatically. I would go this way:
1) search for usages of the dialog,
2) in methods where it is used, pass it as a parameter,
3) repeat step (2) until you only have a few references, where you want it to be created,
4) encapsulate the field access (aka, create a getter),
5) change the getter to create a new dialog on each invocation,
6) inline the getter.
Tom

In article <d59liu$prj$1@is.intellij.net>, "Thomas Singer (MoTJ)" <I@HateSpam.de> wrote:

Yup. Pretty much. I thought I'd ask... like I said, one never knows what kind of magic JB has done.
R
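To make Tom's getter-based sequence concrete, here is a minimal sketch (not from the thread) of what Foo might look like after steps 4-5, i.e. after Encapsulate Fields and after changing the getter so it no longer hands out a shared static instance. The names and the decision to create a fresh dialog on each call are illustrative only; inlining the getter (step 6) would then push this code out to every former usage site.

    import javax.swing.JDialog;

    public class Foo {
        // Step 4: the static final field has been encapsulated; nothing outside
        // this getter touches it any more, so the field can now be deleted.
        // Step 5: the getter's body was changed to stop using the shared instance.
        private static JDialog getDialog() {
            // Hypothetical replacement behaviour: build a dialog on demand
            // instead of sharing one static instance across the whole class.
            return new JDialog();
        }

        static void showSomething() {
            // Former "dialog.setVisible(true)" call sites now go through the getter;
            // step 6 (Inline Method) would replace getDialog() with its body here.
            getDialog().setVisible(true);
        }
    }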
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206929935-Refactoring-static-members-
CC-MAIN-2020-24
en
refinedweb
CR

CR may refer to:

Nations
- Coral Sea Islands (FIPS country codes and obsolete NATO country code)
- Costa Rica
- Czech Republic

Other political places
- Castle Rock (disambiguation)
- Cedar Rapids, Iowa
- Crawford County, Kansas
- Province of Cremona in Northern Italy

Other places
- College of the Redwoods
- public toilet, washroom, or comfort room
- legislative route (Minnesota) or constitutional route
- county highway, county road, or county route
- CR postcode area

Languages
- Cree language (ISO 639 alpha-2)

Elements and chemicals

Biology
- Chemokine receptor
- complete remission
- Creatine, a molecule found in myocytes

Organizations and movements
- Celtic Reconstructionism
- Český Rozhlas
- Chiltern Railways
- Choose Responsibility
- College Republicans
- Community of the Resurrection, an Anglican religious order
- Resurrectionist Order, Congregation of the Resurrection, a Roman Catholic religious order
- consciousness raising
- Consolidated Rail Corporation (Conrail)

Publications
- Casino Royale (novel), the first James Bond novel
- Comptes rendus - French for reports or proceedings; often used in the title of publications, most notably the Comptes rendus de l'Académie des sciences (Proceedings of the (Paris) Academy of Sciences)
- Consumer Reports, American magazine
- Critical Review (disambiguation), several publications by this name

Sports
- Colorado Rockies (NHL), former NHL team that has since relocated and became the New Jersey Devils
- Colorado Rockies, Major League Baseball team

People
- C. Rajagopalachari
- Chris Rock, American comedian and actor
- Cristiano Ronaldo, Real Madrid and Portugal winger

Companies and products

Technology
- carriage return (often shortened to return), a term used in typing and computing, used to start a new line of text
- challenge-response spam filtering
- Candidate Recommendation (CR), W3C Candidate Recommendation
- cognitive robotics
- compression ratio
- computed radiography
- concentration ratio
- continuing resolution

Concepts
- caloric restriction
- change request
- Chart Rulership or Chart Ruler, in astrology
- classic rock, a radio format
- cognitive radio, an outgrowth of software-defined radio
- conversion rate, used in marketing
- corporate responsibility
- credit (finance), in accounting
- critical reading on the SAT
- critically endangered species
- crore, 10,000,000 in the Hindu-Arabic numeral system

Categories: Disambiguation pages
https://en.academic.ru/dic.nsf/enwiki/165744
CC-MAIN-2020-24
en
refinedweb
import spot
spot.setup()
from IPython.display import display

This notebook shows you different ways in which states or transitions can be highlighted in Spot.

It should be noted that highlighting works using some special named properties: basically, two maps that are attached to the automaton and associate state or edge numbers to color numbers. These named properties are fragile: they will be lost if the automaton is transformed into a new automaton, and they can become meaningless if the automaton is modified in place (e.g., if the transitions or states are reordered). Nonetheless, highlighting is OK to use right before displaying or printing the automaton. The dot and HOA printers both know how to represent highlighted states and transitions.

a = spot.translate('a U b U c')

The # option of print_dot() can be used to display the internal number of each transition.

a.show('.#')

Using these numbers you can selectively highlight some transitions. The second argument is a color number (from a list of predefined colors).

a.highlight_edges([2, 4, 5], 1)

Note that these highlight_ functions work for edges and states, and come with both singular (changing the color of a single state or edge) and plural versions. They modify the automaton in place.

a.highlight_edge(6, 2).highlight_states((0, 1), 0)

The plural version can take a list or tuple of state numbers (as above) or of Booleans (as below). In the latter case the indices of the True values give the states to highlight.

a.highlight_states([False, True, True], 5)

print(a.to_str('HOA', '1'))
print()
print(a.to_str('HOA', '1.1'))

HOA: v1
States: 3
Start: 2
AP: 3 "a" "b" "c"
acc-name: Buchi
Acceptance: 1 Inf(0)
properties: trans-labels explicit-labels state-acc deterministic
properties: stutter-invariant terminal
--BODY--
State: 0 {0}
[t] 0
State: 1
[2] 0
[1&!2] 1
State: 2
[2] 0
[!0&1&!2] 1
[0&!2] 2
--END--

HOA: v1.1
States: 3
Start: 2
AP: 3 "a" "b" "c"
acc-name: Buchi
Acceptance: 1 Inf(0)
properties: trans-labels explicit-labels state-acc !complete
properties: deterministic stutter-invariant terminal
spot.highlight.states: 0 0 1 5 2 5
spot.highlight.edges: 2 1 4 1 5 1 6 2
--BODY--
State: 0 {0}
[t] 0
State: 1
[2] 0
[1&!2] 1
State: 2
[2] 0
[!0&1&!2] 1
[0&!2] 2
--END--

One use of this highlighting is to highlight a run in an automaton. The following few commands generate an automaton, then an accepting run on this automaton, and highlight that accepting run on the automaton. Note that a run knows the automaton from which it was generated, so calling highlight() will directly decorate that automaton.
b = spot.translate('X (F(Ga <-> b) & GF!b)'); b

r = b.accepting_run(); print(r)

Prefix:
  4
  | 1
  0
  | !a & !b
Cycle:
  1
  | !b {0}

r.highlight(5)  # the parameter is a color number

The call of highlight(5) on the accepting run r modified the original automaton b:

b

def show_accrun(string):
    aut = spot.automaton(string)
    run = aut.accepting_run()
    run.highlight(5)
    display(aut)

show_accrun("""
HOA: v1
States: 10
Start: 0
AP: 2 "a" "b"
Acceptance: 5 (Inf(1) | (Fin(0) & Inf(4)) | Fin(2)) & Fin(3)
properties: trans-labels explicit-labels trans-acc
--BODY--
State: 0
[0&1] 9 {3}
[!0&!1] 0 {3}
[0&!1] 5 {0 1}
State: 1
[0&!1] 9 {4}
[0&1] 8 {3}
State: 2
[!0&!1] 8 {0}
[!0&1] 6 {2 4}
[0&1] 2 {3}
[!0&1] 7
State: 3
[0&!1] 2 {0 4}
[!0&!1] 3 {1 3}
[!0&1] 4 {0}
State: 4
[0&!1] 5 {2}
[0&1] 0
[!0&1] 1 {0}
State: 5
[!0&!1] 0 {3}
[!0&!1] 6 {3}
State: 6
[0&1] 3 {2}
[!0&1] 1
[0&1] 2 {0 1 3 4}
State: 7
[0&1] 1
[!0&1] 7 {0 2}
State: 8
[!0&1] 7
[!0&!1] 9 {0}
State: 9
[0&1] 8 {3}
[0&!1] 5
[0&!1] 1
--END--
""")

show_accrun("""
HOA: v1
States: 10
Start: 0
AP: 2 "a" "b"
Acceptance: 6 Fin(5) & ((Fin(1) & (Inf(3) | Inf(4))) | Fin(0) | Fin(2))
properties: trans-labels explicit-labels trans-acc
--BODY--
State: 0
[0&1] 8 {0}
[0&!1] 6 {2}
State: 1
[!0&1] 9 {0 4 5}
State: 2
[!0&1] 1
State: 3
[0&!1] 3 {2}
[0&1] 4 {3 5}
State: 4
[0&1] 7 {5}
[0&!1] 9 {2}
[!0&1] 0 {0 2}
State: 5
[!0&1] 1
[!0&1] 3 {2 3}
State: 6
[0&!1] 8 {1 2 5}
[!0&1] 7 {3}
State: 7
[0&1] 2 {0}
[!0&1] 5
State: 8
[0&!1] 3 {4 5}
State: 9
[!0&1] 3 {1 2}
[0&1] 1 {4}
[0&!1] 5 {2}
--END--""")

show_accrun("""
HOA: v1
States: 4
properties: implicit-labels trans-labels no-univ-branch deterministic complete
tool: "ltl2dstar" "0.5.4"
name: "i G F a G F b"
comment: "Union{Safra[NBA=2],Safra[NBA=2]}"
acc-name: Rabin 2
Acceptance: 4 (Fin(0)&Inf(1))|(Fin(2)&Inf(3))
Start: 0
AP: 2 "a" "b"
--BODY--
State: 0 {0}
1 0 3 2
State: 1 {1}
1 0 3 2
State: 2 {0 3}
1 0 3 2
State: 3 {1 3}
1 0 3 2
--END--
""")

left = spot.translate('a U b')
right = spot.translate('GFa')
display(left, right)

prod = spot.product(left, right); prod

run = prod.accepting_run(); print(run)

Prefix:
  1,0
  | !a & b
Cycle:
  0,0
  | a {0}

run.highlight(5)

# Note that by default project() needs to know on which side you project, but it cannot
# guess it. The left-side is assumed unless you pass True as a second argument.
run.project(left).highlight(5)
run.project(right, True).highlight(5)
display(prod, left, right)

The projection also works for products generated on-the-fly, but the on-the-fly product itself cannot be highlighted (it does not store states or transitions).

left2 = spot.translate('!b & FG a')
right2 = spot.translate('XXXb')
prod2 = spot.otf_product(left2, right2)  # Note "otf_product()"
run2 = prod2.accepting_run()
run2.project(left2).highlight(5)
run2.project(right2, True).highlight(5)
print(run2)
display(prod2, left2, right2)

Prefix:
  0 * 3
  | a & !b
  1 * 2
  | a {0}
  1 * 1
  | a {0}
  1 * 0
  | a & b {0}
Cycle:
  1 * 4
  | a {0,1}

b = spot.translate('X (F(Ga <-> b) & GF!b)')
spot.highlight_nondet_states(b, 5)
spot.highlight_nondet_edges(b, 4)
b

As explained at the top of this notebook, named properties (such as highlights) are fragile, and you should not rely on them being preserved across algorithms. In-place algorithms are probably the worst, because they might modify the automaton and ignore the attached named properties. randomize() is one such in-place algorithm: it reorders states or transitions of the automaton. By doing so it renumbers the states and edges, and that process would completely invalidate the highlighting information.
Fortunately randomize() knows about highlights: it will preserve highlighted states, but it will drop all highlighted edges.

spot.randomize(b); b

For simplicity, rendering of partial automata is actually implemented by copying the original automaton and marking some states as "incomplete". This also allows the same display code to work with automata generated on-the-fly. However, since there is a copy, propagating the highlighting information requires extra work. Let's make sure it has been done:

spot.highlight_nondet_edges(b, 4)  # let's get those highlighted edges back
display(b, b.show('.<4'), b.show('.<2'))

For deterministic automata, the function spot.highlight_languages() can be used to highlight states that recognize the same language. This can be a great help in reading automata. States with a colored border share their language, and states with a black border all have a language different from all other states.

aut = spot.translate('(b W Xa) & GF(c <-> Xb) | a', 'generic', 'det')
spot.highlight_languages(aut)
aut.show('.bas')
https://spot.lrde.epita.fr/ipynb/highlighting.html
CC-MAIN-2020-24
en
refinedweb
Okay, so right now I am working on my 2nd program ever. I just started Java a few days ago and programming overall only a week ago, so a lot of information is very foreign to me. What I've been doing is following some basic tutorials, and with each section I learn, I make my own program to test my own abilities and understanding. This means that I am trying to work out the problems of the programs I want to make with fewer resources (since I have not covered many topics yet).

I am now working on a very minor card game. (It will simply be executed with the Run function of Eclipse. There will be no special graphics or anything.)

What I thought would be a good idea was to create a class for the characteristics required of each card:

    package cardGame;

    public class cards {
        String name;
        int traitBarrier;
        String traitCardType;
        int cardNumber;

        public void setName(String n) {
            name = n;
        }
        public String getName() {
            return name;
        }
        public void setTraitBarrier(int b) {
            traitBarrier = b;
        }
        public int getTraitBarrier() {
            return traitBarrier;
        }
        public void setCardNumber(int c) {
            cardNumber = c;
        }
        public int getCardNumber() {
            return cardNumber;
        }
        public void setTraitCardType(String t) {
            traitCardType = t;
        }
        public String getTraitCardType() {
            return traitCardType;
        }
    }

Then I created another class to database all the created cards (referencing the previous class):

    package cardGame;

    public class cardLibrary {

        public static void main(String[] args) {
            cards card1 = new cards();
            cards card2 = new cards();
            cards card3 = new cards();
            cards card4 = new cards();
            cards card5 = new cards();
            cards card6 = new cards();
            cards card7 = new cards();
            cards card8 = new cards();
            cards card9 = new cards();
            cards card10 = new cards();

            card1.name = "Flames Elemental";
            card1.cardNumber = 1;
            card1.traitCardType = "Fr";
            card1.traitBarrier = 3;

            card2.name = "Earth Elemental";
            card2.cardNumber = 2;
            card2.traitCardType = "Ert";
            card2.traitBarrier = 5;

            card3.name = "Aquatic Necromancer";
            card3.cardNumber = 3;
            card3.traitCardType = "Wtr";
            card3.traitBarrier = 4;

            card4.name = "Soul Hunter";
            card4.cardNumber = 4;
            card4.traitCardType = "Dk";
            card4.traitBarrier = 4;

            card5.name = "Crusading Monk";
            card5.cardNumber = 5;
            card5.traitCardType = "Lit";
            card5.traitBarrier = 6;

            card6.name = "Explosion Vigilante";
            card6.cardNumber = 6;
            card6.traitCardType = "Fr";
            card6.traitBarrier = 2;

            card7.name = "Gate Keeper";
            card7.cardNumber = 7;
            card7.traitCardType = "Ert";
            card7.traitBarrier = 8;

            card8.name = "Priestess of the Lake";
            card8.cardNumber = 8;
            card8.traitCardType = "Wtr";
            card8.traitBarrier = 3;

            card9.name = "Jr. Assailant";
            card9.cardNumber = 9;
            card9.traitCardType = "Dk";
            card9.traitBarrier = 1;

            card10.name = "Divine Alchemist";
            card10.cardNumber = 10;
            card10.traitCardType = "Lit";
            card10.traitBarrier = 3;
        }
    }

Finally, in my main class… I seem to be confused on how exactly I can call the cards from my cardLibrary class into my main class to create the deck. Did I skip some steps that were required for me to be able to do this?
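A common answer to this question (not part of the original post): code inside cardLibrary's main method is local to that method, so it cannot be "called" from another class. Instead, the library class can expose a method that builds and returns the cards, and the main class can call that method. The sketch below shows one way to do this using the poster's existing cards class; the method name buildDeck, the use of an ArrayList, and the class name cardGameMain are illustrative choices, not the only option.

    package cardGame;

    import java.util.ArrayList;
    import java.util.List;

    public class cardLibrary {

        // Instead of building the cards inside main(), expose a method that
        // returns them, so any other class can ask for the full card list.
        public static List<cards> buildDeck() {
            List<cards> deck = new ArrayList<>();

            cards card1 = new cards();
            card1.setName("Flames Elemental");
            card1.setCardNumber(1);
            card1.setTraitCardType("Fr");
            card1.setTraitBarrier(3);
            deck.add(card1);

            // ... repeat for the remaining cards ...

            return deck;
        }
    }

The main class can then obtain the deck like this:

    package cardGame;

    import java.util.List;

    public class cardGameMain {
        public static void main(String[] args) {
            List<cards> deck = cardLibrary.buildDeck();
            for (cards c : deck) {
                System.out.println(c.getCardNumber() + ": " + c.getName());
            }
        }
    }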
http://www.javaprogrammingforums.com/object-oriented-programming/34859-classes-classes-%5Bvery-beginner-asking-question%5D-%5Bw-codes%5D.html
CC-MAIN-2017-47
en
refinedweb
Introduction: In the first part of the tutorial series we got a glimpse of MVC. In this part we'll focus on the practical implementation of the MVC pattern. I don't need to explain the theory of MVC, as we already covered that in the previous part of the article.

Our Roadmap: We stick to our agenda as follows.

Topics to be covered:
1. Creating MVC project from scratch.
2. Adding Controllers, Views and Models.
3. Creating sample database and use LINQ to SQL for communication.
4. Perform CRUD operations in MVC application using LINQ to SQL.
5. Understand ViewData, ViewBag and TempData.
6. Model Validation by System.ComponentModel.DataAnnotations.

1. Creating MVC project:

Step 1: Open Visual Studio 2010/2013 (I am using 2010). Go to File => New => Project and select ASP.Net MVC3 Web Application, as shown below. Name the application LearningMVC.

Step 2: A project template selection window will be opened; select Empty in that. Select View Engine as Razor and press OK.

Step 3: Now our solution is ready with an empty MVC application. We can clearly see that the solution contains some extra folders in comparison to a traditional ASP.Net web application. We got Models, Views and Controllers folders, and a Shared folder inside the Views folder. The folders, as their names denote, are used to hold the respective MVC players: models, views and controllers. The Shared folder in Views contains _Layout.cshtml, which can be used as the master page for the views we create.

We also see the global.asax file that contains a default routing table, which defines the route to be followed when a request comes in. It says that when a request comes to the Home controller, the Index action of that Home controller has to be called. Actions are the methods defined in controllers that can be called by defining a route; action methods can also contain parameters. In the above-mentioned figure, the Home controller has an Index action which takes an optional parameter id.

When we run our application, we get something as shown below. It says that the resource we are looking for cannot be found. The request by default follows the default route mentioned in global.asax, i.e. go to the Home controller and invoke the Index method. Since we don't have either of these yet, the browser shows this error. Never mind, let's make the browser happy.

2. Adding Controllers, Views and Models:

Step 1: Create a My controller by right-clicking on the Controllers folder and adding a controller named My; add the controller with empty read/write actions.

Step 2: We can see that we have Actions, but they return a View, so we need to create Views for them. But before this we'll create a Model named User for our Views. Right-click on the Models folder and add a class named User. Add the needed properties to the User class (a sketch of this model class appears below). Now our model is created and we can create Views bound to this particular model.

Step 3: Add the Views in the Views folder under the My folder (auto-created as per the controller's name). This is to maintain the particular folder structure MVC expects, so that we don't have to carry the overhead of maintaining it ourselves. Now we have a controller as well as Views, so if we run the application we get,

3. Creating sample database and use LINQ to SQL for communication.

Our MVC application is ready, but rather than displaying dummy data, I would go for running the application against a database so that we can cover a wider aspect of the application.

Step 1: Create a database; the script is given in the attachment, just execute it over SQL Server 2005/2008.
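Since the original screenshots showing the User model's properties are not reproduced here, the following C# sketch shows a plausible shape for the Models/User.cs class mentioned in section 2. Only FirstName is confirmed by the article (it reappears in the validation section), so treat the other members as placeholders rather than the article's exact property list.

    // Models/User.cs - illustrative sketch; property names other than FirstName
    // are assumptions, not taken from the original article.
    namespace LearningMVC.Models
    {
        public class User
        {
            public int UserId { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string EMailId { get; set; }
            public string Address { get; set; }
        }
    }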
Step 2: Add a new item to the solution and select LINQ to SQL Classes; call it MyDB.dbml. Our solution looks like,

Step 3: Open the Server Explorer of Visual Studio and open a connection by providing the server name and existing database name in Server Explorer's Add Connection window. Click OK. Our solution looks like,

Step 4: Drag the User table to the dbml designer window; we get the table in class-diagram format in the designer window. When we open MyDB.designer.cs, we get the MyDBDataContext class. This class holds the database's User table information in the form of a class and properties. For every column of the table, a property is created in the class, and we can use these properties to get/set values from/in the database.

4. Perform CRUD operations in MVC application using LINQ to SQL.

We now have a database, a context class to talk to the database, and an MVC application to perform CRUD operations in the database using the context class. (A sketch of the Index and Create actions described in this section appears after the ViewData/ViewBag notes below.)

Step 1 Read:
i) Go to the Index action and make an instance of the context class. We can get all the tables and column names through that context instance.
ii) Make a query to display all the records on the Index view.
iii) Populate the User model that we created earlier and pass it to the Index view (the Index view will use the List item template).
When we run the application, we get an empty list, i.e. we don't have records in the database yet.

Step 2 Create:
i) First write code for creating a user; on the first request, the Get action of Create always returns an empty view.
ii) When we post some data on clicking the submit button of Create, we need to make a data entry in the table to create a new user.
iii) When the form is posted, it fires the Post action of Create with the User model properties already bound to the view fields; we retrieve these model properties, make an instance of the context class, populate the context User and submit it to the database.
iv) Redirect the action to Index, and now a record will be shown on the Index view. We successfully created a user. :)

Step 3 Update & Step 4 Delete: Now we are smart enough to perform update and delete by ourselves; this I leave to the reader.

5. Understand ViewData, ViewBag and TempData.

I wanted to take this topic as there is much confusion regarding these three players. MVC provides us ViewData, ViewBag and TempData for passing data between controller, view and even the next request. ViewData and ViewBag are similar to some extent, but TempData performs additional roles. Let's get the key points on these three players.

ViewBag & ViewData: I have written sample test code in the same application which we are following from the beginning.
- Populate ViewData and ViewBag in the Index action of the My controller.
- Code in the View to fetch ViewData/ViewBag.
- When we run the application, we get on screen,

Following are roles and similarities of ViewData and ViewBag:
- Maintain data when we move from controller to view.
- Pass data from controller to the respective view.
- Their value becomes null when any redirection occurs, because their role is to provide a way to communicate between controllers and views. It's a communication mechanism within the server call.

Differences between ViewData and ViewBag (taken from a blog):
- ViewData is a dictionary of objects that is derived from the ViewDataDictionary class and accessible using strings as keys.
- ViewBag is a dynamic property that takes advantage of the new dynamic features in C# 4.0.
- ViewData requires typecasting for complex data types and a check for null values to avoid errors.
- ViewBag doesn't require typecasting for complex data types.
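Before continuing to TempData, here is the sketch of the Read and Create actions promised above. The original article showed this code only as screenshots, so this is a hedged reconstruction: it assumes the generated MyDBDataContext, a table property named Users (the designer may generate User instead, depending on pluralization settings), and the placeholder User model properties from the earlier sketch. The article's actual code may differ in names and details.

    // Controllers/MyController.cs - a sketch of Step 1 (Read) and Step 2 (Create).
    using System.Linq;
    using System.Web.Mvc;

    namespace LearningMVC.Controllers
    {
        public class MyController : Controller
        {
            // GET: /My/  (Step 1 - Read)
            public ActionResult Index()
            {
                var context = new MyDBDataContext();

                // Copy each LINQ to SQL entity into the view model from Models/User.cs.
                var users = context.Users
                                   .ToList()
                                   .Select(u => new Models.User
                                   {
                                       UserId = u.UserId,
                                       FirstName = u.FirstName,
                                       LastName = u.LastName
                                   })
                                   .ToList();

                return View(users);
            }

            // GET: /My/Create  (Step 2 - returns an empty form on the first request)
            public ActionResult Create()
            {
                return View();
            }

            // POST: /My/Create  (the model binder fills userDetails from the posted form)
            [HttpPost]
            public ActionResult Create(Models.User userDetails)
            {
                var context = new MyDBDataContext();

                // "User" here is the entity class generated by LINQ to SQL for the
                // User table (in MyDB.designer.cs), not the view model above.
                var dbUser = new User
                {
                    FirstName = userDetails.FirstName,
                    LastName = userDetails.LastName
                };

                context.Users.InsertOnSubmit(dbUser);
                context.SubmitChanges();

                return RedirectToAction("Index");
            }
        }
    }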
TempData: TempData is a dictionary derived from the TempDataDictionary class and stored in a short-lived session. It has string keys and object values. It keeps the information for the current HTTP request and the next one, which means it survives only from one page to another. It helps maintain data when we move from one controller to another.

I added a TempData entry in the Edit action as,

    [HttpPost]
    public ActionResult Edit(int? id, User userDetails)
    {
        TempData["TempData Name"] = "Akhil";
        …..

And when the view was redirected to the Index action, I got the TempData value across actions.

6. Model Validation:

We can have many methods for implementing validation in our web application: client side, server side, etc. But MVC provides a feature with which we can annotate our model for validation by writing just one or two lines of code.

Go to the Model class User.cs and add [Required(ErrorMessage = "FirstName is required")] on top of the FirstName property.

Use the System.ComponentModel.DataAnnotations namespace when doing Model Validation. This is the namespace that holds the classes used for validation.

Conclusion: Happy Coding. :)

Read more:
- C# and ASP.NET Questions (All in one)
- MVC Interview Questions
- C# and ASP.NET Interview Questions and Answers
- Web Services and Windows Services Interview Questions
http://csharppulse.blogspot.com/2013/08/learning-mvc-part-2-creating-mvc.html
CC-MAIN-2017-47
en
refinedweb
Plan 9 From Bell Labs Operating System Now Available Under GPLv2 223 TopSpin." Still holds up (Score:5, Insightful) A model for consistency and simplicity. It validated the concepts underlying Unix, and influenced modern Linux/BSD. It also didn't hurt that they had some category-1 geniuses working on it - Kernighan, Ritchie, Duff, etc. I find it interesting (Score:4, Insightful). Re:I find it interesting (Score:5, Insightful) You're a file. Re:I find it interesting (Score:5, Funny) Your mom is a bmp. Re: (Score:2) No she's a spreadsheet. Touch my file, head or tail, permission (Score:2) Touch my file, head or tail, I give you permission. ** file porn ** Re: (Score:3) With a sticky bit. Re: (Score:2) Re:I find it interesting (Score:5, Interesting). Sad that they held on to it just long enough for it to become irrelevant. Anything unique that it had to offer has probably been done in other ways. I suspect that between various BSDs and Linux versions that the concept of everything being a file has pretty much reached its logical endpoint. Eventually you have to talk to highly interactive hardware with massively parallel threads and then the paradigm starts to become unhinged, and you spend more time trying to defend and extend the paradigm than anything else. So how is it irrelevant? I don't get your point (Score:3) How so? How would this abstraction fall down with cluster computing or GPU processing? Are you suggesting going back to addressing everything by memory location like we used to do or do you have a different suggestion? Re: (Score:3) Eventually you have to talk to highly interactive hardware with massively parallel threads What does parallelism have to do with anything? The only argument against everything's-a-file is overhead, not complexity. Re: (Score:2) Eventually you have to talk to highly interactive hardware with massively parallel threads What does parallelism have to do with anything? The only argument against everything's-a-file is overhead, not complexity. Exactly. I'd like to see more exploration of something like Kahn process networks [wikipedia.org] as a fundamental programming abstraction; it seems to me that we need to be thinking of programs, filesystems and networks as examples of the same thing. Our networks are becoming software-defined (especially in virtualisation), our chipsets are compiled from languages like VHDL, our programs are becoming parallelised, and our filesystems are starting to grow virtual nodes and do processing. Seems dumb to be maintaining multip Re: (Score:2) Re: (Score:2) Re: (Score:3)? Re: (Score:2)? At last someone who understands, unlike the other clowns posting replys to this thread. There are some things that just don't work as files, X11 being one obvious example, along with sockets, handles to data structures, entry points (methods) to loaded modules, etc. You can force these things into that mold, but you do so more as an intellectual exercise, at the expense of efficiency and speed, (and largely just to prove a point). There is no rational reason to carry the everything-is-a-file concept any furt Re: (Score:3) Oh bullshit. Sockets (and named pipes and serial ports; heck, even anonymous pipes) are the best example of leveraging files. Sockets ARE files. Files with funny names, a necessarily special way to open, and what amount to a bunch of ioctls to tend to details. The main interaction is by read() and write(), exactly the same as files. 
Monitoring for data to appear is done by select(), the same as for any "file" whose input data s Re: (Score:2) A truly 'everything is a file' Unix would implement BSD sockets and X11 windows as files, just for a start. Can you do that on Linux yet? The problem with this is when does it end? How about menus? How about browser tabs? How about form fields in a browser? How about layers and pixels in gimp? How about paths in SVG? Why can't I just cp a tab from chrome to firefox and have the page magically open? Oh, with the forms filled out with the same data if I use -r. The problem is that this really only works if every application on the OS ends up being completely standardized, which means you can't add a feature to anything until you add it to e Re: (Score:2) I see your file, and raise you a redirection. Anyone who has had to write a simple ">" or "|> in JCL will know what I mean. Re:I find it interesting (Score:5, Informative). Funny that you mention that, because systemd exposes lots of features through cgroups and a nice filesystem on /sys. And to use systemd's journal's files, the documentaion already explains that you just open the files, memory map them, and use inotify, a classic notification API on files... Re: (Score:3) And that's how you change minds. The more I learn about systemd and journald, the less my knee jerks against them. Re:I find it interesting (Score:4, Interesting) I like the idea how everything is a file (...) "geeks" who don't care anymore about open file system and results are like systemd journalctl. It's part good, part "when all you have is a hammer, everything looks like a nail". What happens is that you put a lot of very structured information into an unstructured format, then "reverse engineer" the structure on demand. To take a trivial example with log files, pretty much every log entry has a timestamp. Now we could store this in plaintext and use grep, or we could store this in a database and use "SELECT * FROM logentries WHERE timestamp BETWEEN '2014-01-14' AND '2014-01-15'". Particularly if you got other timestamps stored in the same file you start reinventing columns based on position or markers. On the good side we now have metadata, a language designed for structured queries, indexing, the ability to implement ACID compliance and an easy means to join information from different sources, on the bad side it's no longer plain text, we depend on a running database service and database corruption could potentially render everything unusable. But then again, so could file system corruption. From what I gather that's pretty much what systemd does and journalctl is kind of like SQL for systemd. That said, it seems like an "almost SQL" implementation with its own limited language, personally I'd rather go with a proper implementation like SQLite but maybe there's some gotchas there I haven't thought about, in particular it seems clients can define their own log fields on the fly which would require a little dynamic DDL but I don't see any showstoppers. In particular I notice they only have text and binary fields, you can't say that something is an integer or date field so you could filter on them more intelligently. Re: (Score:2) Re: (Score:2) Or run to a proprietary dungeon like OSX or Windows. "Proprietary dungeon". :D That was funny. Re: (Score:3) Pry my init scripts from my cold, dead hands. 
Re: (Score:2, Offtopic) How 'bout I stick with my nice stable Slackware, and you whipper-snappers can use whatever Windows-clone-oh-but-it-runs-Linux-under-the-hood you want? That do it for ya, plucky? Pffft, arguing about changing something that hasn't broken. "Hey, my left arm works just fine, but I really think I should cut it off and get a shiny new model!" Re: (Score:3, Insightful) As a fellow Slackware user I echo you sentiment but I kinda suspect we are going to end up with Systemd. Even some comments Volkerdi has made reflect that. Now that some big dominos like Debian have toppled its probably over. To much of the user land is ending up with Systemd as a hard dependency. Because of the Systemd spawns processes and tracks things the daemons themselves have to get modified which makes them all require Systemd. udev and udisks getting the shotgun wedding treatment to Systemd as w Re: (Score:3, Insightful) What world do you live in where modern SysV init isn't broken? Hell, the old approach, where everything went in inittab and then init(1) supervised processes, starting things up when they failed, was closer to right than the approach taken by "modern" distros is, where you have everyone trying their own mechanism at self-daemonization and absolutely Re: (Score:2, Offtopic) Can we get a mod up for this guy? Seriously - init scripts are a hack. They've always been a hack. Just because they're a hack you're comfortable with doesn't mean it's the "right way" to do it. Re:On Debian that's allready done. (Score:4, Insightful) If you have daemons that keep falling over and needing restart, you're already at the hack stage. But going to something that can't decide if it's a dessert topping or a floor wax is not the right answer. Re: (Score:2) If you have daemons that keep falling over and needing restart, you're already at the hack stage. What do you mean IF, it just happens from time to time for a variety of reasons. This is an incredibly basic problem in multiprocess systems. It's like saying IF your computer crashes and needs to be restarted... in a datacenter, it's a matter of WHEN. In both cases, absent an expected, non-rectified reason for them to crash, the immediate action for a human operator is... try restarting it. If the dependancies are programmatically declared (a Good Thing in itself), we can automate this. It's not a hack, be Re: (Score:2) Wow, talk about lowered expectations. I run several machines and restarts are for when you reconfigure. Very rarely, something might get into a bad state, but in those cases the process tends to still be running and so an automatic restart won't help. Often, fixing the configuration will help. Sure, guano occurs and it's natural to try restarting. If it never happens again, chalk it up to sunspots. If it keeps happening, something 's wrong and automating the restart isn't a good answer. If your car kept stall Re: (Score:2) Then "the hack stage" is the state of the world when you're operating at any significant scale. You have thousands of machine in the field? Guess what -- some of them are going to hit bizarre race conditions. Some of them are going to be targets of successful DOS attacks that crash your daemons. Some of them will have iffy memory in a way that's only visible when it gets poked in just the wrong way. One way or an Re:On Debian that's allready done. (Score:4, Insightful) This is an incredibly basic problem in multiprocess systems. It's like saying IF your computer crashes and needs to be restarted... 
in a datacenter, it's a matter of WHEN. Except that in today's hostile Internet, WHEN that broken Internet-facing process crashes it WILL be because it was pwned by shellcode, and if that process had write access to core files, your entire server is now rooted. If that process also had any read or write credentials to your local network, your entire data center possibly just got rooted also. Are you _really_ saying that the appropriate thing to do in that situation is to simply restart the process and continue? You'd be better to flash-wipe and reinstall at least the entire server node, and probably also change all your internal administration passwords. Otherwise, you're an infosec disaster waiting to happen. You're fighting a full-scale hot cyberwar out there, don't forget. It's no longer 1970. You don't have the luxury of trusting that incoming packets come from universities and defense contractors with administrators you can chew out with a phone call when they misconfigure stuff by accident. NSA owns the wires and your packets come direct from the Russian Mafia and Syrian Electronic Army. It's not a hack, because machines are NEVER perfect. It's totally a hack, and _because_ machines are never perfect you'd better be 150% certain that every single step in your error-recovery process is double and triple checked and accounts for every possible side-effect of executing evil x86 machine code with root permissions. Look, we both agree that Murphy rules. And you're right to say 'because random stuff happens, I need an overseeing process to automatically fix it'. But auto-restarting pwned services is not that fix, anymore, and it really hasn't been since 1999. Re: (Score:2) "Several machines" meaning what? 5? 10? 30? Try running 10,000 systems at-load; bonus points if your production system is at a substantially larger scale than your upstream vendors' test labs. You can't afford to look at things manually when something goes belly-up -- not immediately -- so you do an automated remediation, log everything you can, and have a human look at the outliers. If you look at the big b Re: (Score:2) Then "the hack stage" is the state of the world when you're operating at any significant scale. And that's why every week we have reports of major data centers being hacked. This is not a sustainable course for the global Internet. Eventually, people are going to die from infosec disasters. (In drone warfare, they already have, but that's also a political problem.) Yes, we'll always have bugs. But we have to get to the point where we have zero tolerance for _preventable_ bugs, such as machine code level crashes. Raw x86 code is simply too unsafe to run at any speed on the Internet; it gives no fundamen Re: (Score:2) I agree with almost everything you're saying here. However, none of that is an excuse for building process supervision infrastructure as a house of cards. Even building higher-level systems in functional languages with provably correct code, I've seen underlying layers blow up (hello, Erlang... though back in the day, I had much more than my share of JVM failures too... and CRuby, and others). Doing things right at the higher levels doesn't negate the need for doing things right at the lower levels -- defense Re: (Score:2) Look, we both agree that Murphy rules. And you're right to say 'because random stuff happens, I need an overseeing process to automatically fix it'. But auto-restarting pwned services is not that fix, anymore, and it really hasn't been since 1999. 
Sure, but process supervision is still part of the solution, and SysV still doesn't get you there (remember, the argument that set off this whole thread was "don't fix what ain't broke" in reference to SysV init). If you want to set something up to nuke-and-pave any Re: (Score:2) If you're crashing on memory corruption, you're also serving garbage due to memory errors. Perhaps you should consider going to ECC if it's happening that often. If a DOS attack takes the daemon out, it's got bugs. It's understood that a DOS attack may cause it to not get to requests in a timely manner but it shouldn't actually crash. Bizarre race conditions? That's another word for bug. What happens when the same memory corruption and race conditions send the daemon chasing it's tail but not actually termin Re: (Score:2) I work at both ends of the spectrum. These days it's mostly embedded systems (no crashes allowed) but I also do HPC clusters (if they rely on automatic restart to stay up, people will not be amused). Beyond that, I run a mixed bag of servers from old beaters to shiny and new. I don't do a lot of service restarting. If I do have to restart a service, it is a bug, pure and simple. Facebook's poor code quality is legendary. As for the rest, for reasons I have detailed you need more than systemd. You have to act Re: (Score:2) If you have daemons that keep falling over and needing restart, you're already at the hack stage. Sounds like a great argument for why we don't need pre-emptive multitasking. If a process doesn't yield time, just don't run it! It is called defense in depth. Yes, an application that crashes is broken. That doesn't mean that an OS that can't restart it isn't also broken. Re: (Score:3) Err, sorry, but Debian people don't say things like that. It has indeed be decided. If you don't like it, you do apt-get install sysvinit yourself. Hot grits (Score:5, Funny) I'm running Plan9 in a VM hosted on Hurd (sorry, that's GNU/Hurd) on a computer I made on a 3D printer that I bought with bitcoins. Meanwhile, in Soviet Russia Bennet Haselton is waiting for a long pompous article about how everyone else is wrong and beta is great written by ME!!!! Re: (Score:2) Can I get an invitation to your birthday party? Your hipness is off the scale! Re:Hot grits (Score:4, Funny) Re:Hot grits (Score:4, Funny) Pwning all your base Found dead, manscaping With soap on his face? Burma Shave Re: (Score:2) Re: (Score:2) Beowulf clusters are like *totally* last decade. I probably should have worked the cloud in, though. The link is a license (Score:5, Informative) Re:The link is a license (Score:5, Informative) Or if you're looking for a live image [bell-labs.com] to play with... Slashdotted... (Score:2) Was going to run the liveCD just cause, can't get in (too busy try again later). But Everything's Not A File (Score:2) Re: (Score:2) I'ts filizing, not anthropomorphizing. Dead end (Score:2, Interesting) I'm by no means a plan 9 expert, but as far as I see, the paradigm that everything is a byte stream is a bit of a dead end idea. Something like everything is an object or some such paradigm is much more interesting. Sure, UNIX and it's ilk, with everything as a byte stream was a great advance on what came before. But a stream of bytes is inherently too low an abstraction to build everything on. Waiting for the day when an object database or something like it is at the heart of a modern popular OS. Re: (Score:2) You'll wait a long time: that's been tried and it doesn't work. 
There are simply too many conflicting demands placed on databases, and any OS that favors one over the other immediately makes itself irrelevant for a large chunk of possible applications. Re:Dead end (Score:5, Funny) What about an OS where everything is a potato? I tried that once. Unfortunately when I ran it full multitasking on a multicore processor, the timeslicing just left me with a bag of chips.... Re: (Score:2). Re: (Score:3) Re: (Score:2) Didn't Debian try that already? Then they got a woody. Re:Dead end (Score:4, Insightful) "Those who don't understand Unix are condemned to reinvent it, poorly." Re:Dead end (Score:5, Insightful) oddly enough Plan 9 is from the guys who invented Unix who were trying to reinvent it. unix multics (Score:2) Re: (Score:2) been there (Score:2) Waiting for the day when an object database or something like it is at the heart of a modern popular OS. been around for nearly 2 decades now: look up os/400 and os/2, two very fine and different implementations of what you just asked for. both got trampled into oblivion so, ok, you could argue about the "popular" thing. i'd say you really are asking to much. Re: (Score:2) Re: (Score:2) Database at the heart of the OS? I think you're talking about IBM's approach with the AS/400. Re: (Score:3) Waiting for the day when an object database or something like it is at the heart of a modern popular OS. That is basically what Smalltalk was (except not that popular). When Apple went to Xerox they copied the look and made it popular, but they didn't really understand the implementation at the time. Re: (Score:2) Objects can be serialized and the result looks like a file. More generally, everything is a namespace/filesystem. Re: (Score:2) Objects can be serialized and the result looks like a file. More generally, everything is a namespace/filesystem. Yep. There's a very close connection between objects, dictionaries, relational tables, files/filesystems, and functions - all centred around binary relations, a fairly well-understood mathematical object - which seems well worth exploring. However, there haven't been (to my knowledge) many languages which attempt to explore this connection at a fundamental level. Here's a suggestion: we could fairly simply extend S-expressions so they allow for multiple lists or atoms after the dot in a dotted pair. This wou Re: (Score:2) In a limited sense, the internet is already represented by /dev/tcp. For example, in bash do: cat That feature is part of bash itself and so only works within a bash script. A nice thing in plan9 is that a system daemon can be mounted as /dev/tcp and supply that service for any process that cares to use it (and has permission). I suspect (but do not know for a fact) that that is inspired by plan9. The concept could be expanded upon by adding persistence such that once it learned of a host it would show up Re: (Score:2) Re: (Score:2) But a stream of bytes is inherently too low an abstraction to build everything on. How about taking it just one step forward to a stream of streams? Then we could at least create object-like structures but with minimal overhead. Plus, it would be a fully recursive definition that would lend itself to virtualisation. Of course, S-expressions are only 56 years old [wikipedia.org] so such a radical proposal isn't likely to be adopted any time soon. Re: (Score:2) Re: (Score:2) Re: (Score:2) Java has primitives, which aren't Objects. 
(though there's a compiler hack known as auto-boxing, which when misused can lead to NullPointerExceptions) Perhaps you're thinking of Ruby. Re: (Score:2) > Something like everything is an object or some such paradigm is much more interesting. I see you contracted the Java Disease. I'm sorry for you. (and no: I know about Smalltalk and all that. Still it's called the Java Disease. Guess why?) I'll guess- because you're a moron? Re: (Score:2) The registry is nobody's friend. A bunch of indistinct junk values with opaque names that get dropped all over the place like rabbit pellets and never cleaned up. my thoughts on plan9 (Score:2, Interesting) I just want to note that I am surprised by how many useless troll comments there are on this topic. Little more than a decade ago I tried out Inferno, actually purchased a copy still have the box even. My take away was that it was interesting, but not very useful. I could not do very much with it. I learned the Limbo programming language that came with it for fun because I like learning new languages. But, after that I went back to Linux again. Then I needed a job after I graduated from university and there w Re: (Score:2) worse than that, MS could CONTRIBUTE file X (Score:2) The problem is worse than that, in my view. Suppose Bell has a patent on foo. Foo is not used in Plan 9. Microsoft wants the foo patent to go away. Microsoft puts a non-obvious reference to foo in their new raid card driver, then contributes a Plan 9 port of the driver. Alcatel is still distributing Plan 9, now with the reference to foo, at least for a few hours until they notice the problem. Alcatel has given up their patent on foo by briefly distributing Microsoft's code Re: (Score:2) Agreed on the license thing. I tend to view software development in terms of "raising the roof" vs "raising the floor". If something's new and unique, it's "raising the roof" and merits some protection. If something's old and not particularly special, it should be freely available (BSD/MIT free) so that it can form a new "base" level of performance that anyone and his dog can build on. Plan 9 is not new by any stretch of the imagination, and by trying to keep it restricted by GPL terms they've made it unattr Unimpressed. (Score:2, Troll) Unimpressed. I was involved in the genesis of no less than 5 major open source projects and 7 not so major. License is always a political thing. It has benefitted Samba, benefitted Linux less, Benefitted Hurd not at all, and benefitted Apache, OpenLDAP, and the BSD's to varying degrees. If they wanted to displace Mach in Hurd, they would have GPLv3'ed it (or done a "GPLv2 or later thing) so RMS could play daddy. They didn't. They're not going to displace Linux, which is the poster boy of GPL through v2, a Re: (Score:2) Perhaps, *gasp*, political reasons weren't behind the licence choice or indeed the release... Re: (Score:2) Probably the last thing anyone would want with their project merely because he's too busy. Re: (Score:2) So what would impress you? Having a lawyer write up a whole new license? Keeping it closed source? too late (Score:3) It's a shame that this has come so late. If AT&T hat open source Plan 9 right when it was being developed, it might have saved FOSS from the mess of IPC and distributed computing tools it currently has. Re: (Score:2) Plan 9 predates Linux, nobody was open sourcing a commercial product back then. The most commercial product, I can think of, that went open source, was Blender. 
Plan 9 from User Space (Score:4, Interesting) Does this mean Plan 9 from User Space [swtch.com] (an implementation of Plan 9 tools and libraries for UNIX and Linux) will be GPLv2 licensed too now? Licensing (Score:2, Interesting) Unless I'm reading it wrong, it previously appears to've been released under a BSD-like license that is non-copyleft, allows commercial redistribution. The only reason it's GPL incompatible is because they describe the venue of law under which the agreement is binding. And they aren't dual-licensing, but simply relicensing from one to the other. That...is actually a step backwards. In general. I suppose for this particular code release, there's no difference of practical value, but in general it's still goin Software freedom for derivatives is a good thing. (Score:2) Nothing in the GPL prohibits commercial redistribution. The GPL aims to prohibit proprietarization. Commercial redistribution and proprietarization are not the same thing. Ensuring software freedom for users of derivatives is a good thing. It would have been better if the follow on was (Score:2) Coolest thing is learning sharing incorporating (Score:3) For me, the coolest thing about any software becoming GPL, or released GPL from the outset, is the immediate learning and sharing possible with anyone that reads it. Sometimes it allows other projects to say, "excellent idea, let's incorporate that, and give credit to them", which to my thinking, means all other GPL project(s) can potentially benefit each other synergistic-ly. Plan 9 Torrent (Score:2) Site doesn't seem to be accepting any connections to download this so here's a magnet uri: Magnet Link [magnet] If only they had published it under GPLv3 (Score:2) Re: (Score:2) No, you're wrong. This is the only One True OS : [pudge.net] (In other words, the 90's called they want their joke back) Re: (Score:2) Re: (Score:3) God told him to write it [templeos.org] (or so he says). Re: (Score:2) Re: (Score:2) Re: (Score:3) And a Beowulf cluster.
https://slashdot.org/story/198237
CC-MAIN-2017-47
en
refinedweb
QtCharts causing linking errors (LNK2019)

Hi, I've been trying to use QtCharts for Qt 5.7 for VS 2013, 32 bit. However, whenever I try to use a QtChart object, for example QHorizontalStackedBarSeries, it results in numerous linking errors. Here is an example of a cpp file where this occurs:

    #include <qtcharts/qchartview>
    #include <qtcharts/qbarseries>
    #include <qtcharts/qbarset>
    #include <qtcharts/qlegend>
    #include <qtcharts/qbarcategoryaxis>
    #include <qtcharts/qhorizontalstackedbarseries>

    using namespace std;
    using namespace QtCharts;

    ClassName::ClassName()
    {
    }

    ClassName::FunctionName()
    {
        QHorizontalStackedBarSeries *series = new QHorizontalStackedBarSeries();
    }

This results in the following errors:

Error 2 error LNK2019: unresolved external symbol "__declspec(dllimport) public: __thiscall QtCharts::QHorizontalStackedBarSeries::QHorizontalStackedBarSeries(class QObject *)" (__imp_??0QHorizontalStackedBarSeries@QtCharts@@QAE@PAVQObject@@@Z) referenced in function "public: __thiscall ClassName::FunctionName(class QWidget *)" (??0ClassName@@QAE@PAVQWidget@@@Z) ProjectName\ClassName.obj ProjectName

Error 3 error MSB6006: "link.exe" exited with code 1120. C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V120\Microsoft.CppCommon.targets 607 5 ProjectName

Error 4 error LNK2019: unresolved external symbol "__declspec(dllimport) public: virtual __thiscall QtCharts::QHorizontalStackedBarSeries::~QHorizontalStackedBarSeries(void)" (__imp_??1QHorizontalStackedBarSeries@QtCharts@@UAE@XZ) referenced in function "public: virtual void * __thiscall QtCharts::QHorizontalStackedBarSeries::`scalar deleting destructor'(unsigned int)" (??_GQHorizontalStackedBarSeries@QtCharts@@UAEPAXI@Z) ProjectName\ClassName.obj ProjectName

Error 5 error LNK2001: unresolved external symbol "public: virtual struct QMetaObject const * __thiscall QtCharts::QHorizontalStackedBarSeries::metaObject(void)const " (?metaObject@QHorizontalStackedBarSeries@QtCharts@@UBEPBUQMetaObject@@XZ) ProjectName\ClassName.obj ProjectName

Error 6 error LNK2001: unresolved external symbol "public: virtual int __thiscall QtCharts::QHorizontalStackedBarSeries::qt_metacall(enum QMetaObject::Call,int,void * *)" (?qt_metacall@QHorizontalStackedBarSeries@QtCharts@@UAEHW4Call@QMetaObject@@HPAPAX@Z) ProjectName\ClassName.obj ProjectName

Error 7 error LNK2001: unresolved external symbol "public: virtual void * __thiscall QtCharts::QHorizontalStackedBarSeries::qt_metacast(char const *)" (?qt_metacast@QHorizontalStackedBarSeries@QtCharts@@UAEPAXPBD@Z) ProjectName\ClassName.obj ProjectName

Error 8 error LNK2001: unresolved external symbol "public: virtual enum QtCharts::QAbstractSeries::SeriesType __thiscall QtCharts::QHorizontalStackedBarSeries::type(void)const " (?type@QHorizontalStackedBarSeries@QtCharts@@UBE?AW4SeriesType@QAbstractSeries@2@XZ) ProjectName\ClassName.obj ProjectName

Error 9 error LNK1120: 6 unresolved externals ProjectName\ClassName.obj ProjectName

This will also happen with other QtChart objects, such as QBarSet. Would anyone have an idea what could be causing this? I've already tried:
- cleaning the project
- in QtLauncher, cleaning the project and running QMake
- adding "QT += charts" in the .pro file

Any input would be appreciated, thanks!

- kshegunov Qt Champions 2016

@ashkan849 Hello,
First a very quick note. These types of includes:

    #include <qtcharts/qbarcategoryaxis>

will render your program uncompilable on any platform different from Windows. My advice is to stick to the standard way of including the headers, i.e.
respecting their case.

Now to the main problem: have you installed the QtCharts module?
Kind regards.

Yes, I believe you are right; I think you mean that I should have it as:

    #include <QtCharts/QBarCategoryAxis>

I have installed Qt 5.7 for VS2013 (32 bit), which includes the source files for QtCharts in the folder <Qt directory>\5.7\msvc2013\include\QtCharts. Is that sufficient?

Hi and welcome to devnet,

It should, yes. However, there's something not clear: did you re-run qmake after adding QT += charts?

Thanks! And yes, I did run qmake after adding QT += charts.

Good, then the next thing to check: do you have the lib files available? They should be there, but it doesn't hurt to double check.

- kshegunov Qt Champions 2016

> Yes, I believe you are right; I think you mean that I should have it as

Yes, that's what I meant.

> I have installed Qt 5.7 for VS2013 (32 bit), which includes the source files for QtCharts in the folder <Qt directory>\5.7\msvc2013\include\QtCharts. Is that sufficient?

I'd be more concerned, just like @SGaist, whether you can find the import library (the .lib) file and the actual binary (the .dll). Could you search for them in the appropriate folders?

You guys are on to something. Apparently I forgot to include QtCharts.lib/QtChartsd.lib in Properties -> Linker -> Input -> Additional Dependencies. I then also had to copy QtCharts.dll and QtChartsd.dll to the release and debug folders respectively, and now my program runs. Thanks so much for your help!

Aha! You're using Visual Studio to build your application… That was the big missing piece of the puzzle… I thought you were using Qt Creator. You shouldn't need to copy the .dlls since your application already worked before. In any case, glad you found it out.

Since you have it working now, please mark the thread as solved using the "Topic Tools" button so that other forum users may know a solution has been found :)

Will do. Thanks again for your help!

Thanks a lot! This solved my problem; it is really easy to forget that!
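For readers hitting the same LNK2019 errors, here is a small self-contained example (not from the thread) that exercises the Charts module with the properly-cased includes kshegunov mentions. It assumes Qt 5.7 or later with the Charts module installed. In a qmake project you also need QT += charts in the .pro file; in a plain Visual Studio project the charts import library must be added to the linker's Additional Dependencies (the exact .lib name depends on the Qt version and kit), which is exactly what was missing above.

    // main.cpp - minimal sketch, not from the thread.
    #include <QApplication>
    #include <QtCharts/QChart>
    #include <QtCharts/QChartView>
    #include <QtCharts/QHorizontalStackedBarSeries>
    #include <QtCharts/QBarSet>

    using namespace QtCharts;

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        // One stacked bar series with a single set of three values.
        QBarSet *set = new QBarSet("Sample");
        *set << 1 << 2 << 3;

        QHorizontalStackedBarSeries *series = new QHorizontalStackedBarSeries();
        series->append(set);

        QChart *chart = new QChart();
        chart->addSeries(series);
        chart->createDefaultAxes();

        QChartView view(chart);
        view.resize(400, 300);
        view.show();

        return app.exec();
    }

If this compiles but fails to link with the same unresolved QtCharts symbols, the import library is not reaching the linker, which mirrors the resolution found in the thread above.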
https://forum.qt.io/topic/69915/qtcharts-causing-linking-errors-lnk2019
CC-MAIN-2017-47
en
refinedweb
The long awaited release 8.5.8.0 is here! Originally intended to be released in March of 2016, a shift in strategy has resulted in this release coming out in June. We greatly appreciate everyone’s patience, and hope that we find the result to be rewarding for everyone. The biggest change in this release is an overhaul of the Form Rendering Engine. One of our biggest complaints for Touch UI was the lack of customizability of the form. The original implementation of forms provided for limited capability in ordering data fields. Categories were used to group data fields into new rows, columns, and tabs. In response to these comments,we have added the ability to define HTML templates that allow precise positioning of data fields and custom content into the form. A simple example can be seen below. These HTML templates will be placed under the ~/Views folder of your project. When loading a view, the application framework will attempt to find the file “[Controller Name].[View ID].html”. If not found, it will generate a default template. A snippet from the template can be seen below: <div data- <div data- <div data- <div data- <div data-This is the order information.</div> </div> <div data- <span data-CustomerID</span> <span data-[CustomerID]</span> </div> <div data- <span data-EmployeeID</span> <span data-[EmployeeID]</span> </div>... The entire template must be wrapped in a data-layout element, and various different data-container elements are available for easy positioning of elements. Rows can be used to automatically position fields. It is always possible to use your own elements and position items manually. More documentation on this feature will be coming soon. These layouts work perfectly well on mobile devices, too. In order to make the process of developing form templates easy, a visual Form Designer will be included in future releases. The engine utilizes our new Universal Input API in order to react to user clicks and key presses. When the user clicks on a label or field, the API will find and build the relevant input control in order to handle that field. When the user leaves the field, all values that show that field value will be updated. The API handles Tab, Shift+Tab, Up, Down, Left, Right, Enter, Shift+Enter keys in order to move between fields. With the new Form Rendering Engine, we decided to overhaul the lookup input control to allow your users to get their work done easier and faster. The new lookup allows the user to type in their search query, and a suggestion list will automatically be loaded. Use arrow keys to navigate up and down in the list. Press Enter on your keyboard to select an option. The user can click on the arrow to the right of the field to navigate to the lookup view. The user can also press Ctrl+Space to activate the list. From there, the user can create a new lookup record, or jump to the lookup view by pressing “See All”. “Ctrl+Enter” will also activate the lookup view. Client-side data caching and filtering is employed in order to ensure that performance is top-notch. Dates have always been a difficult data type to work with. Every browser implements native input differently, some working better than others. Rather than compromising in order to utilize the native input of every browser, a new Calendar Input has been developed. This input control is an extension of the calendar sidebar filter component, which also includes the upgrades. The input will be activated when the user focuses on the field. Selecting a day in the month will set that date. 
The user can drag or use the arrow buttons to move between different months. Clicking on the header will allow the user to select the month or year. If the data field also renders time, a clock will be rendered. The user can click on an area in the clock to set the time. Clicking on the hour or minute part of the header will allow changing that part of the time. Clicking on AM/PM will toggle the time of day. The user will continue to be able to manually edit the value in the input control. If the input is activated on a very small screen or mobile device, the Calendar Input will be displayed in the center of the screen. The user must click “OK” to save the new value, “Cancel” to close the popup, or “Clear” to reset the value. Notice that days that contain data in the month are bold. Hovering over the day will reveal a count of records on that day. The client library makes asynchronous requests to pull the data and caches it on the client. If performance is a concern, this feature can be disabled by tagging the data field “calendar-input-data-none”. In the past, Code OnTime users needed to configure pages with multiple data views in order to display lists of data related to the master record. This process led to a disconnect between the data and presentation layers of the application. Release 8.5.8.0 changes the paradigm. A new field type has been introduced, “DataView”. This will allow users to embed lists of records directly into the forms of master records. This change brings controllers more inline with how users would intuitively understand business objects. Simply define a field of type “DataView”, point to the correct controller, specify the filter field, and create a data field to bind it to the form. All pages that refer to that form will now reveal relevant child records. The traditional method of defining child data views still works. This can be used for child data views that should only be displayed on certain pages (or define another view that excludes the data view field). Future releases of the app generator will allow users to perform inline editing of child records at the same time that the master record is being modified. This brings us to the new and improved grid. In previous releases, it was difficult to set the size of grid columns to match up with the intended look and feel. Release 8.5.8.0 has made the grid sizing process more transparent. The grid will now use the exact size of each data field in columns when allocating space. If there are no columns defined, then the columns will be set to 2/3s of the length of the field, or various preconfigured lengths depending on the type of the field. In order to make the client library more intelligent and require less involvement of the user, a new feature has been added to the grid – “Fit To Width”. This will automatically shrink the grid columns to fit the screen, down to a certain limit. The space allocated to each column is equal to the proportion of “columns” that field was assigned. This feature is automatically enabled for every grid. If the behavior is undesired, the data view can be tagged “grid-fit-none” to disable the functionality. The width of the grid may surpass the width of the page – the user will then be able to drag the grid left and right to bring different columns into view. Touch input is now supported for dragging. If a column is too small or big to see the data, the user can click and drag the divider between columns in order to resize. 
Future releases will offer the ability to reorder the columns on the client. Touch UI applications offer several different display densities in order to fit the needs of every user. The smallest size, Condensed, was still larger than Desktop UI. Therefore, we are introducing “Tiny” display density, which uses the same font and font size of the desktop. The picture below compares “Comfortable” and “Tiny” display densities. Code business rules in previous releases of Code OnTime app generator would list each field in the parameters of the method. Controllers with over a hundred fields would result in sprawling and ungainly method signatures. To update a data field for the client, it was necessary to call the UpdateFieldValue() method. See an example of legacy code below.) { UpdateFieldValue("Quantity", 1); } } } Release 8.5.8.0 will now generate data model objects for each controller that has a code business rule, and will pass this object as a parameter to the method. The setters for each property of the data model object will update the corresponding field on the client side. using System.Data; using MyCompany.Data; using MyCompany.Models; namespace MyCompany.Rules { public partial class OrderDetailsBusinessRules : MyCompany.Data.BusinessRules { [Rule("r100")] public void r100Implementation(OrderDetailsModel instance) { instance.Quantity = 1; } } } The new business rule format is vastly easier to read and understand, even for non-professional C# or Visual Basic developers. Legacy business rules will continue to function as they did before. Release 8.5.8.0 no longer offers a way to enable data access objects globally. The developer will need to enable data access objects on each controller by enabling the “Generate Data Access Objects” checkbox. These objects will extend the business object models. Models and data access objects will now be stored under ~/App_Code/Models folder. These are just some of the new features in release 8.5.8.0. A more comprehensive list can be seen below: We were not able to finalize some of the features that we desired to include in this release, due to time constraints. Expect to see these features in future releases:
http://codeontime.com/blog/2016/06/8580-has-landed-new-form-rendering
CC-MAIN-2017-47
en
refinedweb
General A root category General/Appearance A child category under General General/Appearance/Console Another child category under General/Appearance Data Access Another root categoryPreferences lie under a particular category. They are identified by a name called key, which must be unique within that category, but can be reused in other categories. The following set of HIPE preferences (mostly based on Eclipse/IDLDE's preferences) was agreed as the starting structure for organising HIPE's configuration. They are out of date, see below the text tree for a screenshot of a newer version: per preference. This is how you specify the keys belonging to the associated category. Each preference is handled by a PreferenceHandler, which provides the means of updating the GUI with the existing preference value and to get any change that the user introduces, and more. You may consider to extend AbstractPreferenceHandler: from herschel.ia.gui.kernel import ExtensionRegistry, Extension from herschel.ia.gui.kernel.prefs import UserPreferences CATEGORY = UserPreferences.CATEGORY REGISTRY = ExtensionRegistry.getInstance() # Preferences categories REGISTRY.register(CATEGORY, Extension( "Some/Category", "herschel.some.package.SimplePreferencesPanel", None, # unused None)) # unused # Cleanup del(ExtensionRegistry, Extension, UserPreferences, CATEGORY, REGISTRY). // The notifier class public class SplittedEditor extends AbstractEditorComponent<SomeSelection> { // Suppose we detect changes in the split pane here private void splitMoved(float newLocation) { SiteEvent event = new PreferenceChangeRequestEvent(this, category, key, newLocation); getPart().getEventHandler().trigger(event); } } // The preferences panel public class SomePreferencesPanel extends PreferencesPanel implements SiteEventListener { @Override protected void makeContent() { // Fill the panel ... // and register to split changes SiteEventHandler eventHandler = SiteUtil.getSite().getEventHandler(); eventHandler.addEventListener(PreferenceChangeRequestEvent.class, this); } @Override public void selectionChanged(SiteEvent event) { Float location = (Float)((PreferenceChangeRequestEvent)event).getNewValue(); setValue("splitLocation", location); // update the value in the panel saveChanges(); // and save it } }in your client code, for example because your module cannot depend on ia_gui_kernel, but still want to provide a panel for the user in the preferences window. For instance, you want to add a panel for letting the user to set the Versant server and a database name. In this case, your panel should be written in a module that can depend on ia_gui_kernel; however, herschel.versant.store(in this example) can only use Configuration for reading the preferences values. The solution is to provide the PreferenceHandler the name of the associated property, so the framework would override the property in hipe.props. As simple as.
http://herschel.esac.esa.int/twiki/bin/view/Public/DpHipePreferences
CC-MAIN-2017-47
en
refinedweb
gnutls_x509_crq_sign(3)                      gnutls                      gnutls_x509_crq_sign(3)

NAME
gnutls_x509_crq_sign - API function

SYNOPSIS
#include <gnutls/compat.h>

int gnutls_x509_crq_sign(gnutls_x509_crq_t crq, gnutls_x509_privkey_t key);

ARGUMENTS
gnutls_x509_crq_t crq        should contain a gnutls_x509_crq_t type
gnutls_x509_privkey_t key    holds a private key

DESCRIPTION
This function is the same as gnutls_x509_crq_sign2() with no flags, and SHA1 as the hash algorithm.

RETURNS
On success, GNUTLS_E_SUCCESS (0) is returned, otherwise a negative error value.

Use gnutls_x509_crq_privkey_sign() instead.

                                                              gnutls_x509_crq_sign(3)
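A short usage sketch (the request and key are assumed to have been created and populated elsewhere; error handling is minimal):

/* Sign an X.509 certificate request with a private key.  Illustrative only. */
#include <gnutls/gnutls.h>
#include <gnutls/x509.h>
#include <gnutls/compat.h>
#include <stdio.h>

static int sign_request(gnutls_x509_crq_t crq, gnutls_x509_privkey_t key)
{
        int ret = gnutls_x509_crq_sign(crq, key);   /* SHA1 digest, no flags */
        if (ret != GNUTLS_E_SUCCESS) {
                fprintf(stderr, "signing failed: %s\n", gnutls_strerror(ret));
                return ret;
        }
        return 0;
}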
http://man7.org/linux/man-pages/man3/gnutls_x509_crq_sign.3.html
CC-MAIN-2017-47
en
refinedweb
Using Solarium with SOLR for Search – Advanced Using Solarium for SOLR Search - Using Solarium with SOLR for Search – Setup - Using Solarium with SOLR for Search – Solarium and GUI - Using Solarium with SOLR for Search – Implementation - Using Solarium with SOLR for Search – Advanced This is the fourth and final part of a series on using Apache’s SOLR search implementation along with Solarium, a PHP library to integrate it into your application as if it were native.. Highlighting Results with SOLR The Highlighting component allows you to highlight the parts of a document which have matched your search. Its behavior around what gets shown depends on the field – if it’s a title chances are it’ll show it in its entirety with the matched words present, and longer fields – such as a synopsis or the body of an article – it will highlight the words but using snippets; much like Google’s search results do. To set up highlighting, you first need to specify the fields to include. Then, you can set a prefix and corresponding postfix for the highlighted words or phrases. So for example, to make highlighted words and phrases bold: $hl = $query->getHighlighting(); $hl->setFields(array('title', 'synopsis')); $hl->setSimplePrefix('<strong>'); $hl->setSimplePostfix('</strong>'); Alternatively, to add a background color: $hl = $query->getHighlighting(); $hl->setFields(array('title', 'synopsis')); $hl->setSimplePrefix('<span style="background:yellow;">'); $hl->setSimplePostfix('</span>'); Or you can even use per-field settings: $hl = $query->getHighlighting(); $hl->getField('title')->setSimplePrefix('<strong>')->setSimplePostfix('</strong>'); $hl->getField('synopsis')->setSimplePrefix('<span style="background:yellow;">')->setSimplePostfix('</span>'); Once you’ve configured the highlighting component in your search implementation, there’s a little more work to do involved in displaying it in your search results view. First, you need to extract the highlighted document from the highlighting component by ID: $highlightedDoc = $highlighting->getResult($document->id); Now, you can access all the highlighted fields by iterating through them, as properties of the highlighted document: if($highlightedDoc){ foreach($highlightedDoc as $field => $highlight) { echo implode(' (...) ', $highlight) . '<br/>'; } } Or, you can use getField(): if($highlightedDoc){ $highlightedTitle = $highlightedDoc->getField('title'); } Highlighted fields don’t simply return text, however Instead, they’ll return an array of “snippets” of text. If there are no matches for that particular field – for example if your search matched on title but not synopsis – then that array will be empty. The code above will return a maximum of one snippet. To change this behavior, you can use the setSnippets() method: $hl = $query->getHighlighting(); $hl->setSnippets(5); // . . . as before . . . For example, suppose you search for the word “star”. One of the results has a synopsis that reads as follows:. The highlighted document’s synopsis array will contain three items: - One way to display multiple snippets is to implode them, for example: implode(' ... ', $highlightedDoc->getField('synopsis')) This results in the following: There are a number of other parameters you can use to modify the behavior of the highlighting component, which are explained here. Integrating Highlighting into Our Movie Search Now that we’ve covered how to use highlighting, integrating it into our movie search application should be straightforward. 
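One wiring detail the snippets above take for granted: the $highlighting variable is the highlighting component pulled out of the Solarium result set and handed to the view. A rough sketch of that hand-off in the controller follows; the View::make() call and the variable names are assumptions about how the example application is put together, not code from the original series.

// In HomeController, after executing the query:
$resultset = $this->client->select($query);
$highlighting = $resultset->getHighlighting();

return View::make('home.index', array(
    'resultset'    => $resultset,
    'highlighting' => $highlighting,
));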
The first thing to do is modify app/controllers/HomeController.php by adding the following, just before we run the search: // Get highlighting component, and apply settings $hl = $query->getHighlighting(); $hl->setSnippets(5); $hl->setFields(array('title', 'synopsis')); $hl->setSimplePrefix('<span style="background:yellow;">'); $hl->setSimplePostfix('</span>'); // Execute the query and return the result $resultset = $this->client->select($query); Then the search results – which you’ll remember are in app/views/home/index.blade.php – become: @if (isset($resultset)) <header> <p>Your search yielded <strong>{{ $resultset->getNumFound() }}</strong> results:</p> <hr /> </header> @foreach ($resultset as $document) <?php $highlightedDoc = $highlighting->getResult($document->id); ?> <h3>{{ (count($highlightedDoc->getField('title'))) ? implode(' ... ', $highlightedDoc->getField('title')) : $document->title }}</h3> <dl> <dt>Year</dt> <dd>{{ $document->year }}</dd> @if (is_array($document->cast)) <dt>Cast</dt> <dd>{{ implode(', ', $document->cast) }}</dd> @endif </dl> {{ (count($highlightedDoc->getField('synopsis'))) ? implode(' ... ', $highlightedDoc->getField('synopsis')) : $document->synopsis }} @endforeach @endif Notice how each search result essentially mixes and matches fields between the search result document, and the highlighted document – the latter is effectively a subset of the former. Depending on your schema, you may have all your fields available in the highlighted version. Suggester – Adding Autocomplete The Suggester component is used to suggest query terms based on incomplete query input. Essentially it examines the index on a given field and extracts search terms which match a certain pattern. You can then order those suggestions by frequency to increase the relevance of the search. To set up the suggester, we need to configure it in your solrconfig.xml file. Open it up place the following snippet of XML somewhere near the other <searchComponent> declarations: > <str name="field">title</str> <!-- the indexed field to derive suggestions from --> <float name="threshold">0.005</float> <str name="buildOnCommit">true<> You’ll notice a number of references to “spellcheck”, but this is simply because the Suggester component reuses much of that functionality internally. The important bit to notice is the <str name="field"> item, which tells the component that we want to use the title field on which to base our suggestions. Restart SOLR, and you can now try running a suggest query through your web browser: `` (You may need to alter the port number, depending on how you set up SOLR) The output should look a little like this: <?xml version="1.0" encoding="UTF-8"?> <response> <lst name="responseHeader"> <int name="status">0</int> <int name="QTime">0</int> </lst> <lst name="spellcheck"> <lst name="suggestions"> <lst name="ho"> <int name="numFound">4</int> <int name="startOffset">0</int> <int name="endOffset">2</int> <arr name="suggestion"> <str>house</str> <str>houses</str> <str>horror</str> <str>home</str> </arr> </lst> <str name="collation">house</str> </lst> </lst> </response> As you can see, SOLR has returned four possible matches for “ho” – *ho**use, **ho**uses, **ho**rror and **ho**me. Despite *home and horror being before house in the alphabet, house appears first by virtue of being one of the most common search terms in our index. Let’s use this component to create an autocomplete for our search box, which will suggest common search terms as the user types their query. 
First, define the route: public function getAutocomplete() { // get a suggester query instance $query = $client->createSuggester(); $query->setQuery(Input::get('term')); $query->setDictionary('suggest'); $query->setOnlyMorePopular(true); $query->setCount(10); $query->setCollate(true); // this executes the query and returns the result $resultset = $client->suggester($query); $suggestions = array(); foreach ($resultset as $term => $termResult) { foreach ($termResult as $result) { $suggestions[] = $result; } } return Response::json($suggestions); } Include JQuery UI (and JQuery itself) in your layout: <script src="//code.jquery.com/jquery-1.11.0.min.js"></script> <script src="//code.jquery.com/ui/1.10.4/jquery-ui.min.js"></script> Include a JQuery UI theme: <link rel="stylesheet" type="text/css" href="//code.jquery.com/ui/1.10.4/themes/redmond/jquery-ui.css"> And finally, add some JS to initialize the autocomplete: $(function () { $('input[name="q"]').autocomplete({ source: '/autocomplete', minLength: 2 }); }); That’s all there is to it – try it out by running a few searches. Array-based Configuration If you prefer, you can use an array to set up your query – for example: $select = array( 'query' => Input::get('q'), 'query_fields' => array('title', 'cast', 'synopsis'), 'start' => 0, 'rows' => 100, 'fields' => array('*', 'id', 'title', 'synopsis', 'cast', 'score'), 'sort' => array('year' => 'asc'), 'filterquery' => array( 'maxprice' => array( 'year' => 'year:[1990 TO 1990]' ), ), 'component' => array( 'facetset' => array( 'facet' => array( array('type' => 'field', 'key' => 'rating', 'field' => 'rating'), ) ), ), ); $query = $this->client->createSelect($select); Adding Additional Cores At startup, SOLR traverses the specified home directory looking for cores, which it identifies when it locates a file called core.propeties. So far we’ve used a core called collection1, and you’ll see that it contains three key items: The core.propertes file. At its most basic, it simply contains the name of the instance. The conf directory contains the configuration files for the instance. As a minimum, this directory must contain a schema.xml and an solrconfig.xml file. The data directory holds the indexes. The location of this directory can be overridden, and if it doesn’t exist it’ll be created for you. So, to create a new instance follow these steps: - Create a new directory in your home directory – moviesin the example application - Create a confdirectory in that - Create or copy a schema.xmlfile and solrconfig.xmlfile in the confdirectory, and customize accordingly - Create a text file called core.propertiesin the home directory, with the following contents: name=instancename …where instancename is the name of your new directory. Note that the schema.xml configuration that ships in the examples directory contains references to a number of text files – for example stopwords.txt, protwords.txt etc – which you may need to copy over as well. Then restart SOLR. You can also add a new core via the administrative web interface in your web browser – click Core Admin on the left hand side, then Add Core. Additional Configuration There are a few additional configuration files worth a mention. The stopwords.txt file – or more specifically, the language-specific files such as lang/stopwords_en.txt – contain words which should be ignored by the search indexer, such as “a”, “the” and “at”. In most cases, you probably won’t need to modify this file. 
Depending on your application, you may find that you need to add words to protwords.txt. This file contains a list of protected words that aren’t “stemmed” – that is, reduced to their basic form; for example “asked” becomes “ask”, “working” becomes “work”. Sometimes stemming attempts to “correct” words, perhaps removing what it thinks are erroneous letters of numbers at the end. You might be dealing with geographical areas and find that “Maine” is stemmed to “maine”. You can specify synonyms – words with the same meaning – in synonyms.txt. Separate synonyms with commas on a per-line basis. For example: GB,gib,gigabyte,gigabytes MB,mib,megabyte,megabytes Television, Televisions, TV, TVs You may also use synoyms.txt to help correct common spelling mistakes using synonym mappings, for example: assassination => assasination environment => enviroment If you’re using currency fields, you may wish to update and keep an eye on currency.xml, which specifies some example exchange rates – which of course are highly volatile. Summary In this series we’ve looked at Apache’s SOLR implementation for search, and used the PHP Solarium library to interact with it. We’ve installed and configured SOLR along with an example schema, and built an application designed to search a set of movies, which demonstrates a number of features of SOLR. We’ve looked at faceted search, highlighting results and the DisMax component. Hopefully this will give you enough of a grounding to adapt it to use SOLR for search in your applications. For further reading, you may wish to download the SOLR reference guide as a PDF, or consult the Solarium documentation. No Reader comments
http://www.sitepoint.com/using-solarium-solr-search-advanced/
CC-MAIN-2015-11
en
refinedweb
I clicked Libraries -> add Jar -> portlet-2.0.jar so I can use javax.portlet. The files look like they are there, but when I try to do import javax.portlet; I get: Unused Import Import section does not correspond to the specified code style rules ---- Please help, thanks.
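A likely cause, though it is not confirmed in the thread: javax.portlet is a package, not a class, so it cannot be imported by itself, which is why the IDE flags the line. Importing a type from the package (or the whole package with a wildcard) should resolve once the JAR is on the classpath. A minimal sketch; the class names are examples only:

import javax.portlet.GenericPortlet;   // import a specific class from portlet-2.0.jar
// or: import javax.portlet.*;         // wildcard import of the whole package

public class MyPortlet extends GenericPortlet {
    // portlet code goes here
}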
https://netbeans.org/projects/www/lists/nbj2ee/archive/2013-02/message/29
CC-MAIN-2015-11
en
refinedweb
I'm trying to create a menu-style enqueue and dequeue with pointers. I have this so far. I want to use the left and right pointers to, let's say, dequeue a number. I also thought about using two temps. Any suggestions?

Code:
#include<iostream>
using namespace std;

struct qNode{
    int num;
    qNode *next;
    qNode *left;
    qNode *right;
};

int main(){
    qNode *temp = NULL;
    qNode *front = NULL;
    qNode *back = NULL;

    temp = new qNode;
    temp->num = 7;
    cout<<temp->num<<endl;
    cout<<endl;
    // cout<<temp<<endl;

    front = temp;
    back = front;
    // cout<<"front: "<<front<<endl;
    // cout<<"back: "<<back<<endl;

    temp = new qNode;
    temp->num = 9;
    back->next = temp;
    back = temp;
    cout<<front->next->num<<endl;

    temp = front;
    while(temp){
        cout<<temp->num<<endl;
        temp = temp->next;
    }

    front = front->next;
    delete temp;

    return 0;
}
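One possible way to factor the enqueue/dequeue steps into functions; this is a sketch offered as a suggestion, not code from the thread, and it drops the unused left/right members in favour of a plain singly linked queue:

#include <iostream>
using namespace std;

struct qNode {
    int num;
    qNode *next;
};

// Append a value at the back of the queue.
void enqueue(qNode *&front, qNode *&back, int value) {
    qNode *node = new qNode;
    node->num = value;
    node->next = NULL;
    if (back) back->next = node;   // non-empty queue: link after the old back
    else      front = node;        // empty queue: new node is also the front
    back = node;
}

// Remove the value at the front; returns false when the queue is empty.
bool dequeue(qNode *&front, qNode *&back, int &value) {
    if (!front) return false;
    qNode *old = front;
    value = old->num;
    front = front->next;
    if (!front) back = NULL;       // queue just became empty
    delete old;
    return true;
}

int main() {
    qNode *front = NULL, *back = NULL;
    enqueue(front, back, 7);
    enqueue(front, back, 9);

    int v;
    while (dequeue(front, back, v))
        cout << v << endl;         // prints 7 then 9
    return 0;
}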
http://cboard.cprogramming.com/cplusplus-programming/96316-enqueue-dequeue-pointer.html
CC-MAIN-2015-11
en
refinedweb
 * <p/>
 * There are some cases where strictly behaving as a compliant caching proxy would result in strange
 * behavior, since we're attached as part of a client and are expected to be a drop-in replacement.
 * The test cases captured here document the places where we differ from the HTTP RFC.
 */
public class TestAsyncProtocolDeviations extends TestProtocolDeviations {

    @Override
    protected ClientExecChain createCachingExecChain(
            final ClientExecChain backend,
            final HttpCache cache,
            final CacheConfig config) {
        return new CachingHttpAsyncClientExecChain(backend, cache, config);
    }
}
http://hc.apache.org/httpcomponents-asyncclient-4.0.x/httpasyncclient-cache/xref-test/org/apache/http/impl/client/cache/TestAsyncProtocolDeviations.html
CC-MAIN-2015-11
en
refinedweb
NAME
vhangup - virtually hangup the current tty

SYNOPSIS
#include <unistd.h>

int vhangup(void);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

vhangup(): _BSD_SOURCE || (_XOPEN_SOURCE && _XOPEN_SOURCE < 500)

ERRORS
EPERM  The calling process has insufficient privilege to call vhangup(); the CAP_SYS_TTY_CONFIG capability is required.

CONFORMING TO
This call is Linux-specific, and should not be used in programs intended to be portable.

SEE ALSO
capabilities(7), init(8)

COLOPHON
This page is part of release 3.35 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
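A minimal caller, for illustration (it needs the CAP_SYS_TTY_CONFIG capability, typically root, to succeed):

#define _BSD_SOURCE            /* see feature_test_macros(7) */
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    /* Virtually hang up the calling process's controlling terminal. */
    if (vhangup() == -1) {
        perror("vhangup");     /* EPERM without CAP_SYS_TTY_CONFIG */
        return 1;
    }
    return 0;
}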
http://manpages.ubuntu.com/manpages/precise/en/man2/vhangup.2.html
CC-MAIN-2015-11
en
refinedweb
Mike is an independent consultant and developer in Victoria, BC. He can be reached at mjenning@islandnet.com. Java applets are small Java applications designed to run inside a web browser whenever it encounters a web page that contains an applet. Since Java applets can come from anywhere and be written by anyone, it only makes sense to restrict what an applet can do on your computer. These safety restrictions are built into the Java platform and referred to as the "security sandbox." One of the consequences of this sandbox is that a Java applet cannot establish a network connection to a computer other than the web server hosting it. Unfortunately, it is often the case that a Java applet needs to make a network connection to a computer other than its hosting web server -- a database server, for example. After encountering this problem a few times, I opted for a simple solution that I implemented both as a standalone Java application and as a Windows NT service using the framework described in a previous article (see "Java Q&A," DDJ, March 2000). A Simple Solution If a Java applet hosted on .???.com/ needs to access a database on oracleserver.???.com/, it cannot do so because of the security restrictions that are placed on Java applets by the Java security model (the sandbox). The easiest solution is to create a server program that runs on the web server and listens for socket connections on a particular port. When the server program gets a socket connection, it establishes a second socket connection to the database server and then spins off two threads that effectively tie the input and output streams of the two sockets together. The program I wrote to do this task I refer to as an "IP redirector" because it effectively redirects incoming socket connections to their desired host. Java Service Revisited In my previous article, I described a framework for writing Windows NT services in Java (javaservice). Since my IP redirector needs no user interface and, for the most part, runs in the background, it is a perfect candidate for implementation as a service. The javaservice framework I developed has gone through some substantial changes since it was first published. System properties are now added from a file called "javaservice.properties" that exists in the same directory as javaservice.exe. Also, all .jar files in the same directory as javaservice.exe automatically get added to the classpath. Javaservice also uses the new invocation API and is now compatible with Hotspot JVM. The newly enhanced version of javaservice will be available as a commercial product from. In the meantime, the IPRedirectDaemon class can be invoked as a standalone with the following commands: On Windows: java -cp .\javaservice.jar;.\ipredirect.jar IPRedirectDaemon On Unix: java -cp ./javaservice.jar;./ipredirect.jar IPRedirectDaemon The IPRedirectDaemon class The Java class I wrote to do the socket redirection is IPRedirectDaemon. Because system properties are the easiest way to pass information to any Java service, IPRedirectDaemon was written to use system properties to determine which ports to listen on and which IP address/ port to redirect a given port to. Example 1 shows a few IP address redirection properties. In this example, the IPRedirectDaemon listens on ports 2433 and 1521. If IPRedirectDaemon gets a connection on port 2433 (the accepted socket), it tries to establish a connection to port 1433 on sqlserver.???.com (the real socket). 
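In the javaservice.properties format used later in the article, the setup just described for Example 1 (listen on 2433 and forward to port 1433 on sqlserver.???.com; listen on 1521 and forward on the same port) would look roughly like this; the second target host name is an assumption borrowed from the later example:

redirect1.listenport=2433
redirect1.address=sqlserver.???.com
redirect1.port=1433

redirect2.listenport=1521
redirect2.address=oracleserver.???.com
redirect2.port=1521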
Once both sockets are connected successfully, they are tied together and the input from one socket becomes the output for the other. The Java class responsible for tying the two sockets together is SocketTie. The SocketTie Class The class that does the actual work of redirecting bytes from one socket connection to another is called "SocketTie.java" (Listing One) and is trivial. The operation of SocketTie is pretty straightforward. Two Thread-derived inner classes (A_run and B_run) do the actual data pumping from one socket's InputStream to another socket's OutputStream. If either socket's InputStream or OutputStream throws an IOException, both streams are immediately closed in that thread. The Java class that listens on a specific port and establishes remote connections when a socket is accepted is SocketListen. The constructor for SocketListen looks like the following: public SocketListen(int listenport,InetAddress remoteAddress,int remoteport) The listenport parameter is the port that the SocketListen class listens on; the remoteAddress and remoteport parameters are the IP address and port of the remote host to connect to when an incoming connection is established on listenport. When the start() method of SocketListen (inherited from its base class Thread) is invoked, it creates a ServerSocket on listenport and begins accepting connections. When a connection is received, SocketListen attempts to establish another connection to remoteAddress/remoteport. If the second connection to remoteAddress is successful, a SocketTie object is created to tie the two sockets together. Listing Two is the listen() method of the SocketListen class, while Listing Three shows the IPRedirectDaemon class. To use the IP redirector on Windows NT, create a directory and unzip the ipredirect.zip file (available electronically; see "Resource Center," page 5) into that directory. One of the files in that directory will be a text file called "javaservice.properties" and will contain three lines as follows: redirect1.listenport=1521 redirect1.address=oracleserver.???.com redirect1.port=1521 Edit the javaservice.properties file using Notepad (or your favorite text editor, being sure to save as plain text) and change oracleserver.???.com to either the IP address or domain name of the host you want to redirect connections to. Change the listenport and port values to whatever values you like. Open a Command Prompt window and change the current directory so that you are in the same directory as the one you unzipped the ipredirect.zip file into. At the command prompt, enter: javaservice /install. You should see "Service 'Socket Redirector' installed" as the last line printed out. To Uninstall the socket redirector, simply type javaservice/remove. Conclusion The security model for Java applets prevents them from accessing network resources from any computer other than the hosting web server. By configuring and running the IP redirector on a web server, this limitation can be overcome easily. Because the IP redirector itself is written entirely in Java, it would be fairly trivial to adapt the IP redirector to run on any operating system with a Java run time available for it. I have already started an all-Java version of my javaservice framework and tested my services successfully on Linux. 
DDJ <H4><A NAME="l1">Listing One</H4> import java.net.*; import java.io.*; public class SocketTie { InputStream isa,isb; OutputStream osa,osb; public SocketTie(Socket a,Socket b) throws IOException { init(a.getInputStream(),a.getOutputStream(), b.getInputStream(),b.getOutputStream()); } class A_run extends Thread { public void run() { readFromA(); } } class B_run extends Thread { public void run() { readFromB(); } } A_run arun; B_run brun; private void init(InputStream _isa,OutputStream _osa, InputStream _isb,OutputStream _osb) { isa=_isa; osa=_osa; isb=_isb; osb=_osb; arun=new A_run(); brun=new B_run(); } public void start() { arun.start(); brun.start(); } private void readFromA() { int abytes; byte[] buffer=new byte[1024]; try { for (;;) { abytes=isa.read(buffer,0,1024); osb.write(buffer,0,abytes); } } catch(Exception ioe) { // underlying stream is closed try { osb.close(); } catch(IOException ioe2) { } try { isa.close(); } catch(IOException ioe3) { } return; } } private void readFromB() { int bbytes; byte[] buffer=new byte[1024]; try { for (;;) { bbytes=isb.read(buffer,0,1024); osa.write(buffer,0,bbytes); } } catch(Exception ioe) { // underlying stream is closed try { osa.close(); } catch(IOException ioe2) { } try { isb.close(); } catch(IOException ioe3) { } return; } } } <A HREF="#rl1">Back to Article</A> <H4><A NAME="l2">Listing Two</H4> package slickjava.net; import java.net.*; import java.io.*; import slickjava.net.*; public class SocketListen extends Thread { int listenport; InetAddress remoteAddress; int remoteport; public SocketListen(int listenport,InetAddress remoteAddress,int remoteport) { this.listenport=listenport; this.remoteAddress=remoteAddress; this.remoteport=remoteport; System.out.println("listening on port "+listenport+ ", redirecting to "+remoteAddress+","+remoteport); System.out.flush(); } public void run() { try { listen(); } catch(IOException ioe) { } } ServerSocket serversocket; private void listen() throws IOException { serversocket=new ServerSocket(listenport); for(;;) { Socket acceptedsocket=serversocket.accept(); Socket realsocket; try { realsocket=new Socket(remoteAddress,remoteport); } catch(IOException ioe2) { try { acceptedsocket.close(); } catch(IOException ioe1) { } continue; } SocketTie tie=new SocketTie(acceptedsocket,realsocket); tie.start(); } } public void stopListening() { try { serversocket.close(); } catch(IOException ioe) { } } } <A HREF="#rl2">Back to Article</A> <H4><A NAME="l3">Listing Three</H4> import java.util.*; import java.net.*; import java.io.*; import com.slickjava.jdaemon.*; import slickjava.net.SocketListen; public class IPRedirectDaemon implements JDaemon { public IPRedirectDaemon() { } public String getDaemonName() { return "IPRedirectDaemon2"; } public String getDisplayName() { return "Socket-Redirector"; } public String[] getDependentDaemons() { return null; } public void parseCommandLine(String[] args) { } Vector listenlist=new Vector(); private void load_socketlist() throws IOException { int i,listenport,remoteport; String key,value; InetAddress remotehost; for (i=1;i<100;i++) { key="redirect"+i+".listenport"; value=System.getProperty(key); if (value==null) break; listenport=Integer.parseInt(value); key="redirect"+i+".address"; value=System.getProperty(key); try { remotehost=InetAddress.getByName(value); } catch(UnknownHostException uhe) { System.out.println("can't resolve '"+value+"'"); break; } key="redirect"+i+".port"; value=System.getProperty(key,""+listenport); remoteport=Integer.parseInt(value); 
listenlist.addElement(new SocketListen(listenport, remotehost,remoteport)); } } public boolean tryToStart(JDaemonController c) { int checkpoint=0; c.daemonStarting(checkpoint++, 4000); try { System.out.println("Loading socket list..."); load_socketlist(); System.out.println("Socket list loaded."); } catch(IOException ioe) { ioe.printStackTrace(); } int i,n=listenlist.size(); if (n==0) { System.err.println("No ports to redirect!!"); return false; } for (i=0;i c.daemonStarted(); System.out.println("IPRedirectDaemon started."); System.out.flush(); return true; } Object waitobject=new Object(); boolean running; public void main() { running=true; while(running) { synchronized(waitobject) { try { waitobject.wait(2000); } catch(InterruptedException ie) { ie.printStackTrace(System.err); } } Date rightnow=new Date(); System.out.println("now="+rightnow.toString()); System.out.flush(); } } public boolean tryToStop(JDaemonController c) { // first tell main it can exit System.out.println("IPRedirectDaemon: got signal to stop..."); synchronized(waitobject) { running=false; waitobject.notify(); } // now stop all of the socket listeners int i,n=listenlist.size(); c.daemonStopping(0, 700); for (i=0;i public boolean tryToPause(JDaemonController c) { return false; } public boolean tryToContinue(JDaemonController c) { return false; } public boolean tryToShutdown(JDaemonController c) { c.daemonNothingInteresting(); return true; } public boolean tryToGetStatus(JDaemonController c) { c.daemonNothingInteresting(); return true; } public static void main(String[] args) { try { IPRedirectDaemon redirector=new IPRedirectDaemon(); redirector.launchStandalone(); } catch(Exception ex) { ex.printStackTrace(System.err); System.err.flush(); System.exit(1); } } private void launchStandalone() throws Exception { File propsfile=new File(System.getProperty("user.dir"),"javaservice.properties"); System.out.println("adding "+propsfile+" to system properties"); Properties systemprops=System.getProperties(); FileInputStream fis=new FileInputStream(propsfile); systemprops.load(fis); fis.close(); load_socketlist(); if (listenlist.size()==0) throw new Exception("No ports to redirect!"); Enumeration enum=listenlist.elements(); while(enum.hasMoreElements()) { SocketListen sl=(SocketListen)enum.nextElement(); sl.start(); } } }Back to Article
http://www.drdobbs.com/jvm/how-can-you-establish-a-network-connecti/184404324
CC-MAIN-2015-11
en
refinedweb
#include <hallo.h> * Andreas Metzler [Fri, Jul 23 2004, 09:47:04AM]: > > Sure. But for Sarge, we did have the goal of a new installer and > > that took a long time. Right now, I don't seem to recall any equally > > big release goals for sarge+1 > [...] > > The obvious big blocker is the non-free "data|doc" issue. *Personally* How "obvious" is that? I wish the RM would finally say a word about all these assumptions - I don't like the current situation: first the FTP masters decided to ignore everyone (except for the few rumors about people that managed to contact them somehow), and now we speculate about the issues that the RM will finally decide (and he normally decides without considering other opinions, IIRC). Regards, Eduard. -- Once upon a time there was, and is no more, a pink teddy bear. He ate the milk and drank the bread, and when he died, he was dead.
https://lists.debian.org/debian-devel/2004/07/msg01728.html
CC-MAIN-2015-11
en
refinedweb
{-# LANGUAGE Trustworthy #-} {-# LANGUAGE CPP, NoImplicitPrelude, MagicHash #-} ----------------------------------------------------------------------------- -- | -- Module : Data.List -- Copyright : (c) The University of Glasgow 2001 -- License : BSD-style (see the file libraries/base/LICENSE) -- -- Maintainer : libraries@haskell.org -- Stability : stable -- Portability : portable -- -- Operations on lists. -- ----------------------------------------------------------------------------- module Data.List ( -- * Basic functions (++) , head , last , tail , init ,.Char ( isSpace ) findIndices p ls = loop 0# ls where loop _ [] = [] loop n (x:xs) | p x = I# n : loop (n +# 1#) xs | otherwise = loop (n +# 1#) xs . -- Both lists must be finite. isSuffixOf :: (Eq a) => [a] -> [a] -> Bool isSuffixOf x y = reverse x `isPrefixOf` reverse] #ifdef USE_REPORT_PRELUDE nub = nubBy (==) #else -- stolen from HBC nub l = nub' l [] -- ' where nub' [] _ = [] -- ' nub' (x:xs) ls -- ' | x `elem` ls = nub' xs ls -- ' | otherwise = x : nub' xs (x:ls) -- ' #endif -- | nubBy eq l = nubBy' l [] where nubBy' [] _ = [] nubBy' (y:ys) xs | elem_by eq y xs = nubBy' ys xs | otherwise = y : nubBy' ys (y:xs) -- Not exported: -- Note that we keep the call to `eq` with arguments in the -- same order as in the reference -- | The 'intersect' function takes the list intersection of two lists. -- For example, -- -- > [1,2,3,4] `intersect` [2,4,6,8] == [2,4] -- -- If the first list contains duplicates, so will the result. -- -- > [1,2,2,3,4] `intersect` [6,4,4,2] == [2,2,4] -- -- It is a special case of 'intersectBy', which allows the programmer to -- supply their own equality test. mapAccumL _ s [] = (s, []) mapAccumL f s (x:xs) = (s'',y:ys) where (s', y ) = f s x (s'',ys) = mapAccumL f s' xs -- | "List.genericIndex: negative argument." genericIndex _ _ = error _|_ = [] : _|_@ inits :: [a] -> [[a]] xs = xs : case xs of [] -> [] _ : xs' -> tails xs' -- | cmp rge r) qpart cmp x (y:ys) rlt rge r = case cmp x y of GT -> qpart cmp x ys (y:rlt) rge r _ -> qpart cmp x ys rlt (y:rge) r -- rqsort is as qsort but anti-stable, i.e. reverses equal elements cmp rgt r) rqpart cmp x (y:ys) rle rgt r = case cmp y x of GT -> rqpart cmp x ys rle (y:rgt) r _ -> rqpart cmp x ys (y:rle) rgt r -} #endif /* USE_REPORT_PRELUDE */ -- |] -- -- | 'foldl1' is a variant of 'foldl' that has no starting value argument, -- and thus must be applied to non-empty lists. foldl1 :: (a -> a -> a) -> [a] -> a foldl1 f (x:xs) = foldl f x xs foldl1 _ [] = errorEmptyList "foldl1" -- | A strict version of 'foldl1' foldl1' :: (a -> a -> a) -> [a] -> a foldl1' f (x:xs) = foldl' f x xs foldl1' _ [] = errorEmptyList "foldl1'" -- ----------------------------------------------------------------------------- -- List sum and product {-# -- ----------------------------------------------------------------------------- -- Functions on strings -- | 'lines' breaks a string up into a list of strings at newline -- characters. The resulting strings do not contain newlines. #endif
http://hackage.haskell.org/package/base-4.7.0.0/docs/src/Data-List.html
CC-MAIN-2015-11
en
refinedweb
Description
Returns the result type of remove_if, given the input sequence and unary MPL Lambda Expression predicate types.

Synopsis
template< typename Sequence, typename Pred >
struct remove_if
{
    typedef unspecified type;
};

Expression Semantics
result_of::remove_if<Sequence, Pred>::type

Return type: A model of Forward Sequence.
Semantics: Returns a sequence containing the elements of Sequence for which Pred evaluates to boost::mpl::false_.

Complexity
Constant.

Header
#include <boost/fusion/algorithm/transformation/remove_if.hpp>
#include <boost/fusion/include/remove_if.hpp>
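A small usage sketch, not taken from the Boost documentation; the element types and the is_integral predicate are arbitrary choices for illustration, and printing relies on Fusion's io header:

// Keeps only the elements for which the predicate evaluates to mpl::false_,
// i.e. the non-integral ones.
#include <boost/fusion/include/vector.hpp>
#include <boost/fusion/include/remove_if.hpp>
#include <boost/fusion/include/io.hpp>
#include <boost/mpl/placeholders.hpp>
#include <boost/type_traits/is_integral.hpp>
#include <iostream>

int main()
{
    namespace fusion = boost::fusion;
    using boost::mpl::_;

    fusion::vector<int, double, long> v(1, 2.5, 3L);

    // The view returned here is the "unspecified" type named by the
    // result_of::remove_if metafunction above.
    std::cout << fusion::remove_if< boost::is_integral<_> >(v) << std::endl;   // prints (2.5)
}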
http://www.boost.org/doc/libs/1_43_0/libs/fusion/doc/html/fusion/algorithm/transformation/metafunctions/remove_if.html
CC-MAIN-2015-11
en
refinedweb
Financial markets can play a valuable role in addressing climate change It is not immediately obvious what role financial markets can play in addressing climate change. Climate change happens slowly and has a global impact on the physical environment, whereas financial markets react to news in fractions of a second and are almost liberated from specific physical locations. The low energy intensity of the financial sector means that reductions in greenhouse gas (GHG) emissions would have little impact on the physical operations of financial markets and institutions (unlike, for instance, their effects on electricity production or transport). Nevertheless, financial markets potentially play two important roles in the policy response to climate change (see table). First, they foster mitigation strategies—that is, the steps taken to reduce GHG emissions for a given level of economic activity—by improving the efficiency of schemes to price and reduce emissions (for example, carbon permit trading) and the allocation of capital to cleaner technologies and producers. Second, financial markets can cut the costs of adaptation—that is, how economies respond to climate change—by reallocating capital to newly productive sectors and regions and hedging weather-related risks. In recent years, markets in carbon permit trading, weather derivatives, and catastrophe (CAT) bonds have seen sharp increases in activity and innovation, which bodes well for the future. But if a basic understanding of finance is not reflected in policy design, climate change policy can suffer setbacks. Hence, recognizing how financial markets will react to climate change initiatives, and how they can best promote mitigation and adaptation, will become crucial to shaping future policy and minimizing its costs. Reducing GHG emissions On the mitigation front, a large number of countries have committed, or are likely to commit, to targets to curb GHG emissions by 2012 under the Kyoto Protocol or its successor arrangement. In addition to regulatory restrictions, this policy goal can be achieved through either emissions taxes or schemes to cap emissions and allow trading of permits. In such an environment, financial markets can reinforce commercial pressures on firms to reduce emissions. One such mechanism is the “green” investment fund. Originally part of the movement for “socially responsible” or “ethical” investment, such funds were established in the 1980s to invest only in companies working to limit the environmental damage they caused. Since then, more specialist funds have been launched that invest in companies, projects, and technologies involved in reducing GHG emissions. In fact, some recently launched equity indices comprise only shares of companies that have low GHG emissions or are investing in abatement technologies. The amounts invested in green funds are as yet too small to have a significant impact on overall equity performance. But if the post-Kyoto settlement results in a significant tax on, or price for, GHG emissions, then companies with low current emissions or investments in abatement technologies should outperform the market. Indeed, this already seems to have been anticipated by equity investors. When launched in October 2007, the 300 stocks comprising the HSBC Global Climate Change Index had outperformed the MSCI World Index by 70 percent since 2004.
More generally, as GHG emissions are taxed or rationed, to the extent that companies are unable fully to pass on these costs, the cost of capital for heavy emitters will suffer relative to their competitors. Such price signals will reallocate capacity to sectors and regions in which production, investment, and research are most profitable, given a higher price for emitting GHGs. A second mechanism is the Kyoto Protocol’s Clean Development Mechanism (CDM), which allows cheaper emissions reductions in emerging markets and low-income countries to be certified by the UN and then sold as credits to offset emissions in cap-and-trade schemes in high-income countries. Substantial funds have been raised to invest in projects to benefit from certified emissions reductions under the CDM. Credits worth €12 billion were sold into the European Union’s Emissions Trading Scheme (ETS ) in 2007, and funds dedicated to carbon reduction projects now exceed €10 billion. However, the CDM’s effectiveness is limited by slow project accreditation and concerns about both project quality and whether they make any appreciable difference to GHG emissions growth in emerging economies. A third mechanism—and the clearest example of a financial market playing a central role in climate change mitigation policy—is carbon emissions trading. Following the precedent of the U.S. market for sulphur dioxide (SO2) permits—which reduced SO2 emissions at low cost—provision for permit trading was included in the Kyoto Protocol, and trading schemes have been developed in the European Union, Australia, and the United States. Heavy EU trading The European Union ETS is the largest such market, with €9.4 billion in EU allowances traded in 2005, €22.4 billion in 2006, and €28 billion in 2007. In volume terms, trading has grown considerably since 2005 (see Chart 1). The European Union ETS began in 2005 with a trial phase, and in early 2008 it moved into Phase II—which is designed to implement the European Union’s Kyoto Treaty emissions reduction target from 2008 to 2012. Futures trading in EU allowances started in 2004, and futures and spot EU allowances are now traded on five exchanges and by seven brokers, concentrated in London. Weekly turnover has grown to more than 20 million tonnes of carbon dioxide (CO2) equivalent, roughly 70 percent of which is traded through brokers. Liquidity has improved substantially, with instantaneous trades now possible at tight bid-offer spreads. Initially, energy companies were the primary market participants, but investment banks and hedge funds have also become active traders. Such cap-and-trade schemes are intended to minimize the cost of a given level of pollution abatement by creating property rights to emit, administratively limiting the supply of permits to the target level, distributing permits (either by auction or by direct allocation), and allowing them to be traded so that emitters short of permits are forced to buy them from those that are “long” because of abatement. Theoretically, this should result in the marginal cost of abatement equaling the price of a permit within the scheme, with emissions being cut by the most cost-efficient producers—a result that is equivalent to an optimal GHG emissions tax (see “Paying for Climate Change” in this issue). Has the European Union ETS proved successful? A liquid market for carbon has been created whose price has reflected changing market fundamentals. The significant price of emissions permits has generated some incentives toward abatement. 
Nevertheless, some lessons have been learned. First, price volatility has been higher than necessary. Most notably, permit prices in April 2006 dropped sharply because of rumors and selective publication of information by some EU members, indicating that permits had been overallocated in Phase I (see Chart 1). Subsequent confirmation that the scheme as a whole was net “long” resulted in the collapse of the Phase I price to close to zero. Allowing unused Phase I permits to be banked for use in Phase II would have limited price sensitivity and reduced reputational damage to the scheme. In addition, more frequent and careful release of market-sensitive data would have reduced unnecessary volatility and increased confidence in price reliability. Second, so far the European Union ETS has fostered trading of EU allowances with little impact on long-term investment. When the price of EU allowances was at the higher end of its range, some energy companies reportedly switched marginal production from dirtier coal to cleaner, gas-fired power stations. Some producers also say that a significant price for carbon is encouraging energy-saving investment. However, attention has focused on buying credits from outside the EU scheme (principally from China), where abatement costs are substantially lower. In addition, Phase II of the scheme is insufficiently long lived to provide credible incentives for investment in cleaner energy technologies. Consequently, the fall in EU carbon intensity has slowed, despite the ETS , and recent performance has been worse than in the United States. These lessons have prompted a comparison with the prerequisites for successful emissions trading and those for credible monetary policy. For a cap-and-trade scheme to develop long-term credibility, authority should be delegated to an independent central bank–type institution that is given a politically driven target to abate emissions at the lowest cost. This institution would be charged with the transparent and careful release of data, enforcement of long-term property rights, and discretion to change bankability and safety valve provisions to keep the price of permits within a set range to achieve its goal. Adapting to climate change On the adaptation front, financial markets can help to reduce the costs of climate change in several ways. First, markets should generate price signals to reallocate capital to newly productive sectors and regions. By shifting investment to sectors and countries with higher rates of return (for example, water and agricultural commodities), the costs of adaptation would be reduced below those that would arise from an inflexible capital stock. For instance, climate change is likely to change the dispersion and intensity of rainfall, leading to greater conservation investment in newly arid regions and in crops that use less water. The recent outperformance of companies specializing in water purification and distribution suggests that such factors are beginning to be reflected in equity prices. But perhaps the clearest way in which financial markets can help with adaptation to climate change is through the increased ability to trade and hedge weather-related risk, which, some meteorologists believe, will increase as a result of climate change. Weather derivatives offer producers whose revenue is vulnerable to short-term fluctuations in temperature or rainfall a way to hedge that exposure. 
Exchange-traded weather derivatives focus primarily on the number of days that are hotter or colder than the seasonal average within a defined future period. For instance, if there are more cold days than average over the contract period, those that have bought the “cooling degree day” future will enjoy a payout proportionate to the excess number of cold days. Futures enjoy low transaction costs and often relatively high liquidity. However, the parameter used to determine the futures contract payout may not be correlated exactly with a firm’s actual losses if extreme weather occurs. Hence, trading such derivatives often provides only an approximate hedge for firms’ weather-related exposures. After a slow start in the late 1990s, the exchange-traded weather derivatives and insurance markets have grown strongly in recent years (see Chart 2), with a reported turnover of weather contracts exceeding $19 billion in 2006–07, from $4–5 billion in 2001–04. Exchange-traded contracts have focused primarily on short-term trading of temperature in selected U.S. and European cities, with liquidity now concentrating in near-term contracts as hedge funds and investment banks take a larger share of turnover. Weather derivatives are complemented by weather swaps and insurance contracts that hedge adverse weather and agricultural outcomes. For instance, insurance contracts are being sold that pay out if temperature or rainfall in a specified area exceeds the seasonal average by a sufficient margin. Governments in some lower-income countries (for example, India and Mongolia) are offering crop and livestock insurance as a way to protect their most vulnerable farmers. Ethiopia pioneered drought insurance in 2006. Governments can assist in developing weather derivatives and insurance by providing reliable and independent data on weather patterns. These data enable market participants to model weather risk at a particular location with greater accuracy and so offer a lower price for insurance. Similarly, neutral tax, legal recognition, and regulatory treatment of weather derivatives and insurance are necessary to ensure that artificial barriers to the market do not arise unintentionally. Given that climate change is predicted to produce more extreme weather events, CAT bonds offer a new way for financial markets to disperse catastrophic weather risk (Hofman, 2007). At their simplest, CAT bonds entail the proceeds of the bond issue being held in an escrow account and surrendered to the issuer if a parameter(s) measuring an extreme natural catastrophe, such as a hurricane or an earthquake, breaches a specified trigger level. For this insurance, bond investors are paid a yield premium, and the principal is returned if the trigger is not breached by the time the bond matures. The results are potentially profound for the continuing supply (or extension) of weather catastrophe insurance and the protection of vulnerable sectors, such as agriculture and coastal property. They offer insurers many more flexible ways to access the global capital markets to undertake catastrophe risk, thus allowing insurance to continue to be provided despite climate change. CAT bonds were devised in the early 1990s, following the large payouts resulting from Hurricane Andrew in 1992, to enable reinsurance companies to divest themselves of extreme CAT risk and economize on capital. Until 2005, CAT bond issuance was less than $2 billion a year. 
But after Hurricane Katrina depleted industry capital, issuance has risen dramatically, with $4.9 billion in 2006 and $7.7 billion in 2007 (see Chart 3). Demand for CAT bonds has been strong from hedge funds and institutional investors looking for higher yields uncorrelated with other bond markets. Although CAT bonds and other innovative ways of raising capital for weather-related reinsurance constitute only about 10–15 percent of global reinsurance capacity for extreme weather risk, their establishment as a global asset class should ensure that, if weather catastrophes do deplete the capital of the reinsurance industry in the future, it can be replenished more rapidly through the global capital markets. Premiums for weather risk insurance are already more stable following extreme weather events, and future insurability should be maintained at a reasonable cost, even if climate change results in their greater intensity.

How can governments respond to maintain insurability of weather-related risks despite climate change? First, authorities can restrict development in areas vulnerable to flooding or wind damage. Second, they can invest in flood defenses or water conservation measures to help private insurers continue to provide flood or drought coverage at a reasonable cost. Third, governments should refrain from subsidizing or capping flood or hurricane insurance premiums, because doing so encourages risky behavior and prevents the private insurance market from generating price signals to smooth adaptation to climate change. Higher premiums, or the withdrawal of insurance coverage, will provide incentives to curtail risky behavior and exposure to extreme weather. Permitting vulnerable property developments can make weather catastrophes an unnecessarily large fiscal threat—perhaps even for high-income countries.

Governments could consider hedging their fiscal exposures to catastrophes by directly issuing CAT bonds (as Mexico did in 2006 to provide earthquake insurance) or by participating in collective schemes to pool their weather-related risks, such as of hurricanes (as 16 Caribbean countries did in conjunction with the World Bank in 2007 through the Caribbean Risk Insurance Facility—a $120 million regional disaster insurance facility). Demand for new CAT risks for diversification is exceptionally strong in the CAT bond market at present, so the insurance offered for new risks should be of relatively good value. Rating agencies could consider raising the credit ratings of low-income sovereign borrowers vulnerable to weather-related catastrophes if they cap their extreme fiscal risks through insurance. As with weather derivatives, providing longer runs of reliable and independent weather data enables insurance modelers to project weather patterns with greater confidence, thereby reducing the cost.

Benefiting from innovations

It seems likely that financial markets will play an integral role in climate change mitigation and adaptation in the future. Securities markets will reward those companies that successfully develop or adopt cleaner technologies. Cap-and-trade seems to be becoming the mitigation policy of choice in high-income countries, in which case the global market in permits for GHG emissions is likely to become the largest global commodity market.
Although weather derivatives and CAT bonds are not a panacea—as yet, only hedges against weather and catastrophe risks are available out to five years—recent rapid innovation and deepening in these markets prompt optimism that they will continue to innovate and further help adaptation to climate change. The growth of hedge funds and the appetite for risks that are uncorrelated with other financial markets mean that there is likely to be continuing demand for financial instruments that pay investors a premium to assume weather risk despite climate change. The ingredients for innovation exist, and governments should consider ways in which they can foster and take advantage of such innovations.
http://www.imf.org/external/pubs/ft/fandd/2008/03/mills.htm
Also, food for thought, when (hopefully not if) the VelocityResponseWriter is moved into core, we can deprecate stats.jsp and skin the output of this request handler for a similar pleasant view like stats.jsp+client-side xsl does now.

Any thoughts on the naming of this beast? How about SysInfoRequestHandler - bonus: SIRH evokes RFK's assassin

"stats" is a bit overloaded (StatsComponent). as is "system" (SystemInfoHandler).

I swear when I read this, before I suggested SIRH, you had written "SystemStatsHandler" instead of "SystemInfoHandler". Not sure how you changed it without a red "edited" annotation in the header for your comment.... Et tu, Atlassian? Anyway, pathological paranoia aside, SIRH is too close to SystemInfoHandler - I hereby begin the process of formally withdrawing it from consideration.

Ok, done.

stats.xsl creates a title prefix "Solr Statistics" - how about SolrStatsRequestHandler?

+1 on SolrStatsRequestHandler

You might want to consider either omitting or making optional the Lucene FieldCache stats; they can often be very slow to generate (see ). One use case for this request handler that I can see is high-frequency (every few seconds) monitoring as part of performance testing, for which a fast response is pretty mandatory.

> Any thoughts on the naming of this beast?

SystemInfoHandler sounds good. This would probably also be a good time to retire "registry.jsp" ... all we need to do is add a few more pieces of "system info" to this handler (and add some param options to disable the "stats" part of the output)

> Also, food for thought, when (hopefully not if) the VelocityResponseWriter is moved into core, we can deprecate stats.jsp and skin the output of this request handler for a similar pleasant view like stats.jsp+client-side xsl does now.

Even if/when VelocityResponseWriter is in the core, i'd still rather just rely on client-side XSLT for this to reduce the number of things that could potentially get misconfigured and then confuse people why the page doesn't look right ... the XmlResponseWriter has always supported a "stylesheet" param that (while not generally useful to most people) lets you easily reference any style sheet that can be served out of the admin directory ... all we really need is an updated .xsl file to translate the standard XML format into the old-style stats view.

Some updates to Erik's previous version...
- adds everything from registry.jsp: lucene/solr version info, source/docs info for each object
- forcibly disable HTTP Caching
- adds params to control which objects are listed
- (multivalued) "cat" param restricts category names (default is all)
- (multivalued) "key" param restricts object keys (default is all)
- adds (boolean) "stats" param to control if stats are output for each object; per-field style override can be used to override per object key
- refactored the old nested looping that stats.jsp did over every object and every category into a single pass
- switch all HashMaps to NamedLists or SimpleOrderedMaps to preserve predictable ordering

Examples...
- ?cat=CACHE - return info about caches, but nothing else (stats disabled by default)
- ?stats=true&cat=CACHE - return info and stats about caches, but nothing else
- ?stats=true&f.fieldCache.stats=false - info about everything, stats for everything except fieldCache
- ?key=fieldCache&stats=true - return info and stats for fieldCache, but nothing else

I left the class name alone, but i vote for "SystemInfoRequestHandler" with a default registration of "/admin/info"

Whoops .. i botched the HTTP Caching prevention in the last version

Committed revision 917812.

I went ahead and committed the most recent attachment under the name "SystemInfoRequestHandler" with slightly generalized javadocs. Leaving the issue open so we make sure to settle the remaining issues before we release...
- decide if we want to change the name
- add default registration as part of the AdminRequestHandler (ie: /admin/info ?)
- add some docs (didn't want to make a wiki page until we're certain of the name)
- decide if we want to modify the response structure (should all of the top level info be encapsulated in a container?)

Thanks Hoss for committing! naming: I'm fine with how it is, but fine if the name changes too, and +1 to adding the default registration.

Correcting Fix Version based on CHANGES.txt, see this thread for more details....

Please add an option that just lists the catalog of MBeans.

> Please add an option that just lists the catalog of MBeans.

It's already there – if stats=false it just returns the list of SolrInfoMBeans from the registry (like registry.jsp)

what do you think of the proposed name change & path: SolrInfoMBeanHandler & /admin/mbeans ?

- rename to o.a.s.handler.admin.SolrInfoMBeanHandler
- add default registration as part of the AdminRequestHandler /admin/mbeans
- eliminate duplication of functionality w/SystemInfoHandler
- "docs" are left in explicit order returned by plugin
- if "cats" param is used, categories are returned in that order

Committed revision 953886. ... trunk
Committed revision 953887. ... branch 3x

re: naming. If you're someone like me who is becoming fairly familiar with using solr, but not with the solr code – then "SolrInfoMBeanHandler" or "admin/mbean" doesn't mean anything to me, and is kind of confusing. I want to get info on my indexes and caches -- it would be very non-obvious to me (if i hadn't read this ticket) that "MBean" has anything to do with this, since I don't know what an MBean is – and probably shouldn't have to in order to use solr through its APIs. So it seems to me that a name based on the functions provided (not the underlying internal implementation) is preferable. But i recognize the namespace conflict problems; there is so much stuff in Solr already (some of it deprecated or soon to be deprecated or removed, some of it not) that it's hard to find a non-conflicting name.

Even if the underlying class is SolrInfoMBeanHandler, would it be less (or more) confusing for the path to be /admin/info still? That might be less confusing, as someone like me would still see /admin/info in the config and think, aha, that might be what I want. Or the lack of consistency might just be more confusing in the end. I don't know what the current SystemInfoHandler does, what's the difference between that and this new one? There might be hints to naming in that. If the new one does everything the old one does, perhaps call it NewSystemInfoHandler, but still register it at /admin/info, with the other one being deprecated? Just brainstorming. Or rename the other one to OldSystemInfoHandler.
Bulk close for 3.1.0 release

The /admin/stats handler is not registered by default, nor is it included in the example config. I had to add <requestHandler name="/admin/stats" class="org.apache.solr.handler.admin.SolrInfoMBeanHandler" /> to my solrconfig to get it working.

Jan: as stated above, the registration i picked was /admin/mbeans - "stats" is too specific since the component can be used for other purposes than getting stats. it's also not a "default" handler – it's registered if you register the AdminHandler

Jonathan: i overlooked your comment until now. the existing SystemInfoHandler isn't deprecated – it's still very useful and provides information about the entire "system" solr is running in (the jvm, the os, etc...)

I'll commit this in the near future.
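For readers who want to poke at the handler from code rather than a browser, here is a rough SolrJ sketch (not taken from the issue itself). The handler path and the cat/stats parameters follow the examples discussed above; the client setup, the server URL, and the use of the qt parameter to route the request to a registered handler are assumptions about a Solr 3.x-era deployment, not something confirmed by this thread.

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.params.ModifiableSolrParams;

    public class MBeanStatsProbe {
        public static void main(String[] args) throws Exception {
            // Assumes a local Solr instance at the default example port (assumption).
            SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

            ModifiableSolrParams params = new ModifiableSolrParams();
            // Route the request to the registered admin handler path discussed above (assumption).
            params.set("qt", "/admin/mbeans");
            // Restrict the output to cache objects and include their statistics.
            params.set("cat", "CACHE");
            params.set("stats", "true");

            QueryResponse response = server.query(params);
            // The handler returns the matching SolrInfoMBeans grouped by category.
            System.out.println(response.getResponse());
        }
    }

Polling this in a loop every few seconds is the kind of lightweight monitoring use case mentioned earlier, which is also why keeping the (potentially slow) FieldCache stats optional matters.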
https://issues.apache.org/jira/browse/SOLR-1750?attachmentOrder=desc
Language.Haskell.TH.Syntax

Description

Abstract syntax definitions for Template Haskell.

Synopsis

- class (Monad m, Applicative m) => Quasi m where
    qNewName :: String -> m Name
    qReport :: Bool -> String -> m ()
    qRecover :: m a -> m a -> m a
    qLookupName :: Bool -> String -> m (Maybe Name)
    qReify :: Name -> m Info
    qReifyInstances :: Name -> [Type] -> m [Dec]
    qReifyRoles :: Name -> m [Role]
    qReifyAnnotations :: Data a => AnnLookup -> m [a]
    qReifyModule :: Module -> m ModuleInfo
    qLocation :: m Loc
    qRunIO :: IO a -> m a
    qAddDependentFile :: FilePath -> m ()
    qAddTopDecls :: [Dec] -> m ()
    qAddModFinalizer :: Q () -> m ()
    qGetQ :: Typeable a => m (Maybe a)
    qPutQ :: Typeable a => a -> m ()
- reifyInstances :: Name -> [Type] -> Q [InstanceDec]
- reifyRoles :: Name -> Q [Role]
- reifyAnnotations :: Data a => AnnLookup -> Q [a]
- reifyModule :: Module -> Q ModuleInfo
- isInstance :: Name -> [Type] -> Q Bool
- location :: Q Loc
- runIO :: IO a -> Q a
- addDependentFile :: FilePath -> Q ()
- addTopDecls :: [Dec] -> Q ()
- addModFinalizer :: Q () -> Q ()
- getQ :: Typeable a => Q (Maybe a)
- putQ :: Typeable a => a -> Q ()
- returnQ :: a -> Q a
- bindQ :: Q a -> (a -> Q b) -> Q b
- sequenceQ :: [Q a] -> Q [a]
- class Lift t where
- liftString :: String -> Q Exp
- trueName :: Name
- falseName :: Name
- nothingName :: Name
- justName :: Name
- leftName :: Name
- rightName :: Name
- data Dec, whose constructors include ClosedTypeFamilyD Name [TyVarBndr] (Maybe Kind) [TySynEqn] and RoleAnnotD Name [Role]
- data TySynEqn = TySynEqn [Type] Type
- data FunDep = FunDep [Name] [Name]
- data FamFlavour
- data Foreign
- data Callconv
- data Safety = Unsafe | Safe | Interruptible
- data Pragma
- data Inline
- data RuleMatch
- data Phases
- data RuleBndr
- data AnnTarget
- data Role
- data AnnLookup
- type Kind = Type
- cmpEq :: Ordering -> Bool
- thenCmp :: Ordering -> Ordering -> Ordering

Documentation

class (Monad m, Applicative m) => Quasi m where

lookupValueName :: String -> Q (Maybe Name)
    Look up the given name in the (value namespace of the) current splice's scope. See "Language.Haskell.TH.Syntax#namelookup".

data ModuleInfo
    Obtained from reifyModule in the Q Monad.

type ParentName = Name

UInfixE, UInfixP, ParensE, and ParensP are of use for parsing expressions like (a + b * c) + d * e; InfixE and InfixP expressions will never contain UInfixE, UInfixP, ParensE, or ParensP constructors.

data TySynEqn
    One equation of a type family instance or closed type family. The arguments are the left-hand-side type patterns and the right-hand-side result.

data FamFlavour

type StrictType = (Strict, Type)

type VarStrictType = (Name, Strict, Type)
http://hackage.haskell.org/package/template-haskell-2.9.0.0/docs/Language-Haskell-TH-Syntax.html
Bjorn Helgaas wrote:
> Allow the default i8042 register locations to be changed at run-time.
> This is a prelude to adding discovery via the ACPI namespace.
>
> Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
>
> +static unsigned long i8042_command_reg = I8042_COMMAND_REG;
> +static unsigned long i8042_status_reg = I8042_STATUS_REG;
> +static unsigned long i8042_data_reg = I8042_DATA_REG;
> +

Hi Bjorn,

This will not work as these macros are not constants, see i8042-*io.h and i8042_platform_init(). What you need to do is add ACPI hooks to i8042_platform_init for i386/ia64/etc.

--
Dmitry
http://lkml.org/lkml/2004/9/2/188
Victor Duvanenko is an IC design engineer. He has worked on several graphics chips at Intel. Presently he is a consultant at Silicon Engineering Inc. in Santa Cruz. He can be reached at 1040 S. Winchester Blvd., #11, San Jose, CA 95128.

Image compilation is a unique method of image compression whereby an image is transformed into a graphics coprocessor instruction stream. Even a simple implementation of the image compilation technique has led to a 2.5:1 compression ratio with a corresponding reduction in the image reconstruction time.

At present, resolutions for graphics displays vary from 320 x 200 on the very low end to 2K x 2K on the high end, with 640 x 480 being fairly common. A 640 x 480 display consists of 307,200 pixels. If the depth is 8 bits per pixel (a byte for each dot on the screen), then it takes 307,200 bytes to define a full-screen image. This is a considerable amount of data to move or manipulate interactively. Considering that the resolution of raster displays is increasing every year, the job of graphic data manipulation will not get any easier.

To put into perspective what it means to manipulate large amounts of data, consider that to transmit 307,200 bytes of data would take 43 minutes over a 1200-baud modem and about five minutes over a 9600-baud modem. At 9600 baud, you could not manipulate an image interactively. While you could manipulate text, it would be impractical to manipulate high-resolution bitmaps. With high-resolution color scanners rapidly becoming an inexpensive reality, interactive image manipulation will be a necessity.

You can reduce transmission time, and so enable more efficient graphics data manipulation, by compressing the data or by describing images with higher-level instructions that a dedicated graphics engine can execute. One such graphics engine is the Intel 82786, a chip that contains many graphics instructions/primitives embedded in hardware. The 82786 also provides a 40-Mbyte/second bandwidth to graphics memory (a power that will be discussed later). The programs described in this article are designed to illustrate the specific advantages provided by a dedicated graphics coprocessor like the 82786.

The image compilation code consists of two stand-alone programs: one program for image compression and the other for image reconstruction. The compression is performed entirely by the microprocessor, which converts (compiles) an image/bitmap into a series of higher-level graphics instructions. The graphics program, not the bitmap, is stored on the hard disk. To reconstruct an image, the CPU loads the graphics program into graphics memory. The graphics coprocessor then executes the code and the bitmap is perfectly restored. Parallelism can be harvested for improved performance. The graphics program can be split into subprograms. As the CPU loads one subprogram, the graphics coprocessor can be executing another, thus reconstructing an image piece-by-piece in parallel.

If this method of data compression is so great, you might ask, why isn't it being used more widely? Several reasons can be given for this. For one thing, inexpensive dedicated graphics hardware is relatively new. People are just beginning to appreciate the power and flexibility that these machines offer. The cost of these machines may still be prohibitive for some applications.
Another reason is that compression takes time--the bigger and more complex the bitmap, the longer it takes. In some cases, this outweighs the gain. Yet another reason is that it takes time, money, and programming skills to refine and solidify compression algorithms. (It took two weeks to write the software in Listing One and Listing Two.) Finally, data compression does not work in all cases. Any general-purpose compression method must make some files longer (otherwise you could continually apply the method to produce an arbitrarily small file).<fn1> However, you could always apply the compression method, then see if the file gets smaller. If it does not get smaller, you could then stay with the original. Therefore, the resulting file is always less than or equal in size to the original file.

The Compression Program

The compression program shown in Listing One expects an 8-bit-per-pixel 640 x 480 bitmap to reside in graphics memory (graphics memory is mapped into part of system memory space). The only necessity is graphics memory accessibility to the CPU. The easiest (and most obvious) way to compile an image is to scan the entire bitmap for a single color and encode each run of that color as a scan line. This procedure would be repeated for all possible colors (in this case 0 through 255). The bitmap will be scanned as many times as there are possible colors. This could get out of hand with 2K x 2K 32-bit-per-pixel bitmaps. Even with 8 bits per pixel, scanning a 640 x 480 bitmap 256 times takes over an hour. The obvious way is usually not the best way.

The next most obvious approach is to process only the colors that the bitmap contains. This requires an overhead of a single pass through the bitmap (one would have an array of flags that would be set once a color was detected). This helps, but more can be done. Because the x and y coordinates are always known during the image scan, it is easy to find the maximum and minimum x and y coordinates for each color. A smaller region could then be scanned during the compilation stage.

This is exactly what the find_all_colors procedure of the compression program does. It scans through the bitmap once and establishes the region of color existence. A special structure, color_struct, handles this. Each element in the structure contains the maximum and minimum x and y coordinates and the number of times that the color is detected during the scan. To make life easier, a type color_t of color_struct is defined.

The next stage is to compile each of the regions of color, via the extract_scan_lines procedure. Adding some initialization instructions is appropriate since you are making a graphics coprocessor instruction stream. Place a define color and a scan lines instruction (see sidebar for a discussion of the scan lines instruction) into a buffer. The array of scan lines will follow. Then proceed one scan line at a time, starting at the minimum y and x coordinate for this color. The run-length encoding is quite simple: as the desired color is found, a flag is set. If the next pixel is of the same color, the count is incremented. If it is not of the same color, the end of the run has been reached and its length is stored in the buffer. This goes on, line by line of the image, until the maximum y and x have been reached. This procedure is repeated for each color in the image.

The graphics coprocessor instruction stream is stored in an array called buff. This array is several disk blocks in size. When it is filled, an end instruction is added and the array is written to a file on hard disk.
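The single-pass bounding-window idea is simple enough to restate outside of C. What follows is a rough Java sketch of the same bookkeeping, not the article's actual find_all_colors code: the class and field names are invented for illustration, and the bitmap is assumed to be a plain byte array of 8-bit pixel values rather than 82786 graphics memory.

    // Per-color bookkeeping, mirroring the article's color_struct idea.
    class ColorInfo {
        long count;                  // how many pixels of this color were seen
        int minX, minY, maxX, maxY;  // bounding window of the color's occurrences
    }

    class ColorScan {
        // One pass over an 8-bit-per-pixel bitmap; returns a 256-entry table.
        static ColorInfo[] findAllColors(byte[] bitmap, int width, int height) {
            ColorInfo[] colors = new ColorInfo[256];
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    int c = bitmap[y * width + x] & 0xFF;
                    ColorInfo info = colors[c];
                    if (info == null) {
                        // First occurrence: the window collapses to this single pixel.
                        info = new ColorInfo();
                        info.minX = info.maxX = x;
                        info.minY = info.maxY = y;
                        colors[c] = info;
                    } else {
                        // Grow the window to include this pixel.
                        if (x < info.minX) info.minX = x;
                        if (x > info.maxX) info.maxX = x;
                        if (y > info.maxY) info.maxY = y;  // minY was set on first sighting
                    }
                    info.count++;
                }
            }
            return colors;  // null entries mean the color never appears
        }
    }

Later stages then only need to walk the sub-rectangle bounded by the minimum and maximum coordinates for each color that actually appears, which is the saving the article's DEBUG output reports.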
The Loader Program

The loader program, found in Listing Two, expects a file to be in a specific format, which "compact" conforms to. This format consists of packets of data that are preceded by a header. The header describes the address in graphics memory where the data is to be placed and how many bytes of data are to be placed.

The loader program performs some initialization tasks before loading the graphics coprocessor instructions into graphics memory. It waits for the graphics coprocessor to finish any instructions that may still be executing. It then loads the first packet of graphics instructions (scan lines) and directs the graphics coprocessor to execute them. While the graphics coprocessor is executing these instructions, the main processor gets another packet from the disk. This process continues until all packets have been loaded. The loader takes a time stamp before and after the loading process and reports the total time that it took to load and reconstruct the bitmap.

Benchmark Results

To further illustrate some of the performance advantages that a graphics coprocessor provides, Table 1 lists the test results. These tests were run on Intel's Xenix 311 system (which uses a 6-MHz 80286 microprocessor) with an 82786 add-in board (20 MHz with 1 Mbyte of graphics memory). All bitmaps were 640 x 480 at 8 bits per pixel. Bitmaps 1 - 3 are of Mandelbrot fractals. Bitmap 1, which consists of a straight uncompressed bitmap, is the control. Bitmap 4, a solid, single-color bitmap, illustrates the best case.

Table 1: Test results of image compilation routines

              Compression   Compression   Reconstruction   File    Compression
              Method        Time          Time             Size    Ratio
    Bitmap 1  none          n/a           3.5 sec          307K    1:1
    Bitmap 2  scan lines    612 sec       1.3 sec          120K    2.5:1
    Bitmap 3  scan lines    613 sec       1.3 sec          121K    2.5:1
    Bitmap 4  scan lines    11 sec        0.02 sec         2.9K    105:1

After reviewing the test results shown in Table 1, you may think that 10 minutes is an excessive price to pay for a 2 1/2 times reduction in size and reconstruction time. It is for some applications. For other applications, it allows them to place 2 1/2 times more information on the storage medium, to present 2 1/2 times the amount of data to the user in a given amount of time, or to retrieve images 2 1/2 times faster. This could make or break an application. Effectively, an 8-bits-per-pixel bitmap has been reduced down to about 3 bits per pixel, without losing any information.

Some of the routines developed are useful for digital image processing. For example, the find_all_colors routine generates a profile of the bitmap by counting the number of times each color appears in the bitmap. This could be turned into a probability density by dividing that count by the total number of pixels in the bitmap. This could be taken one step further. Based on the probability density of a color, you could decide that the color is rare in a bitmap and is not worth processing. This could reduce the compression time as well as the file size, at a cost of losing some information. Using this method, you could minimize the picture content that is lost. In any case, the compaction algorithm speed could be improved (about three times by my estimates). If you used a 20-MHz 80386, the compression time would dip to less than a minute.

Conclusion

This image compilation technique offers infinite avenues for further exploration. This article has explored only a single instruction. Other instructions (bit_blit, for example) offer other possibilities.
The graphics coprocessors that are presently available have a great variety of instructions, and this method of image compilation can lead to an adaptive image compression. It is possible to predict whether an image can be compressed by using run-length encoding. An average length of a run can be calculated for an image. If the average run takes up more space than one scan lines array element (dx, dy, and length use 6 bytes), then run-length encoding is not worthwhile below that size. For example, at 8 bits (one byte) per pixel, the average run for an image must be greater than 6 pixels. This is the break-even point for the compression technique.

The compression ratio can be improved by removing redundancy from the compressed file. This can be accomplished by encoding repetitive scan lines sequences in the 82786 macros/subroutines. This achieves a two-dimensional compression. Another method is to reduce the number of bits used for the dx, dy, and length elements in the scan lines array by scanning through the array and determining the maximum values ever used. This would require extra processing by the CPU, which would increase encoding and reconstruction time. A differential encoding technique could also be used to encode the differences between successive dx, dy, and length values.<fn2> This also requires some extra CPU processing.

By using combinations of these techniques, you could increase compression ratios to above 10:1. The penalty of added CPU processing time may still outweigh the benefit of reduced I/O time brought by improved compression. Compression techniques are still in their infancy, and many promising techniques are emerging; for example, "Chaotic Compression" (see Computer Graphics World, November 1987) promises a 10,000:1 compression ratio.

References

2. 82786 Graphics Coprocessor User's Manual. Intel, 1987.

The Scan Lines Instruction and Run-Length Encoding

One of the most powerful 82786 instructions is scan lines, which is used to draw a series of horizontal lines. It is the fastest 82786 drawing instruction, executing at up to 2.5 million pixels per second at 8 bits per pixel (faster at lower pixel depths). This instruction is generally used for area fills, and has pattern capabilities. The scan lines instruction looks like this:

1. the instruction itself, 16 bits (0BA00H);
2. the address of the scan lines array, a 32-bit value, low word followed by the high word; and
3. the number of horizontal lines to be drawn.

The scan lines array elements have a particular format, consisting of:

1. dx, an offset from the beginning of the previous line in the x direction (16 bits, negative or positive);
2. dy, an offset from the beginning of the previous line in the y direction (16 bits, negative or positive); and
3. the length of the line (16 bits, negative or positive).

The scan lines instruction starts at the present graphics location, adds the dx and dy values of the first array element to it, and draws a line of the specified length. The graphics location is left at the beginning of the line. The new graphics location is then adjusted by the dx and dy values of the second array element. The second line is then drawn. This process is repeated until the specified number of lines have been drawn. You could thus draw a scan line at the bottom of the screen and then one at the top of the screen, all within the same scan lines array. Areas of any shape can be drawn by using the scan lines instruction, and they do not have to be contiguous.
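The dx/dy/length format just described maps directly onto run-length encoding. As a rough sketch of that mapping, here is the per-color run extraction written in Java rather than the article's C: the names are invented, the output is modeled as a simple list of (dx, dy, length) triples, length is simply the number of pixels in the run, and none of the 82786 instruction framing (define color, link, end) is included.

    import java.util.ArrayList;
    import java.util.List;

    class ScanLineEncoder {
        // One (dx, dy, length) element of a scan lines array.
        record Run(int dx, int dy, int length) {}

        // Collect the runs of a single color, row by row, as relative moves.
        static List<Run> encodeColor(byte[] bitmap, int width, int height, int color) {
            List<Run> runs = new ArrayList<>();
            int curX = 0, curY = 0;  // current graphics location, as the 82786 tracks it
            for (int y = 0; y < height; y++) {
                int x = 0;
                while (x < width) {
                    if ((bitmap[y * width + x] & 0xFF) != color) { x++; continue; }
                    int start = x;
                    while (x < width && (bitmap[y * width + x] & 0xFF) == color) x++;
                    // Offsets are relative to where the previous run started.
                    runs.add(new Run(start - curX, y - curY, x - start));
                    curX = start;
                    curY = y;
                }
            }
            return runs;
        }
    }

At 8 bits per pixel each such element costs 6 bytes in the real instruction stream, which is where the break-even figure of a 6-pixel average run mentioned above comes from.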
You can jump to any position on the screen and continue drawing at any time, all with a single array (as long as the color and the pattern can remain constant). You can break up an image into areas of constant color and use a single scan lines array to describe each of them (one for black, one for white, one for a shade of green, and so forth). It would take as many scan lines arrays as there are colors in an image to completely describe that image. The scan lines instruction closely resembles run-length encoding. Run-length encoding removes redundancy from a data file by replacing data elements with the count followed by the element itself. For example, a sequence of AAAAAA can be replaced with 6A, resulting in 3:1 compression. The same technique can be applied to images. A sequence of six black pixels can be replaced by the number 60 (where 6 is the count and 0 is the color black). For an area of constant color, specifying the color along with every run is redundant. You can specify the offset from the current graphical location followed by the run-length. This should begin to resemble the scan lines definition--dx, dy, and length. For many images, encoding areas in this fashion is an efficient way of removing redundancy, thereby compressing an image. The less space an image takes, the faster it can be transmitted or retrieved from storage. _IMAGE COMPRESSION VIA COMPILATION_ by Victor J. Duvanenko [LISTING ONE] <a name="01f9_000c"> /* Bitmap compaction program. */ /* Converts bitmaps in 82786 graphics memory to Graphics processor instruc- */ /* tions. Simulates run length encoding, causing data compression for most */ /* bitmaps. Compression of an 8 bits per pixel down to 3 bits per pixel is */ /* not uncommon. */ /* Created by Victor J. Duvanenko */ /* Usage: compact output_file_name */ /* Input: */ /* An 82786 eight bits per pixel bitmap at address of 0x10000 (this */ /* can be easily changed). The bitmap is 640 pixels wide and 480 */ /* pixels heigh (this can also be changed). This was done for the */ /* simplicity of this example. */ /* Output: */ /* Binary file in the following) define color instruction */ /* 2) scan line instruction */ /* 3) link instruction (link around the array) */ /* 4) scan line array */ /* Caution: The loader must add a 'stop' instruction to the data */ /* (a NOP instruction with GCL bit set). See loader source */ /* code for example. */ /* */ /* This program was written for the XENIX environment, but can be easily */ /* and quickly ported to the PC. Just concentrate on the ideas and not on */ /* the implementation specifics. I'll try to point out XENIX specific sec- */ /* tions. 
*/ #include<stdio.h> #include<fcntl.h> #include<sys/param.h> #include "/usr/tls786/h/tls786.h" #include "/usr/tls786/h/tv1786.h" #defineMAX_NUM_COLORS 256 /* maximum number of colors */ #define GP_BUFF_SIZE 8192 /* GP instruction buffer size */ #define BITMAP_WIDTH 640 #define TRUE 1 #define FALSE 0 #define OK TRUE /* return status for procedures */ #define PMODE 0644 /* protection mode of the output file */ #define MOVSW_ON TRUE /* enable 'move strings' in XENIX driver */ #define DEBUG FALSE /* top debug level - a fun one to turn on */ #define DEBUG_1 FALSE /* next debug level deeper */ #define DEBUG_2 FALSE /* everything you'd ever care to trace */ /* Global data buffers - used by several routines */ unsigned int gp_buff[ GP_BUFF_SIZE ]; /* GP instruction buffer */ unsigned int buff[ GP_BUFF_SIZE ]; /* temporary storage buffer */ unsigned char line_buff[ BITMAP_WIDTH ]; /* line buffer */ /* line_buff should ideally be dynamically alocated to allow any */ /* bitmap width. Left as an excersize for a C hacker. */ int p6handle; /* XENIX file descriptor for the 82786 memory */ int bm_height = 480; /* bitmap height */ int bm_width = 640; /* bitmap width */ long bm_address = 0x10000L; /* bitmap address */ /* Description of each color, histogram, plus additional fields for added */ /* dramatic performance improvement (defines a window of color existance). */ /* Enable DEBUG to see time savings that result from this technique. */ struct color_struct { long count; /* number of times a color appears in the bitmap */ int begin_y, /* scan line where a color first appears */ end_y, /* scan line where a color last appears */ begin_x, /* x position where a color first appears */ end_x; /* x position where a color last appears */ }; typedef struct color_struct color_t; /* Array containing color information about a bitmap under analysis */ color_t colors[ MAX_NUM_COLORS ]; /*---------------------------------------------------------------------*/ /* The body of the data compression program. */ /*---------------------------------------------------------------------*/ main(argc,argv) int argc; char *argv[]; { register i, index, j, num_colors; int n, value, x, y; int f1, f2, /* file descriptors */ buff_overflow; /* the following variables are for debug purposes only */ long average_begin_y, average_end_y; long average_begin_x, average_end_x; float percentage_y, percentage_x; color_t *clr_p; /* XENIX needed structures needed for movsw section of the driver */ union { struct tv1_cntrl brddata; byte rawdata[ sizeof( struct tv1_cntrl ) ]; struct io_data rdata; } regdata; /* Check the command line for proper syntax, a little */ if (argc < 2) { fprintf( stderr,"Usage is: %s output_file_name\n", argv[0] ); exit(1); } /* Open the file where compacted bitmap will be placed. If it doesn't */ /* exist create it. */ if (( f2 = open( argv[1], O_CREAT | O_WRONLY, PMODE )) == -1 ) { fprintf( stderr, "%s: Can't create %s.\n", argv[0], argv[1] ); exit(1); } /* XENIX specific functions to allow one to treat 82786 graphics memory */ /* as a file - a file descriptor gets passed to every routine that talks */ /* to the 82786. This allows a very flexible multiple 82786 environment. 
*/ p6handle = open786( MASTER_786 ); if ( p6handle == NULL ) { fprintf( stderr, "%s: Can't access 82786.\n", argv[0] ); exit(1); } #if MOVSW_ON /* XENIX specific - enable movsw in the 82786 driver */ ioctl( p6handle, TEST_SFLAG, ®data ); if ( regdata.rdata.regval == FALSE ) ioctl( p6handle, SET_SFLAG, &value ); #endif /* Find all unique colors in the bitmap file. */ num_colors = find_all_colors( colors, bm_height ); #if DEBUG /* histogram and performance improvement information of the color */ /* existance window technique. */ printf( "num_colors = %d\n", num_colors ); average_begin_y = average_end_y = 0L; average_begin_x = average_end_x = 0L; for( i = 0; i < MAX_NUM_COLORS; i++ ) { clr_p = &colors[i]; /* for denser and cleaner notation purposes */ printf( "c %4d %7ld b_y%4d e_y%4d", i, clr_p->count, clr_p->begin_y, clr_p->end_y ); printf( " b_x%5d e_x%5d\n", clr_p->begin_x, clr_p->end_x ); /* average only the existing colors */ if ( clr_p->count != 0 ) { average_begin_y += (long)clr_p->begin_y; average_end_y += (long)clr_p->end_y; average_begin_x += (long)clr_p->begin_x; average_end_x += (long)clr_p->end_x; } } printf( "\n" ); average_begin_y /= (long)num_colors; average_end_y /= (long)num_colors; printf( "average Y begin = %ld\t\taverage Y end = %ld\n", average_begin_y, average_end_y ); percentage_y = ((float)( average_begin_y ) / bm_height * 100 ); percentage_y += ((float)((long)( bm_height ) - average_end_y ) / bm_height * 100 ); printf( "percentage Y savings = %2.2f\n", percentage_y ); average_begin_x /= (long)num_colors; average_end_x /= (long)num_colors; printf( "average X begin = %ld\t\taverage X end = %ld\n", average_begin_x, average_end_x ); percentage_x = ((float)( average_begin_x ) / bm_width * 100 ); percentage_x += ((float)((long)( bm_width ) - average_end_x ) / bm_width * 100 ); printf( "percentage X savings = %2.2f\n", percentage_x ); #endif /* Relying on the loader to execute a def_bitmap instruction before */ /* loading the GP instruction list generated by this program. */ /* Convert each color in the bitmap into scan lines - one color at a time */ for( i = index = 0; i < MAX_NUM_COLORS; i++ ) { if ( colors[i].count == 0L ) continue; /* skip non-existant colors */ buff_overflow = FALSE; n = extract_scan_lines((long)(index << 1),colors, i, buff, &buff_overflow); if ( buff_overflow ) { fprintf( stderr, "GP instruction list overflow.\n" ); exit( 1 ); } /* If the newly extracted scan lines array can't fit into the GP */ /* instruction buffer, store instruction built up so far, and */ /* start filling the buffer from the begining. */ if (( index + n ) > GP_BUFF_SIZE ) { /* Flag the user if a color generates more lines than there is */ /* space in the instruction buffer. Very unlikely. */ if ( index <= 0 ) { fprintf( stderr, "Instruction list overflow.\n" ); exit( 1 ); } /* store GP instruction built up so far */ write_buffer_to_file( f2, gp_buff, index ); /* adjust the addresses in the GP instruction set */ /* since the GP code is not relocatable. 
*/ index = 0; buff[ 4 ] = (int)(( 20L ) & 0xffffL ); /* scan line array address */ buff[ 5 ] = (int)(( 20L ) >> 16 ); buff[ 8 ] = (int)(( (long)( n << 1 )) & 0xffffL ); /* link address */ buff[ 9 ] = (int)(( (long)( n << 1 )) >> 16); } /* copy elements from temporary buffer into instruction buffer */ for ( j = 0; j < n; ) gp_buff[ index++ ] = buff[ j++ ]; #if DEBUG printf( "index = %d\n", index ); #endif } /* store whatever is left in the very last buffer */ if ( index > 0 ) write_buffer_to_file( f2, gp_buff, index ); #if MOVSW_ON /* XENIX specific - disable movesw in the 82786 driver */ if ( regdata.rdata.regval == FALSE ) ioctl( p6handle, CLEAR_SFLAG, &value ); #endif return( 0 ); /* DONE!!! Wasn't that simple?! */ } /*-------------------------------------------------------------------------*/ /* Scan through the bitmap once and fill the 'colors' array with some */ /* very useful data about each color - how many times it apears in the */ /* bitmap and where in the bitmap it resides (define a window of existance */ /* for each color. Return number of colors that were found in the bitmap. */ /*-------------------------------------------------------------------------*/ find_all_colors( colors, num_lines ) color_t colors[]; /* array of colors - 256 elements */ int num_lines; /* number of lines in the bitmap */ { register int x; /* x coordinate on a scan line */ register color_t *color_ptr; /* pointer - for speed */ register int n; /* number of bytes in a scan line */ int line, /* present scan line in the bitmap */ num_colors; /* number of colors found in the bitmap */ #if DEBUG_1 printf("Entered find_all_colors routine. num_lines = %d\n", num_lines ); #endif /* Initialize the 'colors' array. */ for( x = 0; x < MAX_NUM_COLORS; ) { color_ptr = &colors[x++]; /* use a pointer for speed */ color_ptr->count = 0L; color_ptr->begin_y = color_ptr->end_y = 0; color_ptr->begin_x = color_ptr->end_x = 0; } /* Scan and analyze the bitmap one line at a time. */ for ( line = 0; line < num_lines; line++ ) { n = get_scan_line( bm_address, line_buff, line, bm_width ); for( x = 0; x < n; x++ ) { color_ptr = &( colors[ line_buff[x] ]); /* mark the begining scan line for this color */ if ( color_ptr->count++ == 0L ) { color_ptr->begin_y = line; color_ptr->begin_x = x; } /* adjust the ending scan line each time a color is detected */ color_ptr->end_y = line; /* adjust x window for a color if needed */ if ( x < color_ptr->begin_x ) color_ptr->begin_x = x; if ( x > color_ptr->end_x ) color_ptr->end_x = x; } } for ( x = num_colors = 0; x < MAX_NUM_COLORS; ) if ( colors[x++].count > 0L ) num_colors++; #if DEBUG_1 printf( "Exited find_all_colors routine.\n" ); #endif return( num_colors ); } /*-------------------------------------------------------------------------*/ /* The heart of compression. */ /* Procedure to extract scan lines from a bitmap file (with some help from */ /* 'colors' array). Assumes that the GP buffer is impossible to overrun */ /* (left as an exercise to correct). The best way to understand this one */ /* is to go through it with a particular bitmap in mind. 
*/ /*-------------------------------------------------------------------------*/ extract_scan_lines( start_addr, colors, color, buff, overflow ) long start_addr; /* starting address of this GP instruction list */ color_t colors[]; /* colors description array */ int color, /* color that is being extracted */ buff[], /* gp instruction buffer */ overflow; /* overflow flag, gets set if the instruction buffer end */ /* is reached = ( num_elements - GP_BUFF_THRESHOLD ) */ { /* Keep x and y coordinates from call to call - needed to calculate */ /* dx and dy of the next scan line array. */ static int x = 0, /* present x coordinate */ y = 0; /* present y coordinate */ register i, count, line; int index, dy; /* gp instruction buffer index */ int n, num_lines, num_lines_index, first_time; int link_lower, link_upper; BOOLEAN within_scan_line; #if DEBUG printf( "color = %d\n",color ); #endif /* Start at the begining of a buffer and add the GP instructions */ /* to define color, scan lines, and link (around the array). */ /* Relies on the loader to def_bitmap, texture, and raster operation. */ index = 0; /* def_color instruction */ buff[ index++ ] = 0x3d00; buff[ index++ ] = (((int)color ) | (((int)color ) << 8)); buff[ index++ ] = 0; /* scan_lines instruction */ buff[ index++ ] = 0xba00; buff[ index++ ] = (int)(( start_addr + 20L ) & 0xffffL ); buff[ index++ ] = (int)(( start_addr + 20L ) >> 16 ); num_lines_index = index++; /* number of lines in the scan lines */ /* array is not yet known. */ /* link instruction - jump around the array */ buff[ index++ ] = 0x0200; link_lower = index++; /* fill in when the number of elements is */ link_upper = index++; /* known. */ num_lines = 0; first_time = TRUE; /* start at the bottom of the window (of this color) */ /* and process one line at a time until the top of the window. */ dy = line = colors[ color ].begin_y; for ( ; line <= colors[ color ].end_y; line++ ) { n = get_scan_line( bm_address, line_buff, line, bm_width ); count = 0; within_scan_line = FALSE; /* Process the line one pixel at a time */ n = colors[ color ].end_x; for( i = colors[ color ].begin_x; i <= n; i++ ) { if ( line_buff[i] != color ) { /* found a pixel that is not of desired color */ if ( within_scan_line ) { /* reached the end of scan line of desired color */ buff[ index++ ] = --count; /* length of it */ within_scan_line = FALSE; y += dy; count = dy = 0; num_lines++; } continue; /* to the next pixel */ } else /* found a pixel of desired color */ { if ( ! within_scan_line ) /* found the begining */ { buff[ index++ ] = i - x; /* dx for scan line instruction */ x = i; if ( first_time ) { /* first time for this color */ #if DEBUG_2 printf( "first time, y = %d, dy = %d\n", y, dy ); #endif buff[ index++ ] = dy - y; /* dy for scan line */ y = dy; dy = 0; /* reset dy, now that we've moved */ first_time = FALSE; } else buff[ index++ ] = dy; /* dy for scan line instr. */ within_scan_line = TRUE; /* signal the begining edge */ } count++; /* Take care of the last pixel == color case */ if ( i == n ) { buff[ index++ ] = --count; within_scan_line = FALSE; y += dy; count = dy = 0; num_lines++; } } } #if DEBUG_1 printf( "x = %d,\t y = %d\n", x, y ); #endif dy++; } /* Now, the number of lines of this color is known. 
*/ /* Therefore, scan line array instruction and link address can be filled.*/ buff[ num_lines_index ] = num_lines; buff[ link_lower ] = (int)(( start_addr + (long)( index << 1)) & 0xffffL ); buff[ link_upper ] = (int)(( start_addr + (long)( index << 1)) >> 16); #if DEBUG_2 printf( "num_lines = %d,\tx = %d,\t y = %d\n", num_lines, x, y ); #endif return( index ); } /*--------------------------------------------------------------*/ /* Procedure that writes the GP instruction list to a file. */ /* An appropriate header is added before the GP list. */ /*--------------------------------------------------------------*/ write_buffer_to_file( fd, buff, num_of_elements ) int fd, /* output file descriptor */ buff[], /* pointer to the buffer */ num_of_elements; /* number of elements to be written */ /* each element is 16 bits (integer) */ { /* Header - placed before every block (8 bytes) */ struct header { int type; /* 0 - GP instructions, 1 - bitmap */ long addr; /* load address, ffffffff - don't care */ int num_bytes; /* number of bytes */ }; typedef struct header header_t; header_t hdr; /* Write the header into the file */ hdr.type = 0; hdr.addr = 0L; /* tell the loader to place instructions */ /* address 0 in 82786 memory. */ hdr.num_bytes = num_of_elements << 1; /* Write the header into the output file */ if ( write( fd, &hdr, sizeof( hdr )) != sizeof( hdr )) { fprintf( stderr, "compact: Write error.\n" ); exit(1); } /* Write the GP instruction list into the output file */ if ( write( fd, buff, num_of_elements << 1 ) != ( num_of_elements << 1 )) { fprintf( stderr, "compact: Write error.\n" ); exit(1); } return( OK ); } /*--------------------------------------------------------------------*/ /* Procedure to read any scan line from the bitmap stored in graphics */ /* memory. Swap bytes to make scanning easier. */ /*--------------------------------------------------------------------*/ get_scan_line( base_addr, buff_gsl, line, line_width ) long base_addr; /* starting address of the bitmap */ unsigned char *buff_gsl; /* scan line buffer */ int line, /* which line to read */ line_width; /* how many pixels in a line */ { long address; #if DEBUG_1 printf( "Entered get_scan_line routine. addr = %lx", addr ); printf( "\tline = %d\tline_width = %d\n", line, line_width ); #endif address = base_addr + ((long)( line ) * (long)( line_width )); getmem( p6handle, address, buff_gsl, line_width >> 1 ); swab( buff_gsl, buff_gsl, line_width ); /* be carefull with swab (note that source and destination are the same) */ /* functionality depends on implementation of the swab routine. */ #if DEBUG_1 printf("Exited get_scan_line routine.\n"); #endif return( line_width ); } <a name="01f9_000d"><a name="01f9_000d"> <a name="01f9_000e">[LISTING TWO] <a name="01f9_000e"> /* A simple GP instruction list loader. */ /* Created by Victor J. Duvanenko */ /* */ /* Loads a GP instruction list from a file into 82786 memory and instructs */ /* the GP to execute them. If the file contains more instructions they are */ /* read in. The loader then waits for the GP to finish the previous list. */ /* Only when the GP is finished does the loader place the new list in 82786 */ /* memory. */ /* */ /* Usage: load file_name */ /* */ /* Input: Binary file of the followind) GP instruction list. */ /* */ /* Output: GP instructions are loaded into 82786 memory. 
*/ /* */ /* The loader provides the following services (may harm some applications) */ /* 1) def_bitmap, def_texture, and raster_op instructions with certain */ /* defaults are executed before loading the GP instruction list. */ /* 2) "stop" instruction is placed at the end of every GP list - 'nop' with */ /* GCL bit set. */ /* 3) Load time quote in milliseconds. */ #include<stdio.h> #include<fcntl.h> #include<sys/types.h> #include<sys/timeb.h> #include "/usr/tls786/h/tls786.h" #include "/usr/tls786/h/tv1786.h" #define BUFF_SIZE 32600 #define BOOLEAN int #define TRUE 1 #define FALSE 0 #define OK 1 #define INTERVAL 1 /* Sampling period in milliseconds */ #define WAIT 5000 /* wait period for the GP or DP to finish */ #define COMM_BUF_BOTTOM 0L #define DEBUG FALSE #define DEBG_1 FALSE /* GP instruction list buffer */ unsigned char buff[ BUFF_SIZE ]; main(argc,argv) int argc; char *argv[]; { register i; int p6handle, f1, n_items, ellapsed_time, value; struct timeb time_before, time_after; long addr, addr_bm; /* bitmap base address */ /* Header - placed before every block (8 bytes) */ struct header { int type; /* 0 - GP instructions, 1 - Bitmap */ long addr; /* load address, ffffffff - don't care */ int num_bytes; /* number of bytes */ }; typedef struct header header_t; header_t hdr; /* XENIX specific - turns on movsw instruction */ union { struct tv1_cntrl brddata; byte rawdata[ sizeof( struct tv1_cntrl )]; struct io_data rdata; }regdata; /* Check command line for proper usage - just a little. */ if (argc == 1) { fprintf( stderr, "Usage is: %s file_name\n", argv[0] ); exit(1); } /* Open the input file for reading only. */ if (( f1 = open( argv[1], O_RDONLY )) == -1 ) { fprintf( stderr, "%s: Can't open %s.\n", argv[0], argv[1] ); exit(1); } /* XENIX specific - enable the 82786 driver. */ p6handle = open786( MASTER_786 ); if ( p6handle == NULL ) { fprintf( stderr, "%s: Can't access 82786.\n", argv[0] ); exit(1); } /* XENIX specific - enable the use of movsw instruction driver. */ value = 0; ioctl( p6handle, TEST_SFLAG, ®data ); if ( regdata.rdata.regval == FALSE ) ioctl( p6handle, SET_SFLAG, &value ); addr = 0L; addr_bm = 0x10000L; ftime( &time_before ); /* Get present time stamp */ /* A bit of overhead to make sure that the bitmap and texture are defined */ /* before the GP command list is loaded. */ i = 0; buff[ i++ ] = 0x00; /* Def_bitmap */ buff[ i++ ] = 0x1a; buff[ i++ ] = 0x00; buff[ i++ ] = 0x00; buff[ i++ ] = 0x01; buff[ i++ ] = 0x00; buff[ i++ ] = 0x7f; /* 640 (for now) */ buff[ i++ ] = 0x02; buff[ i++ ] = 0xdf; /* by 480 (for now) */ buff[ i++ ] = 0x01; buff[ i++ ] = 0x08; /* 8bpp (for now) */ buff[ i++ ] = 0x00; buff[ i++ ] = 0x00; /* Def_logical_op */ buff[ i++ ] = 0x41; buff[ i++ ] = 0xff; buff[ i++ ] = 0xff; buff[ i++ ] = 0x05; buff[ i++ ] = 0x00; buff[ i++ ] = 0x00; /* Def_texture */ buff[ i++ ] = 0x06; buff[ i++ ] = 0xff; buff[ i++ ] = 0xff; buff[ i++ ] = 0x01; /* stop */ buff[ i++ ] = 0x03; /* Wait for a previous GP command list to finish */ if ( waitgp( p6handle, INTERVAL, WAIT ) < 0 ) { printf("GP is hung!!!\n"); exit(1); } /* Place it in 786 graphics memory */ putmem( p6handle, addr, buff, i >> 1 ); /* Direct the GP to execute the command */ putreg( p6handle, GRP_GR1, (int)( addr & 0xffff )); putreg( p6handle, GRP_GR2, (int)( addr >> 16 )); putreg( p6handle, GRP_GR0, 0x200 ); /* Now, for the GP list from an input file. */ /* Read the header and then the data. 
*/ while (( n_items = read( f1, &hdr, sizeof( hdr ))) > 0 ) { i = 0; if ( n_items != sizeof( hdr )) { printf( stderr, "%s: Read error.\n", argv[0] ); exit(1); } /* does it matter where the GP list is placed? */ if ( hdr.addr != 0xffffffffL ) addr = hdr.addr; /* GP instruction list */ if ( hdr.type == 0 ) { if (( n_items = read( f1, buff, hdr.num_bytes )) == hdr.num_bytes ) { /* Add a "stop" command to the GP instruction list */ i += n_items; buff[ i++ ] = 0x01; buff[ i++ ] = 0x03; /* Wait for the GP to finish any previous instruction */ if ( waitgp( p6handle, INTERVAL, WAIT ) < 0 ) { fprintf( stderr, "GP is hung!!!\n" ); exit( 1 ); } /* Place it in 786 graphics memory */ putmem( p6handle, addr, buff, i >> 1 ); } else { printf( stderr, "%s: Read error.\n", argv[0] ); exit(1); } /* Direct the GP to execute the command */ putreg( p6handle, GRP_GR1, (int)( addr & 0xffff )); putreg( p6handle, GRP_GR2, (int)( addr >> 16 )); putreg( p6handle, GRP_GR0, 0x200 ); } /* Is it bitmaps - then place the data at that address */ if ( hdr.type == 1 ) { if (( n_items = read( f1, &buff[i], hdr.num_bytes )) == hdr.num_bytes ) { /* Place it in 786 graphics memory */ putmem( p6handle, addr_bm + hdr.addr, buff, n_items >> 1 ); } } } /* Get the time stamp after the loading is done. */ ftime( &time_after ); ellapsed_time = (int)( time_after.time - time_before.time ) * 1000; ellapsed_time += ( time_after.millitm - time_before.millitm ); printf( "%dms\n", ellapsed_time ); /* XENIX specific - disable movesw in the 786 driver */ if ( regdata.rdata.regval == FALSE ) ioctl( p6handle, CLEAR_SFLAG, &value ); return(0); } <pre>
http://www.drdobbs.com/database/image-compression-via-compilation/184408024
Sapphire includes a modeling framework that is tuned to the needs of the Sapphire UI framework and is designed to be easy to learn. It is also optimized for iterative development. A Sapphire model is defined by writing Java interfaces and using annotations to attach metadata. An annotation processor that is part of Sapphire SDK then generates the implementation classes. Sapphire leverages the Eclipse Java compiler to provide quick and transparent code generation that runs in the background while you work on the model. The generated classes are treated as build artifacts and are not source controlled. In fact, you will rarely have any reason to look at them. All model authoring and consumption happens through the interfaces.

In this article we will walk through a Sapphire sample called EzBug. The sample is based around a scenario of building a bug reporting system. Let's start by looking at IBugReport.

@GenerateImpl
public interface IBugReport extends IModelElement
{
    ModelElementType TYPE = new ModelElementType( IBugReport.class );

    // *** CustomerId ***

    @XmlBinding( path = "customer" )
    @Label( standard = "customer ID" )

    ValueProperty PROP_CUSTOMER_ID = new ValueProperty( TYPE, "CustomerId" );

    Value<String> getCustomerId();
    void setCustomerId( String value );

    // *** Title ***

    @XmlBinding( path = "title" )
    @Label( standard = "title" )
    @Required

    ValueProperty PROP_TITLE = new ValueProperty( TYPE, "Title" );

    Value<String> getTitle();
    void setTitle( String value );

    // *** Details ***

    @XmlBinding( path = "details" )
    @Label( standard = "details" )
    @LongString
    @Required

    ValueProperty PROP_DETAILS = new ValueProperty( TYPE, "Details" );

    Value<String> getDetails();
    void setDetails( String value );

    // *** ProductVersion ***

    @Type( base = ProductVersion.class )
    @XmlBinding( path = "version" )
    @Label( standard = "version" )
    @DefaultValue( text = "2.5" )

    ValueProperty PROP_PRODUCT_VERSION = new ValueProperty( TYPE, "ProductVersion" );

    Value<ProductVersion> getProductVersion();
    void setProductVersion( String value );
    void setProductVersion( ProductVersion value );

    // *** ProductStage ***

    @Type( base = ProductStage.class )
    @XmlBinding( path = "stage" )
    @Label( standard = "stage" )
    @DefaultValue( text = "final" )

    ValueProperty PROP_PRODUCT_STAGE = new ValueProperty( TYPE, "ProductStage" );

    Value<ProductStage> getProductStage();
    void setProductStage( String value );
    void setProductStage( ProductStage value );

    // *** Hardware ***

    @Type( base = IHardwareItem.class )
    @XmlListBinding( mappings = { @XmlListBinding.Mapping( element = "hardware-item", type = IHardwareItem.class ) } )
    @Label( standard = "hardware" )

    ListProperty PROP_HARDWARE = new ListProperty( TYPE, "Hardware" );

    ModelElementList<IHardwareItem> getHardware();
}

As you can see in the above code listing, a model element definition in Sapphire is composed of a series of blocks. These blocks define properties of the model element. Each property block has a PROP_* field that declares the property, the metadata in the form of annotations, and the accessor methods. All metadata about the model element is stored in the interface. There are no external files. When this interface is compiled, Java persists these annotations in the .class file and Sapphire is able to read them at runtime.

Sapphire has four types of properties: value, element, list, and transient. Value properties hold simple data, such as strings, integers, enums, etc. Any object that is immutable and can be serialized to a string can be stored in a value property.

An element property holds a reference to another model element. You can specify whether this nested model element should always exist (implied element property) or if it should be possible to create and delete it.

A list property holds zero or more model elements. A list can be homogeneous (only holds one type of elements) or heterogeneous (holds elements of various specified types). A transient property holds an arbitrary object reference that does not need to be persisted to permanent storage. Using a combination of list and element properties, it is possible to create an arbitrary model hierarchy.

In the above listing, there is one list property. It is homogeneous and references the IHardwareItem element type. Let's look at that type next.
An element property holds a reference to another model element. You can specify whether this nested model element should always exist (implied element property) or if it should be possible to create and delete it. A list property holds zero or more model elements. A list can be homogeneous (only holds one type of elements) or heterogeneous (holds elements of various specified types). A transient property holds an arbitrary object reference that does not need to be persisted to permanent storage. Using a combination of list and element properties, it is possible to create an arbitrary model hierarchy. In the above listing, there is one list property. It is homogeneous and references IHardwareItem element type. Let's look at that type next. @GenerateImpl public interface IHardwareItem extends IModelElement { ModelElementType TYPE = new ModelElementType( IHardwareItem.class ); // *** Type *** @Type( base = HardwareType.class ) @XmlBinding( path = "type" ) @Label( standard = "type" ) @Required ValueProperty PROP_TYPE = new ValueProperty( TYPE, "Type" ); Value<HardwareType> getType(); void setType( String value ); void setType( HardwareType value ); // *** Make *** @XmlBinding( path = "make" ) @Label( standard = "make" ) @Required ValueProperty PROP_MAKE = new ValueProperty( TYPE, "Make" ); Value<String> getMake(); void setMake( String value ); // *** ItemModel *** @XmlBinding( path = "model" ) @Label( standard = "model" ) ValueProperty PROP_ITEM_MODEL = new ValueProperty( TYPE, "ItemModel" ); Value<String> getItemModel(); void setItemModel( String value ); // *** Description *** @XmlBinding( path = "description" ) @Label( standard = "description" ) @LongString ValueProperty PROP_DESCRIPTION = new ValueProperty( TYPE, "Description" ); Value<String> getDescription(); void setDescription( String value ); } The IHardwareItem listing should look very similar to IBugReport and that's the point. A Sapphire model is just a collection of Java interfaces that are annotated in a certain way and reference each other. A bug report is contained in IFileBugReportOp, which serves as the top level type in the model. @GenerateImpl @RootXmlBinding( elementName = "report" ) public interface IFileBugReportOp extends IModelElement { ModelElementType TYPE = new ModelElementType( IFileBugReportOp.class ); // *** BugReport *** @Type( base = IBugReport.class ) @Label( standard = "bug report" ) @XmlBinding( path = "bug" ) ImpliedElementProperty PROP_BUG_REPORT = new ImpliedElementProperty( TYPE, "BugReport" ); IBugReport getBugReport(); } Let's now look at the last bit of code that goes with this model, which is the enums. 
@Label( standard = "type", full = "hardware type" ) public enum HardwareType { @Label( standard = "CPU" ) CPU, @Label( standard = "main board" ) @EnumSerialization( primary = "Main Board" ) MAIN_BOARD, @Label( standard = "RAM" ) RAM, @Label( standard = "video controller" ) @EnumSerialization( primary = "Video Controller" ) VIDEO_CONTROLLER, @Label( standard = "storage" ) @EnumSerialization( primary = "Storage" ) STORAGE, @Label( standard = "other" ) @EnumSerialization( primary = "Other" ) OTHER } @Label( standard = "product stage" ) public enum ProductStage { @Label( standard = "alpha" ) ALPHA, @Label( standard = "beta" ) BETA, @Label( standard = "final" ) FINAL } @Label( standard = "product version" ) public enum ProductVersion { @Label( standard = "1.0" ) @EnumSerialization( primary = "1.0" ) V_1_0, @Label( standard = "1.5" ) @EnumSerialization( primary = "1.5" ) V_1_5, @Label( standard = "1.6" ) @EnumSerialization( primary = "1.6" ) V_1_6, @Label( standard = "2.0" ) @EnumSerialization( primary = "2.0" ) V_2_0, @Label( standard = "2.3" ) @EnumSerialization( primary = "2.3" ) V_2_3, @Label( standard = "2.4" ) @EnumSerialization( primary = "2.4" ) V_2_4, @Label( standard = "2.5" ) @EnumSerialization( primary = "2.5" ) V_2_5 } You can use any enum as a type for a Sapphire value property. Here, once again, you see Sapphire pattern of using Java annotations to attach metadata to model particles. In this case the annotations are specifying how Sapphire should present enum items to the user and how these items should be serialized to string form. The bulk of the work in writing UI using Sapphire is modeling the data that you want to present to the user. Once the model is done, defining the UI is simply a matter of arranging the properties on the screen. This is done via an XML file. 
<definition> <import> <package>org.eclipse.sapphire.samples.ezbug</package> </import> <composite> <id>bug.report</id> <documentation> <title>EzBug</title> <content>This would be the help content for the EzBug system.</content> </documentation> <content> <property-editor>CustomerId</property-editor> <property-editor>Title</property-editor> <property-editor> <property>Details</property> <hint> <name>expand.vertically</name> <value>true</value> </hint> </property-editor> <property-editor>ProductVersion</property-editor> <property-editor>ProductStage</property-editor> <property-editor> <property>Hardware</property> <child-property> <property>Type</property> </child-property> <child-property> <property>Make</property> </child-property> <child-property> <property>ItemModel</property> </child-property> </property-editor> <composite> <indent>true</indent> <content> <separator> <label>details</label> </separator> <switching-panel> <list-selection-controller> <property>Hardware</property> </list-selection-controller> <panel> <key>IHardwareItem</key> <content> <property-editor> <property>Description</property> <hint> <name>show.label.above</name> <value>true</value> </hint> <hint> <name>height</name> <value>5</value> </hint> </property-editor> </content> </panel> <default-panel> <content> <label>Select a hardware item above to view or edit additional parameters.</label> </content> </default-panel> </switching-panel> </content> </composite> </content> <hint> <name>expand.vertically</name> <value>true</value> </hint> <hint> <name>width</name> <value>600</value> </hint> <hint> <name>height</name> <value>500</value> </hint> </composite> <dialog> <id>bug.report.dialog</id> <label>create bug report (sapphire sample)</label> <initial-focus>Title</initial-focus> <content> <include>bug.report</include> </content> <hint> <name>expand.vertically</name> <value>true</value> </hint> </dialog> </definition> A Sapphire UI definition is a hierarchy of parts. At the lowest level we have the property editor and a few other basic parts like separators. These are aggregated together into various kinds of composities until the entire part hierarchy is defined. Some hinting here and there to guide the UI renderer and the UI definition is complete. Note the top-level composite and dialog elements. These are parts that you can re-use to build more complex UI definitions or reference externally from Java code. Next we will write a little bit of Java code to open the dialog that we defined. IFileBugReportOp op = IFileBugReportOp.TYPE.instantiate(); IBugReport report = op.getBugReport(); SapphireDialog dialog = new SapphireDialog( shell, report, "org.eclipse.sapphire.samples/org/eclipse/sapphire/samples/ezbug/EzBug.sdef!bug.report.dialog" ); if( dialog.open() == Dialog.OK ) { // Do something. User input is found in the bug report model. } Pretty simple, right? We create the model and then use the provided SapphireDialog class to instantiate the UI by referencing the model instance and the UI definition. The pseudo-URI that's used to reference the UI definition is simply bundle id, followed by the path within that bundle to the file holding the UI definition, followed by the id of the definition to use. Let's run it and see what we get... There you have it. Professional rich UI backed by your model with none of the fuss of configuring widgets, trying to get layouts to do what you need them to do or debugging data binding issues. A dialog is nice, but really a wizard would be better suited for filing a bug report. 
Can Sapphire do that? Sure. Let's first go back to the model. A wizard is a UI pattern for configuring and then executing an operation. Our model is not really an operation yet. We can create and populate a bug report, but then we don't know what to do with it. Any Sapphire model element can be turned into an operation by adding an execute method. We will do that now with IFileBugReportOp. In particular, IFileBugReportOp will be changed to extend IExecutableModelElement and will acquire the following method definition: // *** Method: execute *** @DelegateImplementation( FileBugReportOpMethods.class ) Status execute( ProgressMonitor monitor ); Note how the execute method is specified. We don't want to modify the generated code to implement it, so we use delegation instead. The @DelegateImplementation annotation can be used to delegate any method on a model element to an implementation located in another class. The Sapphire annotation processor will do the necessary hookup. public class FileBugReportOpMethods { public static final Status execute( IFileBugReportOp context, ProgressMonitor monitor ) { // Do something here. return Status.createOkStatus(); } } The delegate method implementation must match the method being delegated with two changes: (a) it must be static, and (b) it must take the model element as the first parameter. Now that we have completed the bug reporting operation, we can return to the UI definition file and add the following: <wizard> <id>wizard</id> <label>create bug report (sapphire sample)</label> <page> <id>main.page</id> <label>create bug report</label> <description>Create and submit a bug report.</description> <initial-focus>Title</initial-focus> <content> <with> <path>BugReport</path> <default-panel> <content> <include>bug.report</include> </content> </default-panel> </with> </content> <hint> <name>expand.vertically</name> <value>true</value> </hint> </page> </wizard> The above defines a one page wizard by re-using the composite definition created earlier. Now back to Java to use the wizard... IFileBugReportOp op = IFileBugReportOp.TYPE.instantiate(); SapphireWizard<IFileBugReportOp> wizard = new SapphireWizard<IFileBugReportOp>( op, "org.eclipse.sapphire.samples/org/eclipse/sapphire/samples/ezbug/EzBug.sdef!wizard" ); WizardDialog dialog = new WizardDialog( shell, wizard ); dialog.open(); SapphireWizard will invoke the operation's execute method when the wizard is finished. That means we don't have to act based on the result of the open call. The execute method will have completed by the time the open method returns to the caller. The above code pattern works well if you are launching the wizard from a custom action, but if you need to contribute a wizard to an extension point, you can extend SapphireWizard to give your wizard a zero-argument constructor that creates the operation and references the correct UI definition. Let's run it... Now that we have a system for submitting bug reports, it would be nice to have a way to maintain a collection of these reports. Even better if we can re-use some of our existing code to do this. Back to the model. The first step is to create IBugDatabase type which will hold a collection of bug reports. By now you should have a pretty good idea of what that will look like. 
@GenerateImpl @RootXmlBinding( elementName = "bug-database" ) public interface IBugDatabase extends IModelElement { ModelElementType TYPE = new ModelElementType( IBugDatabase.class ); // *** BugReports *** @Type( base = IBugReport.class ) @Label( standard = "bug report" ) @XmlListBinding( mappings = { @XmlListBinding.Mapping( element = "bug", type = IBugReport.class ) } ) ListProperty PROP_BUG_REPORTS = new ListProperty( TYPE, "BugReports" ); ModelElementList<IBugReport> getBugReports(); } That was easy. Now let's go back to the UI definition file. Sapphire simplifies creation of multi-page editors. It also has very good integration with WTP XML editor that makes it easy to create the very typical two-page editor with a form-based page and a linked source page showing the underlying XML. The linkage is fully bi-directional. To create an editor, we start by defining the structure of the pages that will be rendered by Sapphire. Sapphire currently only supports one editor page layout, but it is a very flexible layout that works for a lot scenarios. You get a tree outline of content on the left and a series of sections on the right that change depending on the selection in the outline. <editor-page> <id>editor.page</id> <page-header-text>bug database (sapphire sample)</page-header-text> <initial-selection>bug reports</initial-selection> <root-node> <node> <label>bug reports</label> <section> <content> <label>Use this editor to manage your bug database.</label> <spacer/> <action-link> <action-id>Sapphire.Add</action-id> <label>add a bug report</label> </action-link> </content> </section> <node-factory> <property>BugReports</property> <case> <label>${ Title == null ? "<bug>" : Title }</label> <section> <label>bug report</label> <content> <include>bug.report</include> </content> </section> </case> </node-factory> </node> </root-node> </editor-page> You can see that the definition centers around the outline. The definition traverses the model as the outline is defined with sections attached to various nodes acquiring the context model element from their node. The outline can nest arbitrarily deep and you can even define recursive structures by factoring out node definitions, assigning ids to them and then referencing those definitions similarly to how this sample references an existing composite definition. The next step is to create the actual editor. Sapphire includes several editor classes for you to choose from. In this article we will use the editor class that's specialized for the case where you are editing an XML file and you want to have an editor page rendered by Sapphire along with an XML source page. public final class BugDatabaseEditor extends SapphireEditorForXml { public BugDatabaseEditor() { super( "org.eclipse.sapphire.samples" ); setRootModelElementType( IBugDatabase.TYPE ); setEditorDefinitionPath( "org.eclipse.sapphire.samples/org/eclipse/sapphire/samples/ezbug/EzBug.sdef/editor.page" ); } } Finally, we need to register the editor. There are a variety of options for how to do this, but covering all of these options is outside the scope of this article. For simplicity we will register the editor as the default choice for files named "bugs.xml". <extension point="org.eclipse.ui.editors"> <editor class="org.eclipse.sapphire.samples.ezbug.ui.BugDatabaseEditor" default="true" filenames="bugs.xml" id="org.eclipse.sapphire.samples.ezbug.ui.BugDatabaseEditor" name="Bug Database Editor (Sapphire Sample)"/> </extension> That's it. We are done creating the editor. 
After launching Eclipse and creating a bugs.xml file, you should see an editor that looks like this: Sapphire really shines in complex cases like this where form UI is sitting on top of a source file that users might edit by hand. In the above screen capture, what happened is that the user manually entered "BETA2" for the product stage in the source view. There is a problem marker next to the property editor and the blue assistance popup is accessible by clicking on that marker. The problem message is displayed along with additional information about the property and available actions. The "Show in source" action, for instance, will immediately jump to the editor's source page and highlight the text region associated with this property. This is very valuable when you must deal with large files. These facilities and many others are available out of the box with Sapphire with no extra effort from the developer. Now that you've been introduced to what Sapphire can do, compare it to how you are currently writing UI code. All of the code presented in this article can be written by a developer with just a few weeks of Sapphire experience in an hour or two. How long would it take you to create something comparable using your current method of choice?
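One thing the walkthrough leaves implicit is what the "// Do something" step after the dialog returns might look like. The following is only a rough sketch, not part of the EzBug sample itself; it assumes the Value getText() accessor of the Sapphire 0.3 modeling API for reading value properties, so check the accessor names against your Sapphire version:

// Hedged sketch: reading user input back out of the EzBug model after the dialog closes.
IFileBugReportOp op = IFileBugReportOp.TYPE.instantiate();
IBugReport report = op.getBugReport();

SapphireDialog dialog = new SapphireDialog( shell, report,
    "org.eclipse.sapphire.samples/org/eclipse/sapphire/samples/ezbug/EzBug.sdef!bug.report.dialog" );

if( dialog.open() == Dialog.OK )
{
    // Value properties are read back through their typed Value wrappers
    // (accessor name assumed from the 0.3 API).
    String customer = report.getCustomerId().getText();
    String title = report.getTitle().getText();

    // List properties such as Hardware can simply be iterated.
    for( IHardwareItem item : report.getHardware() )
    {
        String make = item.getMake().getText();
        // ... forward the collected data to your bug tracker of choice
    }
}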
http://www.eclipse.org/sapphire/releases/0.3/documentation/introduction/index.html
CC-MAIN-2015-11
en
refinedweb
ASP.NET Request Validation Description Request validation is a feature in ASP.NET that examines HTTP requests and determines whether they contain potentially dangerous content. This check adds protection from markup or code in the URL query string, cookies, or posted form values that might have been added for malicious purposes. This exploit is typically referred to as a cross-site scripting (XSS) attack. Request validation helps to prevent this kind of attack by throwing a "potentially dangerous value was detected" error and halting page processing if it detects input that may be malicious, such as markup or code in the request.
Don't Rely on Request Validation for XSS Protection
Request validation is generally desirable and should be left enabled for defense in depth. It should NOT be used as your sole method of XSS protection, and it is not guaranteed to catch every type of invalid input. There are known, documented bypasses (such as JSON requests) that will not be addressed in future releases, and the request validation feature is no longer provided in ASP.NET vNext. Fully protecting your application from malicious input requires validating each field of user-supplied data. This should start with ASP.NET Validation Controls and/or DataAnnotations attributes to check for:
- Required fields
- Correct data type and length
- Data falls within an acceptable range
- Whitelist of allowed characters
Any string input that is returned to the client should be encoded using an appropriate method, such as those provided via AntiXssEncoder.
var encodedInput = Server.HtmlEncode(userInput);
Enabling Request Validation
Request validation is enabled by default in ASP.NET. You can check that it is enabled by reviewing the pages and httpRuntime elements in web.config as well as your individual page directives.
Selectively Disabling Request Validation
In some cases you may need to accept input that will fail ASP.NET Request Validation, such as when receiving HTML markup from the end user. In these scenarios you should disable request validation for the smallest surface possible.
ASP.NET Web Forms
For ASP.NET Web Forms applications prior to v4.5, you will need to disable request validation at the page level. Be aware that when doing this, all input values (cookies, query string, form elements) handled by this page will not be validated by ASP.NET.
<%@ Page ValidateRequest="false" %>
Starting with ASP.NET 4.5 you can disable request validation at the individual server control level by setting ValidateRequestMode to "Disabled", for example:
<asp:TextBox runat="server" ValidateRequestMode="Disabled" />
ASP.NET MVC
To disable request validation for a specific MVC controller action, you can use the [ValidateInput(false)] attribute as shown below.
[ValidateInput(false)]
public ActionResult Update(int userId, string description)
Starting with ASP.NET MVC 3 you should use the [AllowHtml] attribute to decorate specific fields on your view model classes where request validation should not be applied:
public class User
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }

    [AllowHtml]
    public string Description { get; set; }

    [AllowHtml]
    public string Bio { get; set; }
}
Extending Request Validation
If you are using ASP.NET 4.0 or higher, you have the option of extending or replacing the Request Validation logic by providing your own class that descends from System.Web.Util.RequestValidator. By implementing this class, you can determine when validation occurs and what type of request data to perform validation on.
public class CustomRequestValidation : RequestValidator
{
    protected override bool IsValidRequestString(
        HttpContext context,
        string value,
        RequestValidationSource requestValidationSource,
        string collectionKey,
        out int validationFailureIndex)
    {
        validationFailureIndex = -1;

        // This is just an example and should not
        // be used for production code.
        if (value.Contains("<%"))
        {
            return false;
        }
        else // Leave any further checks to ASP.NET.
        {
            return base.IsValidRequestString(
                context,
                value,
                requestValidationSource,
                collectionKey,
                out validationFailureIndex);
        }
    }
}
This class is then registered in web.config using requestValidationType:
<system.web>
    <httpRuntime requestValidationType="CustomRequestValidation"/>
</system.web>
References
- Request Validation in ASP.NET
- Validation ASP.NET Controls
- How to: Validate Model Data Using DataAnnotations Attributes
- AntiXssEncoder Class
- New ASP.NET Request Validation Features
- Control.ValidateRequestMode Property
- ValidateInputAttribute Class
- AllowHtmlAttribute Class
- RequestValidator Class
- ASP.NET Output Encoding
https://www.owasp.org/index.php?title=ASP.NET_Request_Validation&redirect=no
CC-MAIN-2015-11
en
refinedweb
public class TypeNotPresentException extends RuntimeException
Thrown when an application tries to access a type using a string representing the type's name, but no definition for the type with the specified name can be found. This exception differs from ClassNotFoundException in that ClassNotFoundException is a checked exception, whereas this exception is unchecked.
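The class description above is terse, so here is a small, self-contained Java sketch of a typical situation in which this exception surfaces: an annotation element that names a type which may be missing at runtime. The Handler annotation and the class names are illustrative only, and the catch branch only fires when the referenced type is genuinely absent from the runtime classpath (for example, because the jar that defined it was removed after compilation).

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class TypeNotPresentDemo {

    // Illustrative annotation whose element refers to a type by Class value.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Handler {
        Class<?> value();
    }

    @Handler(String.class)
    static class Endpoint { }

    public static void main(String[] args) {
        try {
            // Reading the Class-valued element forces the referenced type to be
            // resolved. If that type is not on the runtime classpath, the call
            // throws the unchecked TypeNotPresentException rather than the
            // checked ClassNotFoundException.
            Class<?> handler = Endpoint.class.getAnnotation(Handler.class).value();
            System.out.println("Handler type resolved: " + handler.getName());
        } catch (TypeNotPresentException e) {
            // typeName() reports the name of the type that could not be found.
            System.err.println("Missing type: " + e.typeName());
        }
    }
}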
http://docs.oracle.com/javase/8/docs/api/java/lang/TypeNotPresentException.html
CC-MAIN-2015-11
en
refinedweb
WE_Frontend::MainCommon - common methods for all WE_Frontend::Main* modules Do not use this module on its own!!! Just consult the methods. Note that all methods are loaded into the WE_Frontend::Main namespace. Use the appropriate publish method according to the WEsiteinfo::Staging config member livetransport. May return a hash reference with the following members: List reference of published directories. List reference of published files. Options to publish: Be verbose if set to true. Reference to an array with additional directories to be published. Reference to an array with additional files to be published. livetransport may be any of the standard ones: rsync, ftp, ftp-md5sync, rdist, or rdist-ssh. For custom methods, use either of the following: custom:method_name Where method_name has to be a method in the WE_Frontend::Main namespace and already loaded in This will cause a module with the name WE_Frontend::Publish::basename (with uppercase basename) to be loaded and its publish_basename method (lowercase) to be called. This will cause the module to be required (based on the package name of the method) and this method to be called. XXX This method is not used XXX. Use the appropriate search indexer method according to the WEsiteinfo::SearchEngine config member searchindexer. searchindexer may take any of the following standard values: htdig or oosearch. Checks recursively all links from -url (which may be a scalar or an array reference), or for all language homepages. By default, the language homepages should be in $c->paths->rooturl . "/html/" . $lang . "/" . "home.html" but the last part ("home.html") can be changed by the -indexhtml argument. Slaven Rezic - slaven@rezic.de WE_Frontend::Main, WE_Frontend::Main2.
http://search.cpan.org/~srezic/WE_Framework-0.097_03/lib/WE_Frontend/MainCommon.pm
CC-MAIN-2015-11
en
refinedweb
26 November 2010 16:50 [Source: ICIS news] TORONTO (ICIS)--Ethylene producers such as Dow Chemical and LyondellBasell stand to benefit from China's measures to address its diesel supply squeeze, an analyst said. Hassan Ahmed, head of research at New York-based equity research firm Alembic Global Advisors, said an estimated 10-15% reduction in Chinese ethylene production, caused by the country's diesel supply squeeze, could tighten global ethylene utilisation rates and support price hikes in ethylene and ethylene derivatives in the near term. If the Chinese policy were implemented, recent industry ethylene price hikes, and possible additional increases, could be more easily implemented, he added. Beneficiaries in the For more on Dow,
http://www.icis.com/Articles/2010/11/26/9414462/dow-lyondellbasell-to-benefit-from-china-diesel-measure-analyst.html
CC-MAIN-2015-11
en
refinedweb
31 July 2012 14:14 [Source: ICIS news] SINGAPORE (ICIS)--M.Setek failed to make the majority of a scheduled polysilicon delivery in 2011 to JA Solar Holdings because of the 11 March earthquake and tsunami and hence entered a framework agreement with JA Solar in March 2012 to settle outstanding prepayments of $69.1m (€56.7m), according to the source. Under the agreement, M.Setek agreed to transfer its 65% equity interest in China-based Hebei Ningjin Songgong Semiconductor to JA Solar Hong Kong, a wholly owned subsidiary of JA Solar, and the transfer for $38.9m will be completed by the end of this year. Moreover, M.Setek has repaid $11m to JA Solar in June 2012, with dividends distributed by For the remaining unused prepayments, M.Setek will continue to deliver polysilicon to JA Solar pursuant to a supply agreement. JA Solar is a manufacturer of high-performance solar power products for residential, commercial, and utility-scale power
http://www.icis.com/Articles/2012/07/31/9582241/chinas-ja-solar-concludes-share-transfer-agreement-with-m.setek.html
CC-MAIN-2015-11
en
refinedweb
The Legend of Zelda: Link's Awakening Trading Sequence FAQ by Kaas Version: 1.02 | Updated: 10/20/05 | Search Guide | Bookmark Guide || _||_ | | THE LEGEND OF ______| |_______ ______________ ______ ____ / ____| |__ / \ __ \ / \ "\ \_ \ / / | | / / | | \ | | | |"\ | \ \ /__/ _| |/ / / | | \_| | | | | | | \ / __ / / / | |_/| | | | | | | | \ /_/| / / / | | | | | | | | | /_\ \ |/ / | |_ | | | | | | | ; __ \ / / / | | \/ _| | || | | | / / \ \ / / / | |___/ / |/|| |_/ |/ / \ \ / /| __|______/|____|/______//___\ /___\ / / / | / / / / / | / / LINK'S AWAKENING / / /| |____/ / /_____| |______/ - - - - - - - - - - | | | | Trading Sequence FAQ \ / \/ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Introduction 001~~~~~~~~~~~~~~~~~~~~~~~~~ Welcome to this in-depth FAQ for the Legend of Zelda: Link's Awakening on the GameBoy. In this FAQ, I will try to help you complete the trading sequence. The trading sequence is an important part of the game. In it, you will be trading items you receive for other (sometimes better, sometimes worse) items. Your ultimate goal will be the Magnifying Glass, which you'll need to complete the game. Legend of Zelda: Link's Awakening is the first Zelda game ever to feature the trading sequence; it is now part of the Zelda games as much as the bow & arrows. Be warned however, as the following text will have spoilers in them. As always, if you think there's something wrong with this FAQ, have a question for me or simply want to tell me something, just e-mail me at the following adress: Kaas(dot)Bink(at)GMail(dot)com. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Content 002~~~~~~~~~~~~~~~~~~~~~~~~~ In here you can find the different areas of the walkthrough. Press ctrl+f to open the search box, then enter the code you see next to the section to get there faster. Introduction....................001 Content.........................002 Version History.................003 Trading Sequence................004 Uses of the Magnifying Glass....005 Special Thanks..................006 Legal Stuff.....................007 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Version History 003~~~~~~~~~~~~~~~~~~~~~~~~~ 22 June, 2005 - Version 1.0 I began and completed this FAQ today, and I think it's completely done too. If someone mails me important information, I'll make sure to add, though. 20 July, 2005 - Version 1.01 Small mistake corrected and some spelling errors were corrected today. 20 Oktober, 2005 - Version 1.02 Another small mistake has been corrected; the FAQ is done now. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Trading Sequence 004~~~~~~~~~~~~~~~~~~~~~~~~~ During your stay on the Koholint Island, you will receive many items. Some of those you will keep (such as the bow & arrows), some of them you will trade for something else. I'll focus on the latter. If you complete the trading sequence, you'll receive an important item which will aid you in finding the location of the final boss. Sometimes, when you want to trade your item for another one, you'll have to do or have something else first. I've indicated what you need, where to need to use it and the qoute when you receive your new item in this FAQ (thanks to davogones, who let me use his Text Dump FAQ). On a little sidenote, sometimes, when you receive your new item, the game shows a picture of it instead of naming it. In the qoute section I just put the name of the item, as I didn't feel like drawing all those pictures. I'm sure you understand... To make things easier, I've included a handy map to show you where to find every item. 
This map is the same as the World Map you see on your screen when you press the Select Button. Coordinates are noted like this: (a;1) for the first square, (p;16 for the last square). a b c d e f g h i j k l m n o p 1 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| 2 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| 3 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| 4 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| 5 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| 6 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| 7 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| 8 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| 9 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| 10 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| 11 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| 12 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| 13 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| 14 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| 15 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| 16 |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| Item 1: a Yoshi Doll. Prereqisuites: 10 rupees and a sword. Location: (d;12), it can be found in the building in the southeast part of Mabe Village. How to get: Once you have the sword, go kill some monsters and cut some grass to earn yourself 10 rupees. Once you have ten rupees, go to the location I told you (d;12) and cut away the grass to enter the building. Inside, go talk to man at the right. He'll ask you if you want to play the Trendy Game, so say yes. Now press the B Button to move your crane horizontally so, that the shadow of the crane is above the Yoshi Doll (it's in the middle). Then press the A button to make the crane move vertically. If done correcty, the crane should pick up the Yoshi Doll and deliver it to you. It's pretty easy to catch, so don't worry. Qoute:You got a Yoshi Doll! Recently, he seems to be showing up in many games! Item 2: a Ribbon. Prereqisuites: a Yoshi Doll. Location: (c;9), the Quadruplet's house in the very north of Mabe Village. How to get: Once you've obtained the Yoshi Doll, just go to the house in the northern part of the village. You'll see a woman with a baby there, as well as a man and some beds with kids. Talk to the woman with the baby and she'll ask you if she can have your doll. You should say "Yes" and you'll make one lucky baby very happy (and receive a nice ribbon in return). Qoute: You traded your Yoshi Doll for a Ribbon! Maybe you can trade the ribbon for something else! Item 3: Dog Food. Prereqisuites: a Ribbon. Location: (b;11), the little house right next to Madam MeowMeow's house. In it lives a small, female Chain-chomp. How to get: Go to the small house with your Ribbon. In it, is a small, female Chian-chomp (wow, the second Mario reference; the first one was Yoshi of course). Anyway, it seems she fancies jewelery and accessoires, so talk to her and say "Yes" when she asks for your Ribbon to receive the Dog Food. Qoute: You exchanged Ribbon for Dog Food! It's full of juicy beef! Item 4: Bananas. Prerequisites: Dog Food. Location: (d;15), a little house on the beach. Just go south from Mabe Village, then keep going east to find it (it's just northeast of the place where you found your sword). How to get: make your way to the little house and talk to the big alligator inside. He'll tell you he likes canned food, and practically goes insane when he finds out you have some. Let him have it and he'll give you some bananas in return (after he eats the canned food). Now that was odd, wasn't it? Qoute: MUNCH MUNCH!! ... ... ... ... That was great! I know it's not a fair trade, but here's some bananas! YUM... Item 5: a Stick. Prereqisuites: Bananas. Location: (l;8), there's a little monkey on the bridge. 
You won't be able to cross the bridge without talking to her, and you'll need to cross to find the 5 Golden Leaves for Richard. How to get: Simply talk to Kiki the monkey (the one on the bridge), and give her your bananas. She'll summon some more monkeys, and they'll build you a bridge you can cross. Luckily, they also leave a stick behind... Qoute: You found a stick a monkey left behind... You take it! Item 6: Honeycomb. Prereqisuites: Stick and having dungeon 3 (Key Cavern) beaten. Location: (h;9), a man next to a tree. He can be found on the prairie, a bit east from Mabe Village. How to get: When you approach the man, you'll probably recognize him. It's Tarin, your savior, up to something stupid! Anyway, just talk to him and say "can" when he asks you if he can borrow your stick. He'll poke the honeycomb and gets chased away by bees. You can pick up the honeycomb (now without bees!) when he's gone. Later, if you check his house, you can find him hurting in his bed. Qoute: The stick became the honeycomb! You're not sure how it happened, but take it! Item 7: Pineapple. Prereqisuites: Honeycomb. Location: (n;14), the house southeast in Animal Village. It's the house next to the fence, with some sort of chimney next to it. How to get: Go inside the house and talk to the chef with the cooking hat there. He'll want your honeycomb, because he's all run out of them, so reply "Yes" when he asks for it. He'll give you a pine-apple in return. Qoute: You exchanged the honeycomb for a pineapple! It's not as sweet, but it is delicious! Item 8: Hibiscus. Prereqisuites: Pine-apple Location: (j;2), the guy who's lost in the mountains. You can reach him by entering and following the cave at (h;2), then when you exit that one, enter the cave at (j;2). When you exit that one, simply go down one screen and left one screen to reach a famished Papahl (you've met him before; he was the dad of the Quadruplets. You even gave your Yoshi away in that house). How to get: Talk to him and he'll tell you he's so famished, he can't even move. He'll ask you for some "Vittles", so give him your pine-apple. He'll give you a nice Hibiscus in return! Qoute: AH! This isn't meant to be a reward... Here, take this flower! It's a hibiscus! Item 9: Letter. Prereqisuites: Hibiscus. Location: (n;13), the house in the northeast corner of Animal Village. How to get: In the house in Animal Village lives a goat who is apparently very fond of nice manners. When you talk to her, she'll just assume the flower is for her, and will ask you for a favor. Say "Yes" and she'll now give you a letter for you to to deliver to Mr. Write. Qoute: You traded the hibiscus for a goat's letter! ...Great!? Item 10: Broom. Prerequisites: The goat's letter. Location: (a;4), a lonely house in left of the swamp. In it you'll find Mr. Write busy writing something. How to get: Just enter the house and talk to Mr. Write. He'll be happy with the letter and will give you a broom in return! Make sure to pay attention to the picture the goat included! Pretty funny scene... Qoute: You got a Broom as your reward from Mr. Write! But that photo was not of... Item 11: Fish Hook Prerequisites: Broom. Location: (b;12), the woman outside Ulrira's house in the southwest corner of Mabe Village. It's granma Ulrira, who was always busy sweeping everything clean! How to get: You'll notice the woman who was always sweeping suddenly is broom- less. Well, we now have one so talk to her and tell her it's for her. 
She'll give you a fishing hook she found while sweeping by the river bank. After the exchange, you'll find granma Ulrira back in Animal Village (sweeping, of course). Qoute: You exchanged the broom for the fishing hook! What will the fishing hook become? Item 12: Necklace. Prerequisites: Fishing Hook, Flippers. Location: (k;15), under the small bridge, south of dungeon 5 (Catfish's Maw). How to get: When you're in the water, press B to dive down and swim under the bridge. Jump out of the water with your Roc's Feather and enter the boat. Now talk to the fisherman and let him use your hook in trade of his next catch. The fisherman will start fishing immediatelly and it won't be bad. Well, he happens to catch a necklace! Qoute: The fishing hook became a necklace! L-l-lucky! Item 13: Scale. Prerequisites: Necklace. Location: (j;13), the mermaid in the water, just above dungeon 5 (Catfish's Maw). Her name is Martha, not uncoincedentilly the name of this bay! How to get: You'll notice the mermaid in the water, so swim over and talk to her to find out she lost her necklace. She says she's looked all over for it. Well, it looks like we've got it now, so give it back to her and she'll let you take a scale of her tail. Only one piece, though. Qoute: You returned the necklace and got a scale of the mermaid's tail. How will you use this? Item 14: Magnifying Glass. Prerequisites: Scale, Hookshot. Location: (j;15), the statue of a mermaid in Martha's Bay. How to get: use the hookshot to cross the water southwest of Animal Village and follow the path until you see the statue. Walk towards it and you should automatically insert the scale in the statue (it was missing one scale). The statue should move to the left now and you can enter the cave. You'll see nothing in the first area of the cave, but when you start moving, changes are you'll bump into some invisible enemies. Just ignore them and move to the north part of the cave. Take the item you see there to get the Magnifying Glass, which concludes our trading sequence! Qoute: You've got the Magnifying Glass! This will reveal many things you couldn't see before! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Uses of the Magnifying Glass 005~~~~~~~~~~~~~~~~~~~~~~~~~ There are only three ways to use the Magnifying Glass, and they all work automatically. 1) See invisible enemies. The only time you'll use this function is when you just picked it up and go south. You'll see a few emenies there who you couldn't see before, but it's very obvious this isn't the main purpose of the Magnifying Glass. 2) Get the Boomerang. This is more like it. Once you've found the Magnifying Glass, go to the beach at (e;16). There's a cracked wall there, so bomb it to reveal a cave. In it you'll find a friendly monster who will trade you the Boomarang for the item you have equipped in your B Button slot. You can trade back the Boomarang for your own item any time you want. Most people trade their shovel for it, because it gets pretty useless after you've digged up everything worthwhile. Feel free to choose something else to trade, however. The Boomarang itself is a very powerful weapon, stunning and sometimes killing enemies with only one hit! 3) Read the Book "Dark Mysteries and Secrets of Koholint". This is where it's all about. You can find this book in Mabe Village's library (a;12); it's the book in the southeast corner. In it, you can find the way to move in the Windfish's Egg. You should write down this route, as you'll need it when you're going to confront the final boss. 
When you've made your way to the Windfish's Egg, and played the instruments in front of it, the Egg opens up. When you enter and go north one room, you'll drop down. Now, everywhere you go (except backwards, which takes you back to the beginning) you'll see the same exact room. If you follow the directions you've read in the mysterious book, you should reach the room of the final boss! Good luck with him, as he's pretty hard to defeat. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Special Thanks 006~~~~~~~~~~~~~~~~~~~~~~~~~ CJayC: for hosting this FAQ and keeping an excellent site updated and operational. Nintendo: for making this excellent game. davogones: for letting me use his Text Dump FAQ for some of the quotes in this FAQ. It was very useful! Malcom M.: for pointing out a small mistake in the FAQ. My mom: for buying me this game for my birthday; best present ever! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Legal Stuff 007~~~~~~~~~~~~~~~~~~~~~~~~~ This walkthrough, and all it's content (including the ASCIIs),
https://www.gamefaqs.com/gameboy/563277-the-legend-of-zelda-links-awakening/faqs/37454
CC-MAIN-2017-34
en
refinedweb
Android Data Binding: The Big Event And You Don’t Even Have to Dress Up In previous articles, I wrote about how to eliminate findViewById from Android applications and in some cases eliminate the need for View IDs altogether. One thing I didn’t explicitly mention in those articles is how to handle event listeners, such as View’s OnClickListener and TextView’s TextWatcher. Android Data Binding provides three mechanisms to set an event listener in the layout file and you can choose whichever is most convenient for you. Unlike the standard Android onClick attribute, none of the event data binding mechanisms use reflection, so performance is good whichever mechanism you choose. Listener Objects For any view with a listener that uses a set* call (as opposed to an add* call), you can bind a listener object to the attribute. For example: <View android:onClickListener="@{callbacks.clickListener}" .../> Where the listener is defined with a getter or a public field like: public class Callbacks { public View.OnClickListener clickListener; } There is also a shortcut for this where the “Listener” has been stripped: <View android:onClick="@{listeners.clickListener}" .../> Binding with listener objects is used when your application already uses them, but in most cases you’ll use one of the other two methods. Method References With method references, you can hook a method up to any event listener method individually. Any static or instance method may be used as long as it has the same parameters and return type as in the listener. For example: <EditText android:afterTextChanged="@{callbacks::nameChanged}" .../> where Callbacks has a nameChanged method declared like this: public class Callbacks { public void nameChanged(Editable editable) { //... } } The attribute used is in the “android” namespace and matches the name of the method in the listener. Though it isn’t recommended, you may do some logic in the binding as well: <EditText android:afterTextChanged= "@{user.hasName?callbacks::nameChanged:callbacks::idChanged}" .../> In most cases it is better to put logic in the called method. This becomes much easier when you can pass additional information to the method (like user above). You can do this with lambda expressions. Lambda Expressions You can supply a lambda expression and pass any parameters to your method that you wish. For example: <EditText android:afterTextChanged="@{(e)->callbacks.textChanged(user, e)}" ... /> And the textChanged method takes the passed parameters: public class Callbacks { public void textChanged(User user, Editable editable) { if (user.hasName()) { //... } else { //... } } } If you don’t need any of the parameters from the listener, you can remove them with this syntax: <EditText android:afterTextChanged="@{()->callbacks.textChanged(user)}" ... /> But you can’t take just some of them — it is all or none. The timing of expression evaluation also differs between method references and lambda expressions. With method references, the expression is evaluated at binding time. With lambda expressions, it is evaluated when the event occurs. Suppose, for example, the callbacks variable hasn’t been set. With a method reference, the expression evaluates to null and no listener will be set. With lambda expressions, a listener is always set and the expression is evaluated when the event is raised. Normally this doesn’t matter much, but when there is a return value, the default Java value will be returned instead of having no call. 
For example:
<View android:onLongClick="@{()->callbacks.longClick()}" .../>
When callbacks is null, false is returned. You can use a longer expression to return the type you wish to return in such an error case:
<View android:onLongClick="@{()->callbacks == null ? true : callbacks.longClick()}" .../>
You'll more often just avoid that situation altogether by ensuring that you don't have null expression evaluation.
Lambda expressions may be used on the same attributes as method references, so you can easily switch between them.
Which To Use?
The most flexible mechanism is a lambda expression, which allows you to give different parameters to your callback than the event listener provides. In many cases, your callback will take the exact same parameters as given in the listener method. In that case, method references provide a shorter syntax and are slightly easier to read. In applications that you are converting to use Android Data Binding, you may already have listener objects that you were setting on views. You can pass the listener as a variable to the layout and assign it to the view.
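The examples in this article assume a callbacks (and user) variable is available to the binding expressions. As a minimal sketch of that wiring, and assuming the usual generated names (an ActivityMainBinding class generated from activity_main.xml, and a setCallbacks setter generated for a variable named callbacks), the Activity side might look like this; Callbacks and User are the classes from the article:

import android.app.Activity;
import android.databinding.DataBindingUtil;
import android.os.Bundle;

public class MainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // activity_main.xml is assumed to be wrapped in <layout> and to declare
        //   <variable name="user" type="...User"/>
        //   <variable name="callbacks" type="...Callbacks"/>
        // in its <data> block; ActivityMainBinding is generated from that layout.
        ActivityMainBinding binding =
                DataBindingUtil.setContentView(this, R.layout.activity_main);

        binding.setUser(new User());
        binding.setCallbacks(new Callbacks()); // generated setter for the "callbacks" variable
    }
}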
https://medium.com/google-developers/android-data-binding-the-big-event-2697089dd0d7
CC-MAIN-2017-34
en
refinedweb
5 Cloud Computing 5.1 Introduction If computers of the kind I have advocated become the computers of the future, then computing may someday be organised as a public utility just as the telephone system is a public utility. The computer utility could become the basis of a new and important industry. John McCarthy. McCarthy's vision took almost half a century to realise. Today, computing as a public utility takes the form of cloud computing and epitomises how businesses demand IT be delivered. Cloud computing has certainly captured a great deal of attention and following over the past three years. IT giants like HP, IBM, Intel and Microsoft have evolved their business strategies around cloud computing. Traditional software houses like Microsoft and SAP are offering their highly successful software suites as cloud services to address growing customer demand for utility-based charging and collaboration. Cloud computing is reiterated on a daily basis in both IT and business news. The theme was mentioned in varying degrees in 71 of 77 Gartner's hype cycles for 2011, with 3 of them dedicated to different aspects of cloud computing. Forrester Research estimated the global market size for cloud computing at US$241 billion, with public cloud constituting two-thirds of that market size, or US$159.3 billion. 5.2 Market Trends 5.2.1 Sheer Volume of Content Transferred The sheer volume of content to be transferred, due to ubiquity of access and demand for media, will strain any unoptimised content delivery mechanism. Since online presence is critical to businesses today, the need to deliver content quickly and in time to their customers will continue to be a priority. Nokia Siemens Networks predicts that broadband data will increase 1,000-fold; Cisco also projected a 90% compound annual growth rate (CAGR) of global traffic for video and 65% for data. Both projections outpace Moore's Law, so the delivery of content in the long term cannot be fulfilled simply by buying better servers. Content delivery and server utilisation must be optimised to satisfy that demand for information. The types of content transferred will also expand, from different sources like intelligent, networked devices (Internet of Things) and contextual data, to processed information and interactive media (e.g. Internet TV). By 2014, 50% of all data will have been processed by the cloud. Cloud computing and the global distribution of public clouds can provide that last mile delivery. 1 Harold Abelson. Architects of the Information Society: Thirty-Five Years of the Laboratory for Computer Science at MIT. U.S.A.: Wonder Book Publishing.
Casual supercomputers fulfill both the demand for faster turnaround time and a reduction of the cost of ownership. These compute behemoths are assembled solely for the compute task, are flexible in the amount of performance required, and are released once the task is completed. Cycle Computing made news with its 30,000 core cluster at a cost of US$1,279/hour. The cluster was constructed on-demand on Amazon Web Services and was torn down eight hours later Seamless Experience from Personal Cloud and BYOD In Singapore, 30% of organisations have implemented cloud computing to address the need arising from consumer devices. The ubiquity of an untethered Internet and affordability of powerful, easy-to-use devices has driven productivity as executives stay connected to the company anytime, anywhere and using any device. Figure 1: Frictionless Sync Defines the Consumer Cloud Experience As with the proliferation of the Internet, the advent of personal clouds represents a tectonic shift in the personalisation of IT. The seamless experience and integration of data, social and 2 3 communication services into a single smart device will be the new benchmark for corporate IT offerings. Smarter mobile devices are replacing the laptop in providing a more convenient way to access information. It is not uncommon today to find business executives preferring the more mobile tablet devices to a full-featured laptop. Consistency of data across different devices will be a challenge IT departments need to resolve. Today, Android devices pull configuration and personal data from a user s Google account. In a similar fashion, IOS devices share information using Apple s icloud; user data stored in dropboxes can be accessed from any smart device. Automatic synchronisation of data across personal devices dilutes lockdown to a single device. Expectation of the same convenience will force IT to rethink what personal computing entails. Experience with the benefits of such personal cloud services will increase both demand and acceptance of such services in corporations. Cloud services provide a natural avenue for enterprises to support the resulting myriad of devices and services. Advances both in technology and processes will address the current concerns and shortcomings of cloud computing, e.g. reliability, interoperability, security and privacy, service level enforceability, and predictability of charges. Such integration of technology with process and policies, together with adoption of cloud standards, will position cloud computing for wider appeal. Insatiable compute demand of an ever data hungry world brought about by ubiquitous and faster connectivity, the demand for instant performance, faster, more sophisticated analytics and more connected businesses, and extrapolation of the seamless integration provided by personal clouds with the personalisation of IT will stimulate cloud adoption An Arduous Journey toward Cloud Computing Cloud computing does not just waive the cover charges. It outlines the underlying architectures upon which services are designed and applies equally to utility computing and internal corporate data centres. Cloud computing evolved from a myriad of technologies including autonomic computing, grid computing, multi-tenancy, service oriented architecture (SOA) and (network, server and storage) virtualisation. It abstracts design details from the cloud user, presenting compute as an on-demand service. 
This section reminisces on the passage of the vision of Compute as a Utility to its realisation as Cloud Computing. Since McCarthy s lecture in 1961, several notable attempts have been made to redefine how compute is used and delivered. The first virtual machine appeared with IBM s CP/CPM that was productised as VM/370 in Back then, the issue of multi-tenancy was addressed either as an integrated time-shared system with sophisticated segregation of privileges between users, or with each user having his own computer walled within a virtual machine. By the mid 1980s, general interest in compute as a utility dipped to a low with the advent of affordable personal workstations that fulfilled most compute demands. Grid computing, coined in early 1990s, revived the concept that compute should be accessible like an electric power grid. Landmark projects like and Globus Toolkit in 1999 and 1998 respectively laid the groundwork for tapping unused compute resources and 3 4 synchronising these compute jobs. Grid computing was the precursor to coordinating compute in the cloud. By 2000, grid computing had taken off in research and development (R&D). IT vendors like IBM, HP and Sun Microsystems started offering grid computing services. Most notably, Sun Microsystem started marketing Sun Cloud 2 for US$1 per CPU/hour. This was the first time compute was available commercially as a utility on a global scale. In a controversial article published in the Harvard Business Review in 2003, Nicholas Carr declared, IT does not matter. He posited that IT was becoming commoditised and would be delivered like other utility services. Virtualisation technologies started gaining traction in 2005 as a means to improve data centre efficiencies by consolidating workloads. Network, server, and storage virtualisation providers collaborated to deploy autonomous technologies that enabled rapid provisioning of services in a virtualised data centre environment. This paved the way for the internal corporate data centre to transit to cloud computing. In 2006, Amazon launched its Elastic Compute cloud (EC2) and Storage (S3) services that offered compute and storage rental with two distinct features. These services use a pricing model that charged per use, and services were provisioned (and released) within minutes of payment. Computing was now accessible as a utility. Amazon was soon followed by Google Apps that offered an office suite as a service, Google App Engine that provides a J2EE platform charged by CPU cycle, Microsoft s Azure platform service, and a flurry of web hosting providers like Rackspace and 1and1 3. There was just as much interest in providing software that enables enterprises to run cloud services in their internal data centres. Examples include Joyent s SmartOS, 3tera s AppLogic, Microsoft s Windows Azure, and OpenStack. In this surge of technologies, it is easy to forget that cloud computing is not entirely a technology play. Many other adjacent advances, especially in IT management, are necessary for a successful cloud deployment. Efficiencies gained must be balanced against the loss of control. The relationship between IT organisations, their technology partners and IT s customers, must be managed through an effective IT governance structure. After an arduous 50 years of attempts with varying degree of success, John McCarthy s vision to organise compute as a public utility has finally been realised with cloud computing. 
The pervasion of IT into businesses and personal lives conduced the demand and reliance on IT, and the corresponding availability of IT expertise, into a global desire for compute on demand. Additionally cloud computing allows data centre owners to realise many of its benefits by facilitating clouds to be built within their facilities. A vision born when computing resources were scarce and costly will be tested against today s demand for increasingly instant results. 2 3 Wikipedia. Sun Cloud. [Online] Available from: [Accessed 9th July 2012]. Linda Leung. More Web Hosts Moving to the Cloud. [Online] Available from: [Accessed 9th July 2012]. 4 5 5.2.5 Cloud Computing Today Service Models NIST broadly categorises clouds into three service models: Cloud Software as a Service (SaaS). Consumers are given access to the provider s applications that runs on a cloud infrastructure. Examples of SaaS include Google s GMail, Microsoft 365 and Salesforce.com. Consumers of these SaaS access the applications using a variety of clients such as a Web browser or even a mobile application. Management of infrastructure, operating environment, platform services, and application configuration are left to the cloud provider. Cloud Platform as a Service (PaaS). Consumers are given access to platform on which they can develop their custom applications (or host acquired applications). Google AppEngine, Microsoft Azure and Force.com are examples of PaaS. Consumers of PaaS launch their applications using the specific programming platforms supported by the specific PaaS. The PaaS provider takes care of delivering the programming platform and all underlying software and hardware infrastructure. Cloud Infrastructure as a Service (IaaS). Consumers are given an operating system instance on which they can install software and set up arbitrary services and applications. The IaaS provider takes care of the server hardware and network, usually using a virtualised environment. Responsibility of maintaining the operating system usually falls on the consumer. The division of responsibilities between the provider and consumer for each of these service models compared against a virtualised traditional IT environment is illustrated in Error! Reference source not found.. 5 6 Applications Data Runtime Middleware OS Virtualisation Servers Storage Network Traditional IT Managed by You Infrastructure (IaaS) Platform (PaaS) Software (SaaS) Delivered as a Service Self Service Network Accessed Resource Pooling Rapidly Elastic Measured Service Figure 2: Division of Responsibility by Service Models Beyond these three service models, numerous cloud services have emerged. Notable cloud offerings include Security as a Service, Data as a Service, Desktop as a Service, Storage as a Service, Communications as a Service, Database Platform as a Service and Service Delivery Platform as a Service. These offerings may be viewed as a specific implementation of the three Service Models depending on the level of involvement of the cloud service consumer Cloud Deployment Models Cloud deployment affects the scale and hence, efficiency, of the cloud implementation. Private Cloud is a cloud infrastructure operated solely for a single organisation. Such single tenant clouds may be managed by the organisation or a third party and may be hosted within the organisation s premises or in a third party data centre. Public Cloud is a cloud infrastructure operated by a cloud provider that is available for public consumption. 
These multi-tenant clouds serve a variety of customers and usually enjoy the largest scale and utilisation efficiency. Amazon Web Services and Microsoft Azure are two well-known public cloud providers. Community Cloud is a public cloud infrastructure serving a specific industry or community that share a common trait or set of concerns (e.g. security and compliance requirements, or a certain common application). An example is Sita s ATI Cloud that provides airline employees online access to infrastructure, desktop and other services. Hybrid Clouds are clouds that deployed across two or more cloud deployment models. Successful hybrid cloud implementation requires integration that enables data and application portability between the different cloud services. The most common hybrid clouds are composed of private and public clouds where workload is overflowed from the private cloud into the public cloud What is available in the Public Cloud today Most private clouds today are IaaS although enterprises who have standardised their technology architectures may provide PaaS to their developers. The public cloud is 6 7 experiencing tremendous growth and is forecasted to represent two-thirds of the market by It is not hard to imagine that the diversity of offerings from public cloud providers has exploded from a handful of IaaS to hundreds of PaaS to thousands of SaaS. The consolidation of operating systems has gravitated most cloud providers toward variants of Microsoft Windows or Linux, with a handful delivering other Unix variants (e.g., Oracle s Solaris). SaaS PaaS IaaS Salesforce, Google Apps, Microsoft 365, Dropbox, Gmail, icloud, BigQuery, DbaaS, Data-aaS, PayPal, Azure, Google App Engine/API, J2EE, PHP, Ruby on Rails, Facebook API, force.com, OpenShift, Engine Yard, OpenStreetMaps,... Windows, Linux, Unix The PaaS offerings consist of either popular middleware platforms (e.g. Microsoft s.net framework, J2EE, and Rails), or programmatic extensions from successful SaaS offerings (e.g force.com, and Google App Engine). A noticeable trend is for public cloud vendors to expand beyond their traditional service models into adjacent service models as depicted below: 7 8 Salesforce.com expanded from a SaaS provider for CRM to provide programmatic application programming interface (API) through force.com, and later acquired Heroku PaaS in 2011; Google expanded from a SaaS provider (Google App) who expanded into PaaS with its Google App Engine in 2008, and then into IaaS with Google Compute Engine in 2012; Amazon expanded from a very successful IaaS into platforms with its Elastic Beanstalk; and Microsoft s Azure expanded from PaaS into IaaS. It is also pursuing enterprises with its Microsoft 365 SaaS solution. Apparent from the above examples, the expansion to adjacent service model builds on the existing success of the cloud providers. The expanded service model is tightly integrated vertically with the provider s existing services. Elastic Beanstalk from Amazon offers its customers new capabilities and the added convenience of standard platforms. Google and Salesforce s expansion into PaaS provides a programmatic interface that affords their customers better flexibility. 5.3 Cloud Economics: Scale and Elasticity Notwithstanding technological implementations, the advent of cloud computing changes the economic landscape of computing by availing compute on an unprecedented scale. 
The utility prices of cloud computing are becoming the benchmark for enterprise IT, forcing corporations to rethink the value proposition of running their own IT infrastructure.

Scale Matters

A large cloud setup takes advantage of significant economies of scale in three areas:

Supply-side savings in cost per server;

Demand-side aggregation, which increases utilisation by smoothing overall compute demand variability; and

Multi-tenancy efficiency, which distributes application management and server costs across more tenants.

Consider Microsoft's 700,000 square-foot Chicago Cloud Data Centre. The facility currently houses 224,000 servers in 112 forty-foot containers, and has a maximum capacity of 300,000 servers. This US$500 million facility is operated by a skeleton crew of only 45. On such an astronomical scale, Microsoft reduces the cost of power both by negotiating a favourable bulk purchase price and by locating the facility to reduce cooling power loads using ambient air. Furthermore, standardisation and automation allow Microsoft to operate the entire facility with a minimal crew, reducing labour cost. New servers are ordered by containers of 1,800 to 2,500 servers, allowing Microsoft to enjoy massive discounts over smaller buyers.

Single-tenant environments achieve an average utilisation of less than 20%. Compute resources are generally provisioned for peak demand and fail to sustain high utilisation because of the random nature of workload, and time or seasonal peaks depending on the organisation's locality or industry. In addition, workloads differ greatly in resource profiles, making it even harder to optimise resource utilisation. Scale vastly improves the utilisation of available compute resources. Large public cloud operators are able to maintain an average utilisation of around 90% without violating their service level agreements (SLAs). Operating a massive cloud allows the cloud provider to aggregate compute demands from geographically diverse clients into a shared compute resource pool. This aggregation smooths the variability of compute demands from individual clients by offsetting peak demand from one client with low demand from others. Further aggregation can occur if clients span different industries with different seasonal demands. The very act of consolidating all these demands pushes utilisation beyond what can be achieved on a smaller scale.

Finally, multi-tenancy of a centrally managed application or platform improves cost efficiency. Management costs can be distributed across a large number of customers instead of being borne per customer in a single-tenant environment. Hosting more application instances amortises server overhead, and the resulting savings can be passed on to customers.

Supply-side savings reduce the cost of ownership per server as the cloud provider gains bargaining power because of their scale. Demand-side aggregation and multi-tenancy efficiencies optimise the total cost of ownership further by drastically improving utilisation. Large public cloud providers with 100,000 servers enjoy an 80% lower total cost of ownership (TCO) compared to their smaller counterparts with 1,000 servers. The ubiquity of the Internet has extended the reach of cloud providers to a global market of cloud consumers. The aggregated compute requirements from this customer pool are many times those of any single enterprise.
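The demand-side aggregation effect is easy to see with a toy simulation. The sketch below is illustrative only: it is not from the source report, and the workload numbers are invented. It compares the utilisation achieved when every tenant provisions for its own peak against a shared pool provisioned for the peak of the aggregate demand.

```python
# Illustrative only: pooling many independent, bursty workloads lets a provider
# run at much higher utilisation for the same peak headroom.
import random

random.seed(42)
CLIENTS = 1000          # number of tenants sharing the pool
HOURS = 24 * 30         # one month of hourly samples

def client_demand():
    """A bursty tenant: a low baseline with occasional large peaks."""
    base = random.uniform(1, 4)                 # baseline servers needed
    return [base + (random.random() < 0.05) * random.uniform(10, 20)
            for _ in range(HOURS)]

demands = [client_demand() for _ in range(CLIENTS)]

# Dedicated capacity: every tenant provisions for its own peak.
dedicated_capacity = sum(max(d) for d in demands)

# Pooled capacity: the provider provisions for the peak of the aggregate demand.
aggregate = [sum(d[h] for d in demands) for h in range(HOURS)]
pooled_capacity = max(aggregate)

total_used = sum(aggregate)
print("Dedicated utilisation: %.0f%%" % (100 * total_used / (dedicated_capacity * HOURS)))
print("Pooled utilisation:    %.0f%%" % (100 * total_used / (pooled_capacity * HOURS)))
```

With these made-up numbers the dedicated case lands in the same region as the sub-20% utilisation quoted above, while the pooled case approaches the roughly 90% figure, because individual peaks rarely coincide.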
This massive consolidation of compute allows cloud providers to operate at unprecedented efficiencies and benefit from an equally unprecedented economy of scale.

Elasticity Changes the Game

A key benefit of cloud computing is its ability to match computing resources closely to workload by adjusting resources at a pre-defined granularity (e.g. adding or removing server instances with EC2). This is accomplished within minutes in a cloud environment, compared to days or weeks in traditional IT, allowing very fine-grained adjustments that achieve unprecedented utilisation. This shift of compute cost from a capital expense to an operating expense has a far-ranging effect on cloud consumer behaviour. Already, examples of how industry leaders are changing their game with elasticity abound:

In cloud computing, 1,000 CPU-hours cost the same whether you use 1 CPU for 1,000 hours or 1,000 CPUs for an hour. Pixar Animation Studios takes advantage of this aspect of elasticity to reduce the turnaround time needed for rendering their movies.

When Zynga launched FarmVille, it expected only 200,000 daily active users within the first two months. Instead, the game proved a runaway success and gathered 1 million users every month. Zynga's choice to implement the game on an elastic cloud saved them from certain embarrassment and allowed their game to scale as the number of users grew rapidly.

Cycle Computing built a 30,472-core cluster on 3,809 AWS compute instances by salvaging unused spot instances. The supercomputer lasted seven hours and at its peak cost only US$1,279 per hour.

The New York Times used 100 Amazon EC2 instances to process 4TB of TIFF into 11 million PDF documents within 24 hours and for under US$240.

The above examples are the quintessence of leveraging cloud elasticity to avoid high upfront investment in hardware, reduce the recurring costs of over-provisioned network bandwidth, reduce turnaround time to deal with unexpected spikes in compute requirements, and manage costs as demand falls. Cloud elasticity ultimately reduces business exposure and increases business agility to market changes. Whereas scale affects the bottom line of a business by bringing down costs, cloud elasticity impacts both the bottom line, by using IT budgets more efficiently, and the top line, by allowing low-risk experimentation. In the long term, IT agility enabled by cloud computing will transform the business landscape.

5.4 The Future with Cloud Computing

In The Big Switch, Nicholas Carr pointed to the business of supplying computing services over the Internet, from MySpace, Facebook and Flickr, to YouTube, Google Docs and Box, to describe his vision of a World Wide Computer:

"All these services hint at the revolutionary potential of the new computing grid and the information utilities that run on it. In the years ahead, more and more of the information-processing tasks that we rely on, at home and at work, will be handled by big data centres located out on the Internet. If the electric dynamo was the machine that fashioned twentieth century society, that made us who we are, the information dynamo is the machine that will fashion the new society of the twenty-first century." (The Big Switch, Nicholas Carr)

The machine will deliver a compute experience that is seamless and integrated. The experience extrapolates from current packaged cloud services like iCloud, Dropbox and Google Search.
In this computing nirvana, the machine automagically coordinates data from different sources, invokes software services from different software providers, and orchestrates hardware and software compute platforms in the most optimised fashion. Security policies on data and intermediate results will impose constraints on the orchestrated services. Service levels will be directly translated from business requirements and will be used to identify the resources necessary to deliver the required performance. Workload and usage profiles can be automatically learnt and fed back into the resource placement strategy. Overarching these components is the cloud brokerage service that coordinates the resources and automatically reconfigures them to best match the security and service level specifications of the user. The broker leverages common programmatic interfaces to query the status of each cloud and to deploy tasks into different clouds. Such brokers will be a service in the World Wide Computer.

Implementation trends today provide a glimpse of this future. The recent interest in Big Data saw cloud vendors like Amazon and Microsoft deliver turnkey Hadoop services in their respective clouds. The industry can look forward to more turnkey services that will increase the lure of cloud computing beyond just IaaS. Such a vision may still be decades away, but the various pieces are coming together today. The following table illustrates the current maturity of each resource:

Services
- Vision: Services will be the way applications and software are presented in the machine. Services will be physically detached from the data and compute. The cloud customer will be able to invoke any software components from different software providers, either as a pre-programmed package or in an ad-hoc fashion.
- Current state / noteworthy work: Software is available as turnkey VMs. Different compute needs like Hadoop and databases are becoming available as a service (e.g. EMR). Business software as a service (e.g. Google Apps) is gaining acceptance. Consolidation of OS platforms and common libraries.
- Challenges: Software requires specific platforms. Specific schemas are needed due to presumptions in the data. SaaS tends to be a single monolithic block that is difficult to integrate with other services. Different charging models and the cloud readiness of enterprise software. Most SaaS deployments tend to ignore security frameworks and require different handling.

Data
- Vision: Data will be available to software services in a variety of formats with a rich description of purpose and intent. Such data can be structured or unstructured and without restriction on its proximity to the task.
- Current state / noteworthy work: Common data stores are available within a cloud provider's environment but not inter-cloud. Some common formats like XML, JSON, and SQL-like access APIs. Distributed file systems with a common namespace, e.g. HDFS, Global FS.
- Challenges: Rich data description. Common access methods. Data distribution. Location of data is generally not well controlled. Removal of data cannot be guaranteed.

Compute
- Vision: Compute will be orchestrated per task and optimised on the fly based on the location of the data, application software, and security specification. The cloud will automatically figure out the appropriate platform and sizing to achieve the specified compute response time and service level.
- Current state / noteworthy work: IaaS is generally well defined and understood. Automatic and live migration of compute is increasingly adopted.
- Challenges: Platforms are still evolving and fluid.

Networks
- Vision: Networks will be orchestrated around security policies and service level requirements. A subnet as we know it today will span multiple cloud providers.
- Current state / noteworthy work: The Internet now connects most of the world and permeates through fixed and mobile connections. IPv6 provides addresses for the next wave of devices and ensures end-to-end connectivity. Lossless networks simplify connectivity by converging both storage and network.
- Challenges: Current restrictions around interoperability and performance across long distances. Continued demands for higher speeds and better reliability, especially across longer distances. Software defined networks are still not widely adopted. Lack of controls on the performance of partner networks.

Security
- Vision: Security policies that prescribe confidentiality, integrity and availability. Legal boundaries will be described and enforced.
- Current state / noteworthy work: Standards are still evolving and generally immature. Disaster recovery practices are adopting cloud.
- Challenges: Walled boundaries for data (legal or classification). A programmatic way to tag the security classification of cloud providers. Security policies, practices and the responsibility divide between user and provider are uncertain.

Service Level
- Vision: Service level requirements will identify the compute resources needed to deliver the required performance, based on the specific workload profile.
- Current state / noteworthy work: Standards are still evolving and generally immature. SLAs are generally specified by the cloud provider. Responsibility to mitigate outages is usually left to the cloud user.
- Challenges: There needs to be a better ability to control SLAs from a service perspective (e.g. coordinating across cloud providers to reach a required SLA). Fine-grained performance requirements (e.g. response time) that can be observed.

Most of the challenges identified above have been addressed to a limited degree, especially within a more established cloud provider's environment. The most notable examples are Google's integration of Google Compute Engine (IaaS), Google App Engine (PaaS) and its application services like Google Drive, Google Fusion Tables and Google Maps services. Single cloud provider implementations alleviate the pain of coordinating multiple cloud services. Nicholas Carr's vision of a World Wide Computer points us to an interoperable Cloud of Clouds: a cloud that marries services, compute and networks from different cloud providers, and that automatically arranges available resources to meet security and service level requirements.

5.5 Technology Outlook

In line with the vision of a seamless, automatic and service-oriented cloud, several technologies in the pipeline are ripe for adoption in the foreseeable future.
[Figure 3: Technology Adoption Map. The map plots technologies against the resources and requirements identified earlier (services, data, compute, network, security, SLA) across three adoption horizons: less than 3 years, 3~5 years and 5~10 years. Technologies shown include community clouds, DevOps tools, cloud standards, personal cloud services, cloud optimised apps, interoperability, Data-as-a-Service, cloud big data, public cloud storage, virtualisation, hybrid cloud, cloud bursting, Internet 2.0, software defined networking, Security-as-a-Service, IAM-as-a-Service, cloud security standards, data tracking, SLA-based charging, cloud brokers and federated clouds.]

The Technology Adoption Map above illustrates technologies according to the resources and requirements identified earlier. Adoption is generally defined as the moment when accelerated deployment of the specific technology is undertaken by its target consumers. The timeline of adoption is estimated as relatively immediate (less than three years), mid term (three to five years) and longer term (five to 10 years). The following sections are organised according to this timeline and discuss each technology in greater detail.

Less than three years

Most of the technologies that will be adopted within the next three years are already showing signs of maturing today. Examples of deployments of these technologies already exist, even though they might not have hit the mainstream.

Community Clouds

Resource/Requirement: Service

NIST defined the community cloud as a deployment model. A community cloud infrastructure is shared by several organisations and supports a specific community that has shared concerns (e.g. mission, security requirements, policy, and compliance considerations). A community cloud will target a limited set of organisations or individuals.

The key to understanding the community cloud is its ability to address the concerns of the community it serves in unity. Compliance of such clouds is scalable and applies to all organisations adopting them. As a result, such cloud deployments will appeal to organisations with well-defined security and compliance requirements, e.g. government clouds. Today, pockets of community clouds have already evolved. Notably, the Singapore Government's G-Cloud aims to address the common compute needs of local government agencies by providing different security levels that closely match the tiered architecture commonly found in Web services provided by the respective agencies. Another example is SITA's ATI Cloud. It aims to provide a set of applications to airline customers while meeting the compliance and standards required of the airline industry.

While today's community cloud may leverage public cloud infrastructures, private clouds remain the cornerstone of most implementations. Meeting the compliance requirements of specific communities is the primary reason for such deployments. Delivering these services using private clouds mitigates many of the challenges posed by public clouds.

Multi-tenancy of community clouds will create value for the cloud. Sharing of data between different members of the community can bring about huge benefits that were not possible in the previously siloed approach. One such example is the healthcare community cloud. Shared health data provides convenience to patients and allows pre-emptive health management. The universal view of health data improves the management of clusters of contagion. While it is possible to share data without the cloud, building the service around a community cloud prevents the natural fragmentation of data early in development. The cloud implementation also provides for better capacity management and growth.
Central to the implementation, however, is the fact that the community cloud provider is really a brokerage service that has mapped the compliance requirements onto a resource pool. With the extended capabilities of cloud brokerage services, smaller communities will be able to provide their own unique cloud offerings.

Enablers

Many organisations operating within a well-defined vertical already share information using some SOA frameworks. The movement to a community cloud typically involves an industry leader establishing a consortium that will move specific operations into the cloud. Examples of such leaders include SITA and Changi Airport. Community clouds leverage high commonality in data, tools, compliance, and security requirements, enhancing collaboration and coordination of business processes between participating organisations.

Personal Cloud Services

The proliferation of smarter devices and the ubiquity of mobile connectivity have enabled personal devices to participate in delivering cloud services. Indeed, personal cloud services extend the availability of data in the cloud, including personal data (e.g. contacts and personal preferences), messaging services (e.g. e-mail and electronic chat), and media (e.g. music and videos), into personal devices to create a seamless experience regardless of where or how it is accessed. Google's Android and Apple's iOS devices are connected to their user's Google and Apple accounts to provide automatic data synchronisation. This decouples the user's data from their devices, making the latter a portal to the data that is stored in the cloud. The resultant experience changes the way we work and play. A calendar SaaS may provide synchronisation to a personal device. When the device participates in the SaaS, location information from the device can augment information about a meeting to provide a context-aware reminder. The travel time to the meeting venue can take into account traffic information, and the user can be informed when it is time to leave for the next meeting and the best route given the traffic conditions.

Enablers

Personal cloud services are enabled by a ubiquitous mobile Internet and smarter consumer devices. The miniaturisation of sensors has enabled a myriad of sensors to be packed into the mobile device. These sensors range from simple magnetic field sensors to multi-dimensional accelerometers to sophisticated GPS systems. The device software provides access to these sensors through applications.

Virtualisation

Virtualisation technologies started maturing as far back as 2005 but only saw mainstream adoption in Singapore as late as […]. This class of technologies will continue to gain traction in enterprise data centres and with hosting providers, and is a prerequisite of cloud computing. Virtualisation is not restricted to the well-discussed server virtualisation, though, and has found its place in various aspects of enterprise computing for many years now. Storage virtualisation is a mature technology that was once restricted to high-end storage systems like the Hitachi Data Systems 9900 and has since trickled down to lower cost storage systems. In addition, network equipment has provided virtualisation in the form of virtual local area networks (VLANs) since early […]. Advances in virtualisation management software provide a tight integration between storage, compute and network.
VLANs are pushed into the hypervisor layer of the virtualised environment to improve operational efficiency by allowing virtual machines (VMs) from different subnets to coexist on a single physical host. Network administration is automated, with the hypervisor provisioning the correct VLAN when the VM is started. At the same time, storage is provisioned as VMs are created. Software images are cloned instantaneously from a reference pool.

Virtualisation is a precursor to, and a key building component of, cloud computing. There is a growing trend for software vendors to deliver their software as a standard VM instance (e.g. in OVF format). Examples of traditional software delivered as virtual appliances are Oracle Database Templates, Ruby on Rails, SugarCRM, and even security products like the Barracuda firewall. Cloud vendors can provide turnkey instances of these appliances that their users can simply fire up, avoiding the complexity of installation and setup and hence reducing support cost.

A noteworthy trend today is the availability of converged virtualised architectures, where hardware providing compute, storage and network, and software providing virtualisation and management, are delivered as an integrated solution. The entire stack of software and hardware is prequalified to interoperate. Such a unified architecture greatly speeds up deployment and operational management.

Another virtualisation technology experiencing growth in the cloud is Virtual Desktop Infrastructure (VDI). These virtual desktop instances enjoy ease of management, better security and rapid provisioning compared to their physical counterparts. Korea Telecom (now known as KT Corporation) took advantage of their proximity to enterprise customers and offered VDI services hosted in their cloud for US$25 per user per month. Server and storage virtualisation are well adopted as of this writing.

Security-as-a-Service

Security-as-a-Service is the delivery of cloud-based services that aid in improving the security or governance of the consuming organisation. Security services range from Web or content filters, file encryption and storage, and logfile analysis, to intrusion detection, vulnerability assessments and disaster recovery, to security policy and governance enforcement frameworks. The services may be consumed in another cloud, in an enterprise data centre, or directly on end-user devices.

Cloud-based security services are not new. Services that have already been in mainstream adoption for years include e-mail filtering and distributed denial of service (DDoS) detection and prevention. The benefits of delivering these services in the cloud, compared to their enterprise counterparts, are their efficiency, effectiveness and flexibility. The specialisation of task to deliver a specific service and the massive amount of data collected from their clients allow cloud security vendors to detect new threats more accurately and react more quickly than most local installations.

Providing Security-as-a-Service is a natural extension of existing security services that are already outsourced. Services like penetration testing and vulnerability assessment already use automated tools. The tools today cover an entire range of services including site monitoring, application and network vulnerability testing, and reporting services. Security providers can readily maintain these tools as a service.
Finally, the adoption of cloud and the standardisation that comes with virtualised servers have created an opportunity for disaster recovery sites to differ in setup from the production site. Key services can be provisioned and restored in a cloud service. Disaster Recovery-as-a-Service (DRaaS) reduces time-to-restore using ready servers and eliminates the cost of maintaining a duplicate set of systems to recover to. Furthermore, recovery procedures can be tested and practised without affecting production infrastructure, at only the cost of provisioning the systems for the duration of the practice.

Enablers

Security is the top concern of CIOs and the biggest inhibitor to cloud adoption. The broad nature of Security-as-a-Service offerings means that adoption varies greatly depending on the specific security application. Cloud-based implementations like DDoS detection and prevention, secure e-mail and Web gateways, and cloud-based application security testing are examples of very established cloud-based security services.

Inhibitors

Concerns are elevated by the lack of transparency in the cloud and fears about leakage of information via covert channels when a peer is compromised in the multi-tenant environment. One such attack was demonstrated when a researcher successfully located his chosen target compute instance in Amazon Web Services by iteratively launching compute instances until a co-located instance was found. He proceeded to take the target compute instance down. [4] Such inherent risk in a multi-tenant environment prohibits the adoption of Security-as-a-Service for sensitive data.

Identity/Access Management-as-a-Service

Identity and Access Management (IAM) as a Service (IAMaaS) refers to SaaS forms of IAM. IAMaaS requires minimal, and often no, on-premise enterprise hardware and software. IAMaaS is critical for the federation of identity across multiple cloud providers. Insofar as best-of-breed selection of clouds is concerned, an organisation may select multiple cloud providers that specialise in their specific tasks. This entails either maintaining different identities for individual services, or having the cloud services use a single identity. IAMaaS provides the coordination required to share identity across the services. The adoption of IAMaaS will be driven both by the adoption of a myriad of cloud services and by the consolidation of islands of identity that result from mergers and acquisitions, or where identity was just not centrally coordinated.

Enablers

IAMaaS could become the holy grail of a global federated identity. The growing support for authentication and authorisation protocols like OpenID and OAuth presents an opportunity to realise a distributed global identity system. Typical use cases are when Web-based services re-use the authentication services of their more popular counterparts (e.g. identifying a user by their Facebook account) and relieve their users of the need to remember another password.

Footnote 4: Thomas Ristenpart. Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds. [Online] Available from: [Accessed 9th July 2012].

Single corporate identities will also simplify security controls, especially in a global implementation where identities must be kept synchronised. Centrally managed identities allow for faster detection and management of identity compromises. IAMaaS also provides an opportunity for implementers to institute a corporate-wide identity and access policy that can then determine the authentication and authorisation architecture.
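To make the OAuth-based federation mentioned above a little more concrete, here is a minimal sketch (not from the source report) of an OAuth 2.0 authorization-code flow, the pattern that lets a service re-use an identity provider's accounts instead of issuing its own passwords. The provider endpoints, client credentials and scopes below are placeholders, not any real provider's API.

```python
# Illustrative sketch of delegated login via OAuth 2.0 (authorization-code flow).
# All URLs and credentials are hypothetical placeholders.
from requests_oauthlib import OAuth2Session

CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"
AUTHORIZE_URL = "https://idp.example.com/oauth/authorize"
TOKEN_URL = "https://idp.example.com/oauth/token"

oauth = OAuth2Session(CLIENT_ID,
                      redirect_uri="https://myapp.example.com/callback",
                      scope=["profile"])

# 1. Send the user to the identity provider to log in and grant access.
authorization_url, state = oauth.authorization_url(AUTHORIZE_URL)
print("Visit:", authorization_url)

# 2. After the redirect back, exchange the authorization code for a token.
redirect_response = input("Paste the full callback URL here: ")
token = oauth.fetch_token(TOKEN_URL,
                          client_secret=CLIENT_SECRET,
                          authorization_response=redirect_response)

# 3. Call a protected resource on the user's behalf.
profile = oauth.get("https://idp.example.com/api/me").json()
print(profile)
```

The point of the sketch is the division of labour: the relying service never sees the user's password, only a token scoped to what the identity provider agreed to share, which is what makes identity federation across many cloud services workable.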
Inhibitors

Existing IAM solutions may hinder the implementation of cloud-based IAM. Implementation and technical differences aside, many identity solutions are local to individual departments and will require tremendous effort to sanitise for corporate-wide usage. Existing applications may not be compatible with the new authentication and authorisation mechanism and may need to be updated. Finally, governance or policies over the location of IAM data are hindering massive adoption of IAMaaS.

Cloud Brokers

A cloud broker is an entity that manages the use, performance and delivery of cloud services and negotiates relationships between cloud providers and cloud consumers. As cloud computing evolves, cloud providers seek to differentiate their services by creating a myriad of offerings with varying features. For instance, IaaS has evolved to cater to different workloads. The complexity of selecting the appropriate service explodes with PaaS and SaaS offerings and when different services are deployed. Cloud brokerage services are increasing in demand as the integration of cloud services becomes too complex for cloud consumers. Cloud brokers bridge the gap between a cloud consumer needing to become an expert in all cloud services, and a cloud provider needing to provide customised services to the consumer. The goal of a cloud broker is to tailor cloud services to the specific requirements of a consumer. A cloud broker provides services in three forms:

Service Intermediation, where the cloud broker enhances the service from a cloud provider by improving a specific capability and providing value-added services;

Service Aggregation, where the cloud broker combines and integrates services from one or more cloud providers into one or more services;

Service Arbitrage, where the cloud broker aggregates services from multiple cloud providers but retains the flexibility to re-evaluate and select services from a different cloud provider.

A cloud broker may enhance the original cloud offering by improving service level and security performance, and add value with centralised billing, reporting or identity management features. Cloud brokerage may be delivered as software, an appliance, a platform or even as a cloud service. Cloud brokers may consume services from another cloud broker. The basis of automated cloud brokers is the cloud control API. These can range from APIs provided by the various cloud implementations, like AWS's libraries for Ruby, PHP or Perl, to APIs that work across different cloud implementations, like Apache's Deltacloud. The feature sets supported by each API will continue to grow, affording greater monitoring and control to cloud brokers and, consequently, cloud consumers. Cloud brokers can be organised around, but are not limited to, technology verticals, geographic regions, or industry verticals.

Enablers

Owing to their business models, cloud brokers tend to adapt rapidly to changes in both demand and supply in the cloud market. The key value of cloud brokers will be in the federation of various cloud services and the enforcement of security and service level requirements. Operating between cloud consumers and providers, cloud brokers are in the best position to orchestrate compute, network and data resources such that consumer requirements are met. Finally, community clouds and other special-purpose clouds can be implemented via a cloud broker.
Inhibitors

The multi-layered approach of cloud brokers abstracts both the complexity and the specific agreements that can be formed between a cloud provider and consumer. This abstraction leaves the cloud consumer without direct access to the cloud provider and forces the cloud consumer to rely on the cloud broker to ensure a cloud provider's conformance with any compliance requirements. The ability of cloud brokers to layer over each other necessitates a common service level and security understanding. This is particularly important where industry-wide policies need to be enforced.

Three to five years

The following technologies are expected to be adopted within the next three to five years. They tend to be fairly established technically but are pending the final environmental or social push to reach mass adoption.

Development-Operations (DevOps) Tools

DevOps (a portmanteau of development and operations) is a software development method that stresses communication, collaboration and integration between software developers and IT professionals. DevOps is a response to the growing awareness that there is a disconnect between development and operations activities. Lee Thompson, former Chief Technologist of E*Trade Financial, described the relationship between developers and operations as a "Wall of Confusion". This wall is caused by a combination of conflicting motivations, processes and tooling. Whereas development is focussed on introducing change, operations desire stability and tend to be change-averse. This contention between operations and development results in release cycle delays, longer downtime of services and other inefficiencies. Such effects are immediately felt by businesses that rely on rapid innovation and the continued delivery of IT services as a competitive edge.

As a methodology, DevOps aligns the roles of development and operations under the context of shared business objectives. It introduces a set of tools and processes that reconnect both roles by enabling agile development at the organisation level. DevOps allows fast and responsive, yet stable, operations that can be kept in sync with the pace of innovation coming out of the development process. Cloud computing provides exactly the environment for DevOps to flourish. APIs that extend cloud infrastructure operations are integrated directly into developer tools. The ability to clone environments at low cost, and to tear down such environments when no longer needed, enhances the developer's ability to test their code in exactly the same environment as operations. Quality assurance processes like unit and functional testing can be integrated into the deployment cycles.

DevOps tools are readily available today. Broadly, these tools allow the creation of platforms that improve communication and integration between development and operations. Puppet and Chef are two tools that can be integrated into the DevOps process to allow rapid deployment in the cloud environment. These operations-oriented tools provide substantial scripting capabilities in the form of recipes, with which repeatable deployment is possible. Such recipes enable developers to replicate production environments, and operations to test new production environments. Sharing such recipes allows developers to better test their code, and operations to involve developers when rolling out new environments. Automated build tools have progressed into continuous integration tools.
Jenkins (previously Hudson), CruiseControl and Bamboo are examples of such tools that integrate source code management, build, self-test, and deployment. While the process is development focussed, operations can be involved by providing their considerations and by expanding the automated test plans with operational requirements. This integration allows development to identify integration problems early in the development process. DevOps tools also include monitoring tools like Graphite, Kibana and Logstash. Such log analysis tools will be crucial for development to understand the performance of their code, especially in a distributed deployment environment like the cloud.

6 Cloud strategy formation

6.1 Towards cloud solutions

Based on the comprehensive set of information, collected and analysed during the strategic analysis process, the next step in cloud strategy formation
http://docplayer.net/1235896-5-cloud-computing-5-1-introduction-5-2-market-trends-5-2-1-sheer-volume-of-content-transferred.html
CC-MAIN-2017-34
en
refinedweb
First, here's a sample (a full reconstruction of the listing is sketched at the end of this answer):

public class Deadlock { static(); } }

In Java, each Object provides the ability for a Thread to synchronize, or lock, on it. When a method is synchronized, the method uses its object instance as the lock. In your example, the methods bow and bowBack are both synchronized, and both are in the same class Friend. This means that any Thread executing these methods will synchronize on a Friend instance as its lock.

A sequence of events which will cause a deadlock is:

1. One thread calls alphonse.bow(gaston), which is synchronized on the alphonse Friend object. This means the Thread must acquire the lock from this object.
2. Another thread calls gaston.bow(alphonse), which is synchronized on the gaston Friend object. This means the Thread must acquire the lock from this object.
3. The first thread then calls bowBack and waits for the lock on gaston to be released.
4. The second thread calls bowBack and waits for the lock on alphonse to be released.

To show the sequence of events in much more detail:

1. main() begins to execute in the main Thread (call it Thread #1), creating two Friend instances. So far, so good.
2. main() then creates and starts two more Threads via new Thread(new Runnable() { ... }).
3. Thread #2 calls alphonse.bow(gaston), which is synchronized on the alphonse Friend object. Thread #2 thus acquires the "lock" for the alphonse object and enters the bow method.
4. Thread #3 calls gaston.bow(alphonse), which is synchronized on the gaston Friend object. Since no-one has yet acquired the "lock" for the gaston object instance, Thread #3 successfully acquires this lock and enters the bow method.
5. Inside bow, Thread #2 calls bower.bowBack(this);, with bower being a reference to the instance for gaston. This is the logical equivalent of a call of gaston.bowBack(alphonse). Thus, this method is synchronized on the gaston instance. The lock for this object has already been acquired and is held by another Thread (Thread #3). Thus, Thread #2 has to wait for the lock on gaston to be released. The Thread is put into a waiting state, allowing Thread #3 to execute further.
6. Thread #3, inside its own bow call, likewise calls bowBack, which in this instance is logically the same as the call alphonse.bowBack(gaston). To do this, it needs to acquire the lock for the alphonse instance, but this lock is held by Thread #2. This Thread is now put into a waiting state.

And you are now in a position where neither Thread can execute. Both Thread #2 and Thread #3 are waiting for a lock to be released. But neither lock can be released without a Thread making progress. But neither thread can make progress without a lock being released. Thus: Deadlock!

Deadlocks very often depend on a specific sequence of events occurring, which can make them difficult to debug since they can be difficult to reproduce.
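For reference, here is a compilable sketch of the kind of example the steps above describe. The exact names and print statements are assumptions (this mirrors the classic Deadlock example from the Oracle concurrency tutorial) rather than the asker's original code, but the locking behaviour is the same.

```java
public class Deadlock {
    static class Friend {
        private final String name;

        public Friend(String name) {
            this.name = name;
        }

        public String getName() {
            return this.name;
        }

        // Locks "this" Friend, then tries to lock the other Friend via bowBack().
        public synchronized void bow(Friend bower) {
            System.out.format("%s: %s has bowed to me!%n", this.name, bower.getName());
            bower.bowBack(this);
        }

        // Also locks "this" Friend.
        public synchronized void bowBack(Friend bower) {
            System.out.format("%s: %s has bowed back to me!%n", this.name, bower.getName());
        }
    }

    public static void main(String[] args) {
        final Friend alphonse = new Friend("Alphonse");
        final Friend gaston = new Friend("Gaston");

        // Thread #2: locks alphonse, then waits for gaston.
        new Thread(new Runnable() {
            public void run() { alphonse.bow(gaston); }
        }).start();

        // Thread #3: locks gaston, then waits for alphonse.
        new Thread(new Runnable() {
            public void run() { gaston.bow(alphonse); }
        }).start();
    }
}
```

Running it usually prints both "has bowed to me!" lines and then hangs forever, which is exactly the deadlock walked through above.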
https://codedump.io/share/nm8l6QWzC7KB/1/how-does-synchronized-work-in-java
CC-MAIN-2017-34
en
refinedweb
A Top Shelf Web Stack—Rails 5 API + ActiveAdmin + Create React App

Blending a rock-solid API and CMS with the absolute best in front-end tooling, built as a single project and hosted seamlessly on Heroku.

Rails is an incredible framework, but sometimes you don’t need all the bulk of the asset pipeline and a layout system. In Rails 5 you can now create an API-only Rails app, meaning you can build your front-end however you like. It’s no longer 100% omakase. Like using create-react-app, for example. And for projects that don’t need CMS-like capabilities, that works pretty great straight away. create-react-app even supports proxying API requests in development, so you can be running two servers locally without having to do any if NODE_ENV === ‘development’ voodoo. Gosh, create-react-app is great.

Still, I’ve worked with ActiveAdmin on a few projects, and as an interface between you and the database, it’s pretty unmatched for ease of use. There are a host of customisation options, and it’s super easy for clients to use if you need a CMS. The issue is that removing the non-API bits of Rails breaks it. Not ideal. But never fear, all is not lost! With a couple of steps you can be running a Rails 5, API-only app, serving your create-react-app client on the front end, with full access to ActiveAdmin. We’re going to build it, then we’re going to deploy it to Heroku, and then we’re going to celebrate with a drink. Because we will have earned it.

Given that theme, we’re going to build an app that shows us recipes for delicious drinks. It’s thematically appropriate! So, what are we going to use?

- Create React App: All the power of a highly-tuned Webpack config without the hassle.
- Rails in API-only mode: Just the best bits, leaving React to handle the UI.
- ActiveAdmin: An instant CMS backend.
- Seamless deployment on Heroku: Same-origin (so no CORS complications) with build steps to manage both Node and Ruby.

And it’ll look something like this: If you want to skip ahead to the finished repo, you can do so here: And if you want to see it in action, you do that here: Let’s get started, shall we?

Step 1: Getting Rails 5 set up

With that delicious low-carb API-only mode

There are a ton of great tutorials on getting Ruby and Rails set up in your local development environment. One of them will work out your operating system, and will walk you through getting Rails 5.0.2 installed. If you’ve already got Rails 5, awesome. The best way to check that is to run rails -v in your terminal. If you see Rails 5.0.1, we’re ready to roll.

Note: At the time of writing the newest version of Rails, 5.1, doesn’t quite work with these steps. Best to stick with 5.0.1 for now.

So, first up, start a new Rails app with the --api flag:

mkdir list-of-ingredients
cd list-of-ingredients
rails new . --api

Right. We are already part of the way to making a delicious cocktail. Maybe use this time to congratulate yourself, because you’re doing great. Once the install process has finished, you can fire up Rails:

bin/rails s -p 3001

It’ll do some stuff, eventually telling you that it’s listening on http://localhost:3001. If you visit it, you should see something like this: There’s even a kitten! So great.

Step 2: Getting ActiveAdmin working

With a couple of small tweaks to Rails

(Thanks to Roman Rott for this bit.) So, incredibly (and awesomely, which is not a word), you can still get ActiveAdmin working like a charm with this set up. Before you can install it, you just need to switch a couple of Rails classes and add some middleware that ActiveAdmin relies on.
First, you’ll need to swap your application_controller.rb from using the API to using Base. As Carlos Ramirez mentions, this requirement is an unfortunate design decision from ActiveAdmin, as now any controllers we make that inherit from ApplicationController won’t take advantage of the slimmed down API version. There is a work around, though. Add a new api_controller.rb to your app/controllers, so you can get your controllers to inherit from ApiController, not ApplicationController. (A sketch of these files appears a little further down, after the client setup.)

From there we’ll need to ensure that the middleware has the stuff it needs for ActiveAdmin to function correctly. API mode strips out cookies and the flash, but we can 100% put them back. In your config/application.rb add these to the Application class. Do not confuse the flash with The Flash, who is likely heavily trademarked. I don’t have the kind of money I’d need laying around if he turns out to be litigious.

You should also move gem 'sqlite3' into the :development, :test group and add gem 'pg' into the :production group. Heroku doesn’t support sqlite, and you’ll need to swap those things around once you get to deploying your app. Why not do it now? Now, keen developers will be sharpening their pitchforks right now, because you should 100% run Postgres locally if you’re developing a Real Application, to ensure your local environment matches your production one. But for the purposes of this exercise, let’s just be roguish and tell no one. They’ll never know.

Bundle and install everything, and then install ActiveAdmin:

bundle install
bin/rails g active_admin:install

You should see something like the following: Finally, migrate and seed the database:

bin/rake db:migrate
bin/rake db:seed

Once again you can fire up Rails:

bin/rails s -p 3001

But this time hit http://localhost:3001/admin. You should see something like this: And you should take a moment to feel pretty great, because that was a lot. You can log into ActiveAdmin with the username admin@example.com and the password password. Security! You can change it really easily in the rad ActiveAdmin environment, though, so fear not.

Step 3: Adding create-react-app as the client

Yay! Super-speedy Webpack asset handling!

(Shout out to Full Stack React for this bit.) So. We need a front end. If you don’t have create-react-app yet, install it globally with:

npm install create-react-app -g

And then, in the root of your app, generate it into the /client folder:

create-react-app client

It’ll take a bit. You probably have time for a cup of tea, if you’re feeling thirsty. Once it’s installed, jump in and fire it up:

cd client
npm start

Right! You have a simple create-react-app running. That is good. But we can do more than that. As I mentioned earlier, one of the best bits about working with create-react-app and an API is that you can automatically proxy the API calls via the right port, without needing to swap anything between development and production. To do this, jump into your client/package.json and add a proxy property, like so:

"proxy": "http://localhost:3001"

Your client/package.json file will look like this:
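The original embedded snippets aren't shown in this copy, so here's a rough sketch of the client/package.json with the proxy in place. The dependency versions are placeholders for whatever create-react-app generated for you.

```json
{
  "name": "client",
  "private": true,
  "proxy": "http://localhost:3001",
  "dependencies": {
    "react": "...",
    "react-dom": "...",
    "react-scripts": "..."
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test --env=jsdom",
    "eject": "react-scripts eject"
  }
}
```

And, stepping back to the ActiveAdmin changes from Step 2, here is roughly what those files might look like. Treat this as a sketch based on the prose above and ActiveAdmin's documented requirements, not a copy of the author's gists; depending on your ActiveAdmin version you may also need gem 'activeadmin' (and devise) in your Gemfile before the installer will run.

```ruby
# app/controllers/application_controller.rb
# ActiveAdmin needs the full controller stack, so this inherits from Base.
class ApplicationController < ActionController::Base
end

# app/controllers/api_controller.rb
# Your own controllers can inherit from this slimmed-down API variant instead.
class ApiController < ActionController::API
end

# config/application.rb (inside the Application class)
# API mode strips out cookies and the flash; one common way to put them back:
config.middleware.use Rack::MethodOverride
config.middleware.use ActionDispatch::Flash
config.middleware.use ActionDispatch::Cookies
config.middleware.use ActionDispatch::Session::CookieStore
```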
fetch (along with a bunch of fancy new language features and polyfills you should 100% check out) is included with create-react-app, so our front end is ready to make calls to the API. But right now that would be pretty pointless—we’ll need some data to actually fetch. So, let’s get this cocktail party started. We’ll need two relations, the Drinks, and the Ingredients that those drinks are made with. You’ll also potentially need a blender, but honestly, a margarita with a couple of ice cubes is still so delicious. Promise.

Now, normally I’d say avoid scaffolding in Rails, because you end up with a ton of boilerplate code that you have to delete, but for the purposes of the exercise, let’s use it. And then delete it. Do what I say, not what I do and all that. Before that though, I should mention something. One downside to ActiveAdmin using inherited_resources, which reduces the boilerplate for Rails controllers, is that Rails then uses it when you scaffold anything in your app. That breaks stuff: "Could not find" is never a good start to the last line of output. Fortunately, this is a solvable problem. You just need to tell Rails to use the regular scaffolding process. You know, from when we were young and scrappy and people didn’t say JavaScript fatigue like having options is a bad thing. The Good Old Days. Just remind Rails which scaffold_controller to use in your config/application.rb and we can be on our way:

config.app_generators.scaffold_controller = :scaffold_controller

Your application.rb should look something like this, and everything should be right with the world again: Crisis averted!

So, scaffolding. First, the Drink model:

bin/rails g scaffold Drink title:string description:string steps:string source:string

Then, the Ingredient model:

bin/rails g scaffold Ingredient drink:references description:string

Notice that the Ingredient references the Drink. This tells the Ingredient model to belong_to the Drink, which is part of the whole has_many relational database association thing. See, my Relational Databases 101 comp-sci class was totally worth it. Unfortunately this won’t tell your Drink model to has_many of the Ingredient model, so you’ll also need to add that to app/models/drink.rb all by yourself. Then we can migrate and tell ActiveAdmin about our new friends:

bin/rake db:migrate
bin/rails generate active_admin:resource Drink
bin/rails generate active_admin:resource Ingredient

Go team. Now, Rails is a security conscious beast, so you’ll need to add some stuff to the two files ActiveAdmin will have generated, app/admin/drink.rb and app/admin/ingredient.rb. Specifically, you’ll need to permit ActiveAdmin to change your model, which when you think about it is pretty important. First up, app/admin/drink.rb; then app/admin/ingredient.rb. Without permit_params, you can never edit your delicious drink recipes. Not on my watch.

In our routes, we’ll need to hook up the drinks resource. I like to scope my API calls to /api, so let’s do that. Start the server:

bin/rails s -p 3001

And you should be able to visit http://localhost:3001/api/drinks to see… *drumroll* { } Nothing. We should probably add some drinks. To save some time, here’s a db/seeds.rb that I prepared earlier, featuring a delicious negroni and a delicious margarita: You’ve already migrated, so it’s just a case of seeding the database:

bin/rake db:seed

Now when you refresh you should see: So, we’re pretty much good to go on the database front. Let’s just massage our scaffolded controllers a little. First, let’s cut back the DrinksController.
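The gists that went with this section aren't included above, so here's an approximate sketch of the files being described. The method bodies and permitted attributes are assumptions pieced together from the surrounding prose (and the scaffolded column names), not the author's original code.

```ruby
# app/models/drink.rb
class Drink < ApplicationRecord
  has_many :ingredients
end

# app/admin/drink.rb — let ActiveAdmin edit the scaffolded attributes
ActiveAdmin.register Drink do
  permit_params :title, :description, :steps, :source
end

# app/admin/ingredient.rb
ActiveAdmin.register Ingredient do
  permit_params :drink_id, :description
end

# config/routes.rb — API routes scoped under /api, ActiveAdmin routes left as generated
Rails.application.routes.draw do
  devise_for :admin_users, ActiveAdmin::Devise.config
  ActiveAdmin.routes(self)

  scope '/api' do
    resources :drinks
    resources :ingredients
  end
end

# app/controllers/drinks_controller.rb — slimmed right down, inheriting from ApiController
class DrinksController < ApiController
  def index
    render json: Drink.all, only: [:id, :title]
  end

  def show
    render json: Drink.find(params[:id]),
           include: { ingredients: { only: [:id, :description] } }
  end
end

# app/controllers/ingredients_controller.rb — barely anything left
class IngredientsController < ApiController
end
```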
We can make sure def index only returns the id and title of each drink, and we can make sure def show includes the id and description of each ingredient of the drink. Given how little data is being sent back, you could just grab everything from index, but for the purposes of showing how this could work in the Real World, let’s do it this way. You’ll want to make sure your controllers are inheriting from ApiController, too. And let’s just get rid of 99% of ingredients_controller.rb, because it’s not going to be doing a lot: And now we have some fancy data to feed the client. Good for us!

This is a big chunk of the setup, and you’re doing great. Let’s celebrate with some Adventure Time, and take this opportunity to think about how great that cocktail is going to be. Talk about earning it:

Next up, let’s create a Procfile in the root of the app for running the whole Rails / create-react-app setup locally, because Heroku uses procfiles to manage your bundle. If you haven’t used them before, you can read about them here. We’ll call it Procfile.dev, because while we do need to run a Node server locally we’ll be deploying a pre-built bundle to Heroku, and we won’t need to run a Node server as well. Having a Node server and Rails server locally massively speeds up development time, and it is pretty great, but it’s overkill for production. Your Procfile.dev should look like this:

Procfiles are managed by foreman, which if you don’t have installed you can add with:

gem install foreman

And you can fire up the new setup with:

foreman start -f Procfile.dev

Who wants to type that every single time though? Why not make some rake tasks to manage running development and production locally for you? Just add start.rake to your /lib/tasks folder: You’ll also need to add Foreman to your Gemfile (I’d recommend putting it in your :development group). Then install it:

bundle install

And from there all you need to do to fire up your development environment is run the super simple:

bin/rake start

Glorious! Ten keystrokes to fire up two servers? What magic is this?! And to test production (which will take a while to build without a great deal of output):

bin/rake start:production

Well, that step was a lot. So what’s happening here? foreman will start the front end, /client, on port 3000 and the API on port 3001. It’ll then open the client, on port 3000, in your browser. You can access ActiveAdmin via the API at /admin, just like you’ve been doing all along.

Now we can sort out the React app. The simplest thing is to just check it works: In your console, you should see the API call logged: Which we can 100% use to grab the actual details of each fine beverage from drinks#show in Rails. Sure, we could’ve just sent everything from the server because it’s only two drinks, but I figure this is closer to how you’d really build something, so let’s go with it.

Now, rather than go through the full front end application, you can either grab the client folder from the repo: Or you can install the following dependencies:

npm install semantic-ui-react --save
npm install semantic-ui-css --save

And add them to your /client app. First, add the CSS to client/src/index.js: And then all the fancy bells and whistles to your client/src/app.js: Either way you choose to do it, you’re golden! You should have a fancy front end that uses Semantic UI and looks something like this:

Step 4: Get everything ready for production

With Rails serving the Webpack bundle

So, how do we get our Rails app serving the Webpack bundle in production?
That’s where the magic of NPM’s package.json’s postinstall comes in. We can get Heroku to build the app on the server, and copy the files into the /public directory to be served by Rails. We end up running a single Rails server managing our front end and back. It’s win-win!

There are a couple of steps to make that happen. First up, let’s make a package.json file in the root of the app, which tells Heroku to compile the create-react-app. The postinstall command will get run after the node build is (you guessed it) installed. First up it’ll build it, then it’ll move the files into /public. How easy is that? As always, you are doing great. Also, honestly, my hands are getting sore. On the plus side, this step is super short! Go team!

Step 5: Deploy it to Heroku

And celebrate, because you’ve earned it

We are super close! Soon, everything the light touches will be yours, including a fresh, tasty beverage. So let’s create a new Heroku app and get this thing over the finish line:

heroku apps:create

Now, if you push to Heroku, this looks like a dual Rails / Node app. But your Node code needs to be executed first so it can be served by Rails. This is where Heroku’s buildpacks come in — they transform your deployed code to run on Heroku. We can tell Heroku, via the terminal, to use two buildpacks (or build processes) in a specific order. First nodejs, to manage the front end build, and then ruby, to run Rails: Let’s make a Procfile, in the root, for production, which will tell Heroku to run the Rails app:

Before we deploy it’s worth noting that create-react-app defines react-scripts, which manages the build of the client (along with a bunch of other stuff), as a devDependency in your package.json. Heroku sets an ENV var, NPM_CONFIG_PRODUCTION, to true, which means your build will disregard any devDependencies and it will fail. Not ideal. But not unfixable! To overcome this you have two options. You can either set NPM_CONFIG_PRODUCTION to false:

heroku config:set NPM_CONFIG_PRODUCTION=false

Or you’ll need to move the react-scripts in your /client/package.json out of devDependencies into dependencies: Both options will work, but given that you’ll likely have more than one devDependency, it’s easier to tell Heroku to recognise them. Normally I’d say you should always tune your production environment for, well, production, but in this case we’re building the assets with Node, not serving them.

With that sorted, we can deploy this beautiful beverage-based beast:

git add .
git commit -vam "Initial commit"
git push heroku master

Heroku will, following the order of the buildpacks, build the client, and then fire up Rails. You’ll need to migrate and seed your database on Heroku, or ActiveAdmin will not be thrilled (and you won’t be able to log in). That’s easy enough though:

heroku run rake db:migrate
heroku run rake db:seed

And there you have it: When you visit your app you’ll see the create-react-app on the client side, displaying some delicious drink recipes. You’ll also be able to hit /admin and access your database using that truly terrible username and password ActiveAdmin chose for you. I’d recommend changing those on production ASAP. I did, so you can’t just change all my demo recipes to be really rude. I’m gutted you’d even try to, honestly, after everything we’ve been through.

It’s worth noting, if you’re planning to use react-router from here, there are a few more steps you’ll need to take. Logan asked about it, so thanks Logan!
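Before wrapping up, here's a rough sketch of the postinstall approach from Step 4, since the embedded code didn't survive in this extract. The package name and exact script wiring are assumptions; the key idea is building the client and copying client/build into public/ so Rails can serve it:

{
  "name": "cocktails-api",
  "private": true,
  "scripts": {
    "build": "cd client && npm install && npm run build",
    "deploy": "cp -a client/build/. public/",
    "postinstall": "npm run build && npm run deploy"
  }
}

The production Procfile then only needs a single web process along the lines of web: bundle exec rails s, and the buildpack order can be set from the terminal with something like heroku buildpacks:add heroku/nodejs --index 1 followed by heroku buildpacks:add heroku/ruby --index 2 (again, treat the exact details as a sketch rather than gospel).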
This isn’t exactly the most thrilling demo (especially given it suggests putting prosecco in a negroni, which is a capital crime in some parts of the world) but hopefully it gets you up and running. All the ingredients to make a delicious Rails API / ActiveAdmin / create-react-app beverage are here, and the sky’s the limit. You can see a ready-to-go repo here, too.

Thanks for taking the time to have a look, and I genuinely hope you celebrated with a drink, alcoholic or otherwise. 🍹

Shout out to Roman Rott and Full Stack React for the inspiration to put this together. And a massive thank you to Glen and Xander for taking the time to make suggestions and proofread this small essay for me. 💖

If you have any questions leave a comment or say hi via Twitter. Alternatively, enjoy a GIF of a dog wearing a hat.
https://medium.com/superhighfives/a-top-shelf-web-stack-rails-5-api-activeadmin-create-react-app-de5481b7ec0b
CC-MAIN-2017-34
en
refinedweb
Xen: Creation of a network bridge
I cleaned up the Xen article as much as I could. I can't get past the "Creation of a network bridge" section as it only describes how to do that with netctl. Maybe someone more experienced with Xen can expand on how to do that with other network managers? UPDATE: See Network bridge, this request should be implemented there. axper (talk) 07:32, 20 July
- Just a note: some of these pages are in Category:DeveloperWiki, some in Category:Arch development and some in both :\ -- Lahwaacz (talk) 08:42, 10 July 2014 (UTC)
- is of course preferable, however if there's really no alternative and the article in the Archives is still relevant, the fact that it's no longer editable shouldn't prevent from linking there. -- Kynikos (talk) 01:43, 8

Seriously outdated:
- AIF_Configuration_File - outdated, but see Talk:AIF_Configuration_File#Preserve_AIF_pages_for_Development (it says "pages" in title, were there more?)
- Installing_Arch_Linux_with_EVMS - last stable version of EVMS is from 2006
- Hard_Disk_Installation - written in 2008, no major update ever since

Duplicated:
- Install_from_SSH and Remote_Arch_Linux_Install describe basically the same thing
- Installing_Arch_Linux_in_Virtual_Server - merge into Virtual_Private_Server?
- merge Remastering_the_Install_ISO and Building_a_Live_CD?
-- Lahwaacz (talk) 22:08, 6 March 2014 (UTC) -- edited on 17:18, 11 July

Add missing interlanguage links in protected pages
Some protected pages lack respective interlanguage links even though translated pages exist (for example, Installation guide). --Kusakata (talk) 16:49, 1 July 2014 (UTC)
- I've fixed Installation guide, I can't check all the other pages now, if you find some other missing links somewhere else, please report it in that article's talk page. A little tip to find all the localized versions of the same title is to preview one article adding Template:i18n (no need to save of course). -- Kynikos (talk) 07:31, 2 July 2014 (UTC)
- Thank you. I found: Category:Help (lacks ja, uk, zh-TW links), The Arch Way (su), Arch Linux (su), Installation guide (bg (its name is Official Installation Guide)), lt (Quick Arch Linux Install), nl (Official Installation Guide), th (Quick Arch Linux Install)), Arch packaging standards (ja, pt), Archboot (ar), and Help:i18n (ja).

xorg-server 1.16
The release brings in rootless X, but also breaks redirecting output, it's not obvious how to run more than one X server at a time, I don't like it and I'm going on strike, so I won't be fixing any articles like Start X at Boot. -- Karol (talk) 15:01, 29 July 2014 (UTC)
- Well, someone should step up as we're seeing content duplication, e.g. [6].
-- Alad (talk) 07:20, 11 August 2014 (UTC)

List of suggested solutions:
- Delete, don't look back (current method, not a solution)
- Separate namespace (discarded)
- Redirect to a page like "ArchWiki:Deleted"
- Publicize Special:Undelete

Bot requests
Here, list requests for repetitive, systemic modifications to a series of existing articles to be performed by a wiki bot.
https://wiki.archlinux.org/index.php?title=ArchWiki:Requests&oldid=330061
CC-MAIN-2017-34
en
refinedweb
This is my first time posting to the list. If I should submit this patch in some other way, please let me know.

I think this is a bug in Ant. What happens is that Ant accesses the system properties list not by using the standard well-defined interface but by accessing its underlying hash table. The Java Language spec is ambiguous about what happens in this case: it says that every property list can contain another property list as its "defaults", but what it does not say is whether accessing the underlying hash table gives access to the defaults. It seems to me that if you have a Properties object then you should be using the Properties interface. The spec doesn't guarantee anything about the internal structure of the system properties.

Andrew.

*** Project.java~	Thu Oct 11 14:58:28 2001
--- Project.java	Tue Feb 11 21:00:57 2003
*************** public class Project {
*** 393,401 ****
      public void setSystemProperties() {
          Properties systemP = System.getProperties();
!         Enumeration e = systemP.keys();
          while (e.hasMoreElements()) {
!             Object name = e.nextElement();
!             String value = systemP.get(name).toString();
!             this.setProperty(name.toString(), value);
          }
      }
--- 393,401 ----
      public void setSystemProperties() {
          Properties systemP = System.getProperties();
!         Enumeration e = systemP.propertyNames();
          while (e.hasMoreElements()) {
!             String name = e.nextElement().toString();
!             String value = systemP.getProperty(name).toString();
!             this.setProperty(name, value);
          }
      }
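A small standalone example (not part of the patch, just an illustration of the behaviour described above) shows the difference: iterating the underlying Hashtable with keys() misses entries that live in the defaults, while propertyNames() walks the whole chain.

import java.util.Enumeration;
import java.util.Properties;

public class DefaultsDemo {
    public static void main(String[] args) {
        Properties defaults = new Properties();
        defaults.setProperty("color", "blue");

        // "color" lives only in the defaults chain, "size" is set directly
        Properties props = new Properties(defaults);
        props.setProperty("size", "large");

        // Hashtable view: prints only "size"
        for (Enumeration<?> e = props.keys(); e.hasMoreElements(); ) {
            System.out.println("keys(): " + e.nextElement());
        }

        // Properties view: prints "size" and "color"
        for (Enumeration<?> e = props.propertyNames(); e.hasMoreElements(); ) {
            System.out.println("propertyNames(): " + e.nextElement());
        }
    }
}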
http://mail-archives.apache.org/mod_mbox/ant-dev/200302.mbox/%3C15946.26686.750806.490293@cuddles.cambridge.redhat.com%3E
CC-MAIN-2017-34
en
refinedweb
Opened 7 years ago. Closed 3 years ago.

#14226 closed Bug (fixed)

Bug in dumpdata dependency calculation involving ManyToManyFields

Description

The manage.py dumpdata command incorrectly interprets ManyToMany relationships as dependencies of the model that declares them (rather than the other way around). In the example below are 5 models - User, Tag and Posting, where both User and Posting have M2M relationships to Tag via UserTag and PostingTag, respectively. This should be serializable. Here are the actual dependencies:

User: None
Tag: None
Posting: User
PostingTag: Posting, Tag
UserTag: User, Tag

However, dumpdata fails with this error:

Error: Can't resolve dependencies for main.Posting, main.PostingTag, main.Tag, main.User, main.UserTag in serialized app list.

from django.db.models import Model, CharField, ForeignKey, ManyToManyField, TextField, DateTimeField

class User(Model):
    username = CharField(max_length=20)
    password = CharField(max_length=20)
    topics = ManyToManyField("Tag", through="UserTag")

    def natural_key(self):
        return (self.username,)

class Posting(Model):
    user = ForeignKey(User)
    text = TextField()
    time = DateTimeField()

    def natural_key(self):
        return (self.user.username, self.time)
    natural_key.dependencies = ['main.User']

class Tag(Model):
    name = CharField(max_length=20)
    postings = ManyToManyField(Posting, through="PostingTag")

    def natural_key(self):
        return (self.name,)

class PostingTag(Model):
    tag = ForeignKey(Tag)
    posting = ForeignKey(Posting)

    def natural_key(self):
        return (self.tag.natural_key(), self.posting.natural_key())

class UserTag(Model):
    user = ForeignKey(User)
    tag = ForeignKey(Tag)

    def natural_key(self):
        return (self.tag.natural_key(), self.user.natural_key())

The reason this occurs is invalid logic in django/core/management/commands/dumpdata.py in lines 152-155. Here is the relevant code & context:

145         # Now add a dependency for any FK or M2M relation with
146         # a model that defines a natural key
147         for field in model._meta.fields:
148             if hasattr(field.rel, 'to'):
149                 rel_model = field.rel.to
150                 if hasattr(rel_model, 'natural_key'):
151                     deps.append(rel_model)
152         for field in model._meta.many_to_many:
153             rel_model = field.rel.to
154             if hasattr(rel_model, 'natural_key'):
155                 deps.append(rel_model)
156         model_dependencies.append((model, deps))

Lines 152-155 treat M2M relations like FK relations. This is incorrect. A Model named by an FK is a dependency, however, the model named by an M2M is not. The fix requires adding the M2M *table* to the model_list, and processing its dependencies accordingly.

I've attached a simple test project that demonstrates the problem.

Attachments (3)
Change History (19)

Changed 7 years ago by
Changed 7 years ago by
Patch processes M2M tables correctly

comment:1 Changed 7 years ago by

This needs tests; fixtures_regress already has tests for the sort_dependencies utility method, which is what is being modified here. Also - I'm not completely convinced the patch is correct. On first inspection, I'm fairly certain the "add the through model to the dependency chain" will result in objects being added to the fixture that aren't required for the simple case. The simple case (normal m2m) can be satisfied by simply removing m2m checks from the dependency chain. In the complex case (manually specified m2m model), checks aren't required either, because the manually specified m2m model will be processed as a standalone model. Tests would help to validate this :-)

comment:2 Changed 7 years ago by

Not sure how the CC and keywords got modified.
Sorry James and Andrew. comment:3 Changed 7 years ago by comment:4 Changed 7 years ago by the patch to add tests will only be usefully applied after the genocide of doc tests has been merged into master. comment:5 Changed 7 years ago by also, this test does not account for russelm's concerns. comment:6 Changed 7 years ago by The proposed code patch appeared to break fixtures_regress:test_dependency_sorting_dangling, but on closer inspection, should not be considered to be the case. That test explicitly expected the dangling, un-related model to have a position in the resultant dependencies, but such an expectation is invalid. Changed 7 years ago by Patch to add/fix tests under fixtures_regress comment:7 Changed 7 years ago by active development of this at comment:8 Changed 7 years ago by The test cases look good, but running the full test suite reveals some additional breakages -- most notably in the fixtures tests. Some of these breakages are output ordering problems, but some appear to be more than that. comment:9 Changed 6 years ago by comment:10 Changed 6 years ago by comment:11 Changed 5 years ago by Change UI/UX from NULL to False. comment:12 Changed 4 years ago by comment:13 Changed 4 years ago by I think the correct way to fix this is to remove models referenced by complex (with explicit intermediate models i.e. through=...) M2M relations from the dependency chain, but keep simple (with automatic intermediate models) in the dependency chain. This is also the check done by django.core.serializers.python.Serializer.handle_m2m_field. In serialized data, simple M2M relations are shown inline with the model that defines them, which makes dependencies of the referenced models. For complex M2M relations, however, the intermediate models should be serialized along with the other models, and should be included as models in the serialized data. The model defining the M2M relation thus has no dependency to the other model, but the intermediate model will have a dependency to both of the M2M models. Development branch at Pull request at All tests pass under sqlite and postgres. I created quite a few tests to first learn about the issue and then to be sure that everything works. The test_dump_and_load_m2m_complex_* tests are most likely redundant with the other tests in the PR, and can be removed as seemed fit. FWIW I also tried the patch posted by aneil, which broke a couple of tests, and removing the M2M checks altogether, which didn't break any tests but resulted in a regression that's caught in the tests in the PR (namely test_dependency_sorting_m2m_simple) test project demonstrating dumpdata problem
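To make the distinction in that last comment concrete, here is a rough sketch of how the M2M block quoted in the description (lines 152-155) could be adjusted. The auto_created check mirrors the one the comment attributes to django.core.serializers.python.Serializer.handle_m2m_field; the exact attribute paths are assumptions for the Django versions of that era, not the final committed patch:

# Sketch only: a possible replacement for the M2M block in dumpdata.py
for field in model._meta.many_to_many:
    if field.rel.through._meta.auto_created:
        # Simple M2M (auto-created join table): the related objects are
        # serialized inline, so the related model is a real dependency.
        rel_model = field.rel.to
        if hasattr(rel_model, 'natural_key'):
            deps.append(rel_model)
    # Complex M2M (through=...): add no dependency here; the explicit
    # intermediate model is serialized as a standalone model, and its own
    # ForeignKey fields already pull in both sides of the relation.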
https://code.djangoproject.com/ticket/14226
CC-MAIN-2017-34
en
refinedweb
This morning we became aware of a Twitter campaign run from the fixoutlook.org website. This campaign is intended to provide Microsoft with feedback about our decision to continue to use Microsoft Word for composing and displaying e-mail in the upcoming release of Microsoft Outlook 2010. The Email Standards Project, which developed the website that promotes the current Twitter campaign, is backed by the maker of “email marketing campaign” software.

First, while we don’t yet have a broadly-available beta version of Microsoft Office 2010, we can confirm that Outlook 2010 does use Word 2010 for composing and displaying e-mail, just as it did in Office 2007. Word enables Outlook customers to write professional-looking and visually stunning e-mail messages. You can read more about this in our whitepaper, outlining the benefits and the reason behind using Word as Outlook’s e-mail editor. Here are some images that show some of the rich e-mail that our customers can send, without having to be a professional HTML web designer: SmartArt; Drawing and Charting tools; Table and Formatting tools; Mini Toolbar for formatting.

Word has always done a great job of displaying the HTML which is commonly found in e-mails around the world. We have always made information available about what HTML we support in Outlook; for example, you can find our latest information for our Office 2007 products here. For e-mail viewing, Word also provides security benefits that are not available in a browser: Word cannot run web script or other active content that may threaten the security and safety of our customers.

We are focused on creating a great e-mail experience for the end user, and we support any standard that makes this better. There is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability. The “Email Standards Project” does not represent a sanctioned standard or an industry consensus in this area. Should such a consensus arise, we will of course work with other e-mail vendors to provide rich support in our products.

As usual, we appreciate the feedback from our customers, via Twitter or on our Outlook team blog.

-- William Kennedy
Corporate Vice President, Office Communications and Forms Team
Microsoft Corporation

I hope MS does not forget that not all users in the world have Outlook, and sorry, NO, this new OOXML file format will not fix the problems. Receiving e-mail written in Word 2003 was sometimes a hopeless thing when you use another e-mail program.

This post makes it look like Word is the only way to author e-mail posts in Outlook 2010. Of course, those of us who limit our authoring of e-mails to plain-text and only view arriving HTML-formatted e-mail don't have to worry about whether Word is used for editing rich-formatted e-mail. Right? But what happens when someone sends me one of these rich emails if I have an email client other than Outlook/Word? Will it be compatible?

Our issue - and by "our" I do not mean "creators of email marketing software", I mean web developers - is that you are using Word's RENDERING engine to DISPLAY the email in the client. It matters not to us how you create the emails, just how you display them and others created using standards. I assume that HTML created in Word will display correctly in IE? Then why not use IE rendering engine for the DISPLAY of all HTML formatted emails???

Hey guys, Thanks for commenting on this. I would like to ask one question, though. We're not looking for anything special, unique or far fetched. Just let us design our HTML emails the same way we design our HTML websites. You let us do that and you've done your job, so we can do ours. Thanks again, I really hope you guys will consider improving Outlook's rendering capabilities.

Just because Word formats the email doesn't mean the email is sent as a Word document.
The lack of CSS support may mean poorer designs (tables and the like) but it in no way means that other email viewers will have any more problems with it than web browsers would. If it's the same as 2007, where I can turn off using Word for the editor, that makes me happy. Good job ignoring this MARKETING CAMPAIGN by Campaign Monitor, as that's all it is. 25 year experience! In that case rock on with table based layouts I read you response with joy and sadness. Joy that you have recognised a very powerful message sent by the users of Twitter today, but sadness that you do not seem to understand what the problems here are. I disagree with the comment about there being no widely-recognised consensus about what is appropriate HTML for displaying HTML emails. HTML is the standard - why should there be a different standard for emails??! Fix it! Fix it! Fix it! Fix it! Fix it! That is all at this time. The intertubes have spoken. I have lead a webdesign agency that, among other things, builds e-mail marketing campaigns. Outlook was always an issue because of Word's poor rendering of standard HTML. It is just sad that Word does not have the ability to properly render CSS, which is a de-facto standard to position elements in HTML today. It's not all about e-mail marketing either. These days I am at MSFT and even our PA can't get the formatting of e-mails to wish co-workers a happy birthday come out correctly. There is no reason to use Word as rendering engine. To use it for mail creation is fine, but please use IE or a standards compliant rendering engine for display. Thanks. So keep the Word authoring tools, but fix the HTML that it outputs to you can use a browser to render it, just like everyone else. And the email standards project might not be anything official, but it is an attempt to establish some consensus, while you just do what you want. "The 'Email Standards Project' does not represent a sanctioned standard or an industry consensus in this area. Should such a consensus arise, we will of course work with other e-mail vendors to provide rich support in our products" I think this campaign just showed you that such a consensus has arisen. Or is the support of 18,000+ Twitter users not enough? Peter: E-mail written with the Word editor is just e-mail; has nothing to do with the new Office OpenXML document format. It's just an HTML, RTF or Plain Text e-mail and any e-mail client in the world that supports those formats will be able to read it fine. Dennis: Yes, as of Outlook 2007 Word is the *ONLY* editor in Outlook so you are using Word to create plain text or Rich Text (RTF) e-mails as well. -Ben- Please make Outlook 2010 render email according to standards. Anyone who has ever created an email blast will tell you that switching to Word as the rendering engine in Outlook 2007 was a bad decision. If the next version of Office does not do a better job rendering HTML email it will be a step in the wrong direction. Actually, why bother generating HTML at all, if your target recipient is also using Outlook? Why not just send a multipart/alternative message containing text/plain and {some Word-specific MIME type} parts? Honestly, if you want to operate in your own little Office ecosystem and ignore what the rest of the world is doing, that's fine, but why not use the mechanisms which have existed since before Outlook was created to do so? "[click here to] Read this issue online if you can’t see the images or are using Outlook 2007." - Quoted from the official Microsoft Xbox newsletter. 
Even your companies own marketing teams cant send out appealing newsletters using the tools you are providing. At least give us a meta tag that triggers Trident >.< Sorry Microsoft you just don't get it. Your rendering of email is so far off what every other client can do - including Hotmail/Windows Live Mail - if they can do it, inside a web browser with all their other bits and bobs, why can't you? I'd suggest you go have a chat with your colleagues. Great - you can create your email within Word - that's great functionality. But what about viewing emails that people who *heaven forbid* haven't created their email in Word? How do those email display on other platforms - I'm assuming you'd hope they'd display correctly - or are you adding all sorts of proprietary tags, as Word has a habit of doing, so that they only look ok if you load the email up in Outlook somewhere else... So what if the campaign is run by the creators of Email Marketing Software... it needs someone like that to get behind it to give it the exposure needed. There are now over 19,000 web designers and email marketeers here using all sorts of different packages who are agreeing that you're going down the wrong path... listen to them! I should mention that as far as I know Gmail strips CSS. I can't remember if it's only external CSS files, or maybe it only supports a subset of CSS, but the point stands. This blog post is a distraction from the original intent of the 'movement' on twitter. They are concerned with how emails will _DISPLAY_ in Outlook 2010, but this blog post spends most of it's time talking about the virtues of _COMPOSING_ email using Word. You (Microsoft) have clearly missed the point of the campaign - the concern is only with how received emails are rendered. By failing to support CSS formatting, and even major portions of HTML, you completely wreck the design of all but the most outdated emails. If IE can display it correctly on a website, Outlooks should be able to display it in an email. Is that so difficult to understand? Please please please let Outlook 2010 display CSS properly. CSS is the standard layout technology for the web, and it makes sense to use it in email. @ Jim Deville Have you ever looked at the HTML it creates? If you do you'll see why that's a problem too. All we want are web standards! We want to code emails the way we code websites! And as for security reasons being so crucial, then install a HTML rendering engine to display emails and force scripts to be turned off then. Slandering the ESP isn't becoming either, these guys are working with other email clients for the greater good. Unless I'm misunderstanding, The "Email Standards Project" does represent an industry consensus. That's what the whole point of it is. Surely the frustrating experiences of all your end users who open CSS-based e-mails in Outlook count for something. Alex is right -- use Word to formulate e-mails to send. Disable all the scripting necessary to make Outlook secure, but render html in a predictable, standard fashion so that we marketers aren't required to hack specifically for your e-mail client. William, can you say that every e-mail you've received in Outlook '07 looks as intended? I receive e-mails every day from national advertisers that don't render correctly in Outlook. Travis Bell said exactly what I was thinking. There may be no "set standard" for how to use HTML in an email, but in reality with the HTML standard there is one: HTML. 
Noone said we want ActiveX support in our emails, we want CSS and HTML support in those emails. HTML and CSS, in and of themselves, if you block ActiveX, and Javascript, etc, is no more harmful than MS Word rendering it within it's tight rules system. IMHO, I think this route is being taken because it already works and means less work having to support HTML/CSS standards.. I have a company that send emails on behalf of clients, and to be honest, Outlook 2007 is a real pain... it has taken development a step back in many ways as it doesn't support common standards. The primary culprit is the limited CSS support - there is no support for CSS floats or for CSS positioning. With the exception of color, CSS background properties are not supported; this includes background-attachment, background-position, background-repeat and background-image. I hope that MS takes the campaign running on seriously! Travis, "Standards compliant" implies a standard. They are saying their isn't one. This is a step backwards from a standards point of view. Build the emails in Word that's fine but give outlook a different rendering engine that mirrors what you just gave us with IE8. Why would you take this step back? While there is no de facto standard for HTML email, there is a de jure standard, and that's what the whole fixoutlook.org commentary is about. HTML 4 and CSS 2.1, without needing to resort to <font> tags, etc... are what email designers want. In my experience as someone who provides email marketing to clients, building HTML email can be an incredibly painful experience. These messages are generally hand-coded, not composed in Outlook, which means we get none of the benefits of Outlook's slick authoring abilities. Unfortunately, hand-coded HTML email is very hard to make look good in Outlook 2007. Adding consistent support for a few key CSS properties — margin, padding and float — would make my daily work much easier. Building HTML email for Outlook 07 is like making a site for Internet Explorer 6 -- it is time consuming, confusing and totally unsatisfying. It's a guessing game: maybe I should try line-height, maybe padding will work on this element if I nest it in a table... In the end, the reading experience is invariably poorer for Outlook users than for users of other email clients. I can't even begin to imagine how complex an app like Outlook is under the hood but I hope you can look at ways to improve the CSS capabilities of Outlook 2010. I've got no axe to grind, not that I think that the email marketing company mentioned does. It was just as annoying when Notes was the client that caused the most problems when sending out completely legal, solicited emails. Lotus have got their act together and the latest version is much better. It's only Microsoft that seems to be going backward. Security and ease-of-use are worthwhile aims, but I don't understand how this helps? Oh good, marketing spiel rather than words from actual people. Whether your reasoning is good or not, the people who are opposed to using Word for rendering are not the kind of people who are going to be won over by heavily authored statements like this. You would probably have been better off not making any statement at all. Here's the reality, whenever someone wants to include rich content in an email (ie charts, graphs, pictures), they do it as attachments. I have NEVER encountered someone using anything beyond the basic rich text formatting options when composing an email, most people don't even use those. 
For the record, I don't work in a company full of purists either, I work with your everyday office computer users. This makes me highly skeptical that there is a market big enough to justify all this functionality that **only other users of outlook can actually appreciate** Please let your developers write your next response. I mean this in all honesty and kindness, but I don't trust or buy into statements that come from management. Travis Bell stated the issue about as succinctly as anyone could. As a Web developer, I'm happy to have him speak on my behalf on this issue. In the end, it's just that simple: designing an HTML email should be exactly the same process as designing any other HTML document, like a Web page. In my opinion, the Email Standards Project <em>has</em> created an industry consensus. If Microsoft can re-navigate software as broadly used an Internet Explorer onto a course that gives respect to Web standards (not perfect, but far better than IE6), why is the team crafting Outlook so determined to move in completely the opposite direction? Thanks so much for the comments WIlliam. <blockquote>"There is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability."</blockquote> I'm not sure you guys are seeing the point being made by the 18,000 people who have tweeted about this today. The consensus *is* web standards. Even if you don't support everything in the W3C specification, those that you do support should be standards compliant. That means margin and padding should work, table formatting shouldn't break. Even basic box model support would be a huge step forward. If you're interested, there is a complete list of the basic CSS properties Outlook need to support to bring it inline with the rest of the majority of the industry from a standards perspective: The issue here isn't about composing emails in Outlook. I don't care if people use Word to compose emails. What I care about is what people see when they receive and view HTML emails in Outlook 2007+. The fact that Outlook 2003 had pretty exceptional CSS support and you all decided to switch to Word for rendering in Outlook 2007 was a gigantic step backwards. That's what I'm especially mad about. Regarding there being a "widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability" - I encourage you to work with the Email Standards Project. They're the only one's who've stepped forward to try and deal with this issue. Yes, FreshView/CampaignMonitor is behind it, but I truly believe the ESP has all of our best interests at heart. The problem is that every other major email client has settled on a rather broad set of standard HTML support (nothing special or fancy, just standard) and Outlook has not. There is no inherent risk in supporting web standards. For the most part, as long as you disallow scripts and embeds you should be done. The result of this is that users of Outlook will see a much worse rendering of simple HTML. You already have IE and the Trident rendering engine. Why not use that to render HTML within Outlook? You could improve Word's HTML rendering engine to add support for features made in the last decade. That might make people happy. Seems like a waste to maintain two HTML rendering engines within the same company, though. Couldn't the email be composed using Word and then converted to HTML when it is sent, and have all HTML email displayed using IE? 
If there's a worry that IE wouldn't display it properly due to rendering engine differences then surely there's also the worry that every other HTML email client might display Word-generated emails incorrectly as well. Why do you need an agreed subset of HTML? What's wrong with straight out HTML in the first place? I understand the power of Word as the authoring tool, but frankly it sucks for rendering. As a web designer who frequently builds HTML emails for my clients, I'd love to be able to build them the same way I build HTML web pages - using all CSS and current design standards. But because so many of the e-mail recipients are forced to use Outlook, I'm forced to make incredible design sacrifices in order to build something that will look somewhat-acceptable in Outlook, and all because it uses Word's rendering engine. Whatever Office users are using to build their own HTML emails should have absolutely no effect on what Outlook uses to render them. Please make our lives easier, and help ensure that everyone all over the world has a chance to view content in the same way, and consider bringing Outlook up to speed with current web standards. I agree with Travis. As a developer, I don't as much what Outlook users will use to create their own e-mails to other people. What I do care about is that e-mails that my company creates will be displayed correctly in Outlook using modern HTML coding techniques. As Alex suggested, go ahead and use Word to create the e-mails, but use IE to display them. That way, we can develop better code-quality e-mails for use in our backend systems and know that they will render correctly. Mr. Kennedy misses the point. The issue is the rendering engine used for displaying HTML emails. The Word 2007 rendering engine used in Outlook 2007 is far interior to Internet Explorer engine used in Outlook 2003. As someone who codes HTML emails for a living, it is ridiculous that we cannot reliably use padding, image floats, and background images in our emails. Outlook 2003 used the Internet Explorer engine for rendering HTML and allowed users to compose email using Word if they wished. Outlook 2010 should go back to this dual approach. BTW, I am not a CSS zealot and don't think tables are evil, etc. I'm guessing it would be to hard to determine if an email was sent from Outlook, using Word specific features and use that rendering engine in that case. Otherwise, default to the IE8 rendering engine? Fine, we get it, you can create rich email in Word. But since they get converted to HTML, why do you have to use Word's rendering engine to display them? Web standards are the way forward, with Outlook 2010 and IE6 Microsoft is single-handedly holding back the web and making web designers' jobs more frustrating. PS. I love it that the examples of rich emails you gave were charts and diagrams. it's almost straight of out of the Apple adverts. This is absurd. No wide consensus? We're not talking HTML, here, we're talking CSS — and CSS rules like "float" have been around since 1.0. The real problem here is that you're unable or unwilling to make your various formatting tools (shown above) spit out proper HTML, so that your email client can _read_ proper HTML/CSS. In addition, you're stating that because no clear industry standard exists for HTML emails, you think the best solution is to use the rendering engine of a proprietary text processor. 
Aside from that being a flawed conclusion, in my opinion, the basis is also terribly wrong: I believe the success of this Twitter campaign makes it quite obvious that a standard does exist. Perhaps you should start a campaign for _preserving_ the Word Engine and see how many retweets you get. Apologies if this is a bit of a flame comment, but come on — it's Outlook 2010. One would think it could render HTML emails better than Netscape Navigator 4. Like I said, this is just absurd. This again underlines Microsoft's intolerance for open standards. HTML and CSS are open standards and other mail clients have no problem supporting them. This leaves users to make their own free choice based on features, user interface and other preferences, rather than treating their customers with contempt by locking them into a product ecosystem on the pretense of greater usability. I'm not sure I can add to what has already been said, except to say that everyone here and on Twitter is right. The reason why the Twitter campaign was so successful is because anyone who has tried to write an HTML email campaign that works with Outlook has been incredibly frustrated. There is an entire industry build around trying to make sure HTML email campaigns work in Outlook. That should tell you something. Blog posts like this do not help. It doesn't even sound like it was written by a person, more like a team of marketing experts. Anyone who has ever tried to make a decent HTML email for Outlook knows this post for what it is. William, Kudos for the quick response. But instead of going on defense, you should actually *listen* to your customers' concerns. Your entire post fundamentally misses the point. It's not about e-mail *creation*, it's about e-mail *display*. It's cool (tho naieve) that you guys think Outlook is used to create every e-mail on the planet, but it isn't. As a .NET developer, I've created many websites that use simple SMTP clients to send HTML formatted messages that were designed using Visual Studio or Expression Web. You guys like having the Word 2010 engine being the one to display HTML: fine. So, update Word 2010 to render markup the same way IE8 does. Seems like a really no-brainer solution to the problem, that doesn't need a "standards body" to create. There already is a standards body for HTML, and that should be good enough for you guys. Robert McLaws Windows-Now.com "For e-mail viewing, Word also provides security benefits that are not available in a browser: Word cannot run web script or other active content that may threaten the security and safety of our customers." So there's no easy way to disable scripting and active content while viewing email with the IE rendering engine? That sounds broken to me. The main point of contention is the RENDERING of HTML emails. Williams response seems focused on Word's AUTHORING capabilities. Why is the Word engine not limited to authoring and IE8 used for rendering? If composing emails in Outlook sent to other Outlook users is top priority, then you guys are on the right track. Unfortunately this isn't the case. Without support for extremely basic thing like background images in CSS it's near impossible to create a rich email that works for all users. I don't know about anyone else, but in my life it's not 100% of my friends using Outlook for email. Exactly what Travis said. A lot of people are unhappy about this. Travis Bell pretty much said what I wanted to say. So I'll thank him for stating my thoughts so clearly. 
I personally think it's good that you focus on a great user-experience when composing e-mails. But the issue as I see it is not if/if not to use Word 2010 as the e-mail editor - The issue here is that you also use its rendering engine when displaying incoming HTML e-mails. I understand that it doesn't make sense to use it for composing but not for displaying. But couldn't you then at least fix some of the many bugs and improve your CSS support? That's all we ask - everybody wins. I suggest you take another look at and see if you can improve any of the issues raised in that blog post. The complain on twitter via fixoutlook.org is not about how emails are created but about how they are rendered. To take a concrete exemple, when building a newsletter, webdevelopers have to play with the rendering behaviour of all the different mail clients, so again, as I'm not the first one commenting this blog post, why not using a well-known standards such as HTML? in other words, why not using IE renderer? If it's just because of some security problems, I still don't understand why standard compliant DOM elements are not supported? Using Word for writing emails seems to be your main focus which is fine. For rendering/displaying emails I feel like it should render like any other web page. You should give the user different options so that they can customize this if they don't want to use one or the other. I understand the bias seeing as this is an MSDN blog, but you seem to assume that the world exists to use Microsoft and nothing more. In a perfect world, we'd all use only MS Word to create and send perfect HTML emails to recipients on only MS Outlook, seeing as its the 'best e-mail authoring experience around'. But we all know this isn't the case. Email marketers everywhere rely on Outlook and other email clients to properly render messages NOT created in MS Word. However, seeing as Outlook (like it or not) represents an overwhelming percentage of consumer inboxes in almost every industry, any obstacle to a message's rendering effectively impacts the revenue of hundreds of companies around the world. It's because of this fact that the Twitter campaign is taking off - not because petty designers are lazy or need something to yell about, but because Outlook simply needs better CSS support as we saw in older versions of Outlook (pre-2007) where IE was the rendering engine. Your explanation in this post, however informative, still seems myopic and all too self-serving. When dollars/pounds are at stake, especially in this economic climate, errors of judgment of this scale need taking to task, which is exactly what the Email Standards Project are trying to do. Regardless of whether or not they're a 'recognized consensus', they still represent a valid reason for seriously reconsidering the dev road map for Outlook 2010. \\ Thanks, Jon As others above have said, if you improve Word's rendering of HTML to be (at least more) standards-compliant, then you solve the problem. Is there a reason why you can't do that and also keep supporting all of the Office-only rich features that are so important. I don't think you should punish Outlook users by forcing them to hand-code HTML emails. We're not asking you to remove any composing capabilities. Just bring the rendering engine up to date so HTML email not built in Outlook displays properly. I'm tired of all the workarounds I have to code just for your email client. 
My company stopped sending HTML emails with Outlook 2007 because the Word engine adds crap code to the package which other some clients email don't understand. Honestly, how hard is it to support the web standards that have been around for years now? Please? "." The whole point of the original statement is that there is no standard for HTML in email. Additionally, implementing the full standard for HTML, XHTML or CSS would open up all kinds of fun new tools for spammers. As a developer for SendLabs, an email marketing tool, I can say that rendering emails consistently across clients is a nightmare. I can also say that Outlook is by no means the worst culprit. GMail and Lotus are where the bulk of our rendering issues lie. In the end, the primary function of email is sending text-based messages. Word is Microsoft's text document editor which makes it the most likely choice. While we developers may be more vocal, in the end I'd say the vast majority of Outlook users would prefer a MS Word based editing experience rather than a Dreamweaver-esque WYSIWYG. Yes, there is no census on a subset in HTML-Email, but of course there are standards in HTML itself. Word-generated HTML isn't really follwing these standards, despite the same possibilities, especially new versions like HTML 5. So why not push those standards with the new version and make sure that ANY email client can have the described stunning experience with emails from Outlook? (Sure, selling products is an answer...) it is a shame that you guys continue to go against the current on these topics which create many issues for developers worldwide. It is a fact that the most problematic email client is outlook, the same applies for your web browser aka: Internet Explorer. HTML in Email is the spawn of the devil and should be banned. Do your own thing Microsoft - as long as you continue to allow us to specify TEXT ONLY emails. While the Email Standards Project is backed by freshview, that doesn't mean the project is not worth listening to - bringing it up detracts from the issue. They have done some great research, and have good ideas on what HTML/CSS should be supported in an email client. As a developer, I am mostly concerned about the rendering quality of email clients. If I need to send automated email, I would like to be able to construct the HTML like it is 2009, and not 1999. I know you have to balance the needs of Outlook users with the needs of developers, but I think you can make both happy by supporting the rendering of modern HTML. We all understand Microsoft's desire to have its own emails created and viewed consistently. That is perfectly reasonable. Can you not understand the rest of the world's desire to stop creating HTML emails using the equivalent of HTML 3.2? Does the Outlook team disregard the existence of DOCTYPEs when they make these statements about consistency? Surely it wouldn't be difficult to implement your Outlook-authored emails using a Microsoft doctype, and include real HTML/CSS support for developers who prefer it. That would be a true commitment to interoperability. You could have your cake AND eat it too! Sadly, the ideas in this blog post reiterate the traditional Microsoft sentiment: they are only concerned with the user experience of non-technical end-users of their products. They are in no way concerned with the thousands of developers and designers who deal with the quirks of their awkward and selfish technical decisions day in day out. 
"There is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability" I can't believe I just read that: HTML4 and CSS2 have been standards for many years now. It has taken IE8 to render them (more or less) correctly in a Microsoft browser... now get your act together and do the same for the standard Microsoft e-mail client. The problem, as Alex stated, is not with CREATING HTML emails in Outlook. It's the way they are rendered when you RECEIVE them. I think Travis put it well. Let us build and let you render a HTML email the same way we do web pages. Not a big ask I'd think? Cheers for the response. This appears to be the same awful argument for improving the user interface of browsers (adding tabs, RSS buttons, etc) instead of improving the rendering of good HTML that we heard back with IE6. It's code word for we'd rather have people be able to send graphs made in Word to other Outlook users than to make it easier for email marketers and their developers to send beautiful emails! Disappointing that web standards is simply not considered as a necessity given all the work to bring Internet Explorer 8 up to speed in passing the Acid2 test. Observe web standards please, Word is far to out of date to be the engine. As a developer who often has to code HTML emails for clients, I'm quite frankly appalled at this - the Word HTML rendering engine just isn't up to the job, as it doesn't support many common and absolute basic HTML attributes. Outlook 2007 is one of the worst clients for displaying HTML emails, and now 2010 will continue this? What a disgrace. Thanks for nothing for making things many times more difficult, frustrating and annoying. HTML is evil - I'd like a plain text only option please To be honest, this is a terrible decision. When people talk HTML standards, they talk about the ability to create a latest version HTML layout (Using all the latest tools - eg. Style Sheets & Divs, not tables) and it the HTML will render in all HTML Viewers (browsers, email, etc) and the output will look exactly the same. Microsoft (and some other much smaller groups), keep suggesting that it is about the technology itself. It is not. The issue is about simplicity. Do it once, do it right, do it universally, and it just works. We are always having to do it twice, just so that we fit microsofts platforms in somewhere. This is how you are losing market leadership on IE and Outlook. Otherwise excellent, strong products.. but they just no longer "fit". Ok, here is my POV, and why I suscribed to fixoutlook.org : If you want to include Word capabilities into Outlook, just make it send mails in Word's native format, not HTML. HTML IS a standard, normalized by W3C. As is, it should be rendered properly by any user agent that claim to support it. W3C specs for HTML *DO NOT* specify that HTML (and CSS) must be sent over HTTP and not by email. Specs specify how HTML/CSS should be renderded. If Outlook does (attempt to) render HTML/CSS, then IT IS a HTML User Agent and then should conform to specifications. Your beating around the bush here. Your not talking to users, your talking to developers and designers. Stop telling us the security and ease of use of your so called product. I don't care about power, or ease of use. Give me my HTML and CSS standard emails. 
I'm pretty freaking sure that if there's a group that writes and tests expectations, such as the support for HTML and CSS in email, and it happens to be named the Email Standards Project, there's a standard. Now, it's not written in a giant non exinsistant tablet declaring the laws of the web, or penned in the constitution, but neither is my right to not be slapped upside the face by a major development company. Just please stop being so proprietary. The amount of heartache you have given me over the years from the combination of 98/2000/XP and Internet Explorer is unbearable. I'm not the first one to say that i've nearly broken into tears spending hours trying to fix the mess when your software destroys my neatly written code. If "there is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability," then why not join the discussion and work on it with the rest of us? I would think the Outlook team has that responsibility. Also, while the Email Standards Project was created by Freshview, they take pretty unbiased stance when it comes to testing for standards compliance in email clients. This is email "acid test" they use: Nothing in that list is bleeding-edge or caters to one specific email client, it's all pretty standard stuff that's been available for quite a while. I look forward to Outlook working more closely with web developers and designers. FYI: Gmail needs to step it up as well. If Word is the rendering engine, can Word be fixed to render the same HTML & CSS that has been approved and standardized by the W3C? We aren't asking to render anything different than what has already been approved. I understand the security in running scripts within HTML, but that is not what we are asking for.... What about W3C compliance? You've attempted to fix some of the issues in IE8. Why are you choosing to go "backwards" and force designers to think in terms of web pages versus emails? It's bad enough that we already have to test in IE6, IE7, and IE8 (besides your competitor's browsers) just to make sure that we catch everyone of their nuances. Outlook 2007 for HTML email design has been a pain to deal with. Now you want to continue with the same practice in 2010? What is going to happen when you come out with IE9? Are we going to have to a Virtual PC for every version of Outlook and every version of IE just to make sure that it looks okay and presents out message as we intend? There has got to be a way to balance easy of use with standards compliance. Adobe has done a pretty good job with Dreamweaver. Why can't you do the same with Word and Outlook? IE7 and IE8 have been a huge improvement over IE6. We're really glad to see that. Just as Ani M says above, designing HTML emails for Word has been a huge problem. In most other email clients, the designs degrade predictably, but Outlook '07 is in a world all its own. I'm always having to answer the question "why does it look weird in Outlook?" Time to go back to TEXT only emails!!!! Thanks! If Outlook users want to send charts, can't they attach a DOC or a PNG like everyone else? If you really think your users need all those bells and whistles natively in their mail client, you should be working towards giving Word the ability to export to the accepted format (standards-based HTML), not reinventing the rendering engine. How hard would it be to send and display Word formatted emails using some custom header or code and then employ IE8 for rendering email as a backup? 
Then you would actually have web development fans instead of continually alienating the community on which you will rely when everything moves to web apps, which it will. When we talk about sending HTML e-mail, I assume it is as some sort of MIME multi-part, yes? Why is Word needed to render it for viewing (not editing)? Or are people jumping to conclusions about that? I know that Outlook has a great share of the email client market... But why set yourselves apart from the rest of the world? Are Microsoft creating their own standards for email clients? It really looks like it. I don't want to send emails from Outlook if they will look crappy on my client's computer just because they don't have the same software that I do. What about interoperability? Please reconsider better CSS support in your next releases. HTML email formatting is a nightmare everywhere at the moment, and Outlook really is the worst offender (closely followed by web-based clients like Hotmail/Gmail/Yahoo). The complaint here is about the display rendering aspects, not the composing. IE7 and IE8 have made considerable improvements in this regard even if you did have to introduce 'compliance mode'. Do the same and you'll find a much larger fan base for this product. This entire campaign is your customers trying to help you, and influential ones at that - these are the grassroots designers and developers who guide small businesses, friends and family everywhere. These are the people that will create the next Flickr, Twitter or Facebook. These are in fact the same market that your Silverlight/CodePlex/web framework/.NET MVC groups are trying so hard to crack into and that your marketing and PR people are hunting for. Listen to them. There are people out there who haven't upgraded their offices past Outlook 2000 because of issues with the rendering engine. Forwarding messages on, in particular, often completely breaks the layout of the message. For those of us in Web Development, I don't particularly care how you put the HTML into an email - what I care about is how my HTML is displayed. Having to use 90s-era HTML to ensure that customers using Outlook can see my email the way it was meant to be displayed is frustrating, and I feel there are accessibility issues with using hacky table-based layouts in email. (I do and will always include a plain text version, but many people don't.) I don't dispute the power of the Word engine in Outlook for composing messages, but that's not what this campaign is about. This campaign is about receiving nicely formatted email - and being assured that the email you're sending is going to render the way you want it to for the receiver viewing the message. If I send an email to someone with nice graphics and formatting, I just have to cross my fingers that the receiver can view it correctly. It's not like a webpage where I can test it in all browsers - and I don't have time to test all email clients to see if the message renders OK. Unfortunately, if the message doesn't render nicely - I come off looking unprofessional, all because a simple standard cannot be agreed upon. Email should be easy to send and easy to receive, and I shouldn't even have to worry about testing an email I want to send to my clients. By not supporting some kind of standard in message display (and standard HTML/CSS seems the most logical choice), we are not making any progress here. As a person that does e-mail marketing, let me tell you that Outlook 2007 is a real disappointment.
It's such a disappointment that in fact I refuse to upgrade, and still use outlook 2003 because it has exactly what a lot of the people above me are asking for. A different engine for creating e-mails and another one for displaying them. The secret, that is not being mentioned here, is that your authoring engine could not author html that your very own rendering engine could render correctly. So you decided to ignore the world and replace the rendering engine. Even if we concede that there are no standards for email(debatable), if you wanted our respect and really cared for your clients, you could have started an initiative on creating those very standards that you claim are missing instead of making life difficult for designers and diminishing over-all user experience. I'm curious how many normal people really spend a lot of time generating rich HTML emails? In my experience the only rich HTML emails I receive are marketing emails. Do normal people have real world requirements for sending richly formatted emails? I'm genuinely curious. I guess my first comment didn't go through? --- If "there is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability," then why not join in the discussion and help define email standards? I would think the Outlook team has that responsibility. Even though the Email Standards Projects was created by Freshview, their "acid test" is unbiased and tests each email client against a set of CSS properties that have been around for quite a while. Outlook isn't alone in its poor compliance, Gmail and Lotus Notes test poorly too. So, this isn't a problem exclusive to Microsoft. You have covered authoring emails, but as designers we're more concerned with how emails look when they are received. If you're going to let us use HTML, then let us use standards compliant HTML. Shouldn't HTML be the same whether its on a browser, a mobile phone, or in an email client? Isn't that why the W3C came up with the HTML standards? I think your problem is that you're separating webpage HTML from email HTML when they should be one and the same. HTML is HTML. "here are some images that show some of the rich e-mail that our customers can send, without having to be a professional HTML web designer" Yes, and those formatted messages can only be viewed correctly in Outlook, whereas most of the features you tout (smartart, charts) don't even carry over properly into Entourage 2008 (Microsoft's own Mac email client), let alone email clients used by thousands. All of this would not be such an issues were Microsoft not attempting to defend a regression in functionality. Outlook 2003 contains more complete support for web standards than 2007 or 2010 - the basic functionality of rendering formatted emails has gone backwards. "There is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability. " This is true. There is no "eHTML" standard (perhaps there should be?). However, Microsoft claims to support HTML. Not "we provide partial support for some HTML tags from the HTML 3.2 standard". Please. Acknowledge that what happened from Outlook 2003 to Outlook 2007 was a step backwards, rather than packaging this as some sort of win for your locked-in customers. As Alex has raised, it is the Rendering that is important. Dan has suggested that the issue is a marketing campaign by a email service provider. That is far, far from the truth. 
The majority of software systems will use the Internet in some way and as part of that will use email most likley HTML-formatted email. Even Microsoft the X-Box communication has problems with Outlook 2007. When Microsoft makes an email "authoring" decision that causes rendering to come out WRONG this has a flow-on effect to the software industry and the wider customer base. Because Microsoft (and Outlook) have such significant roles in the market it generally means cost for other software providers to provide obscure compatibility with the Microsoft product. As the world continues to embrace a "digital economy", more and more of our business and personal life activities move online. It is very sad that Microsoft, the market leader, cannot produce a product that embraces the concept of a COMPLETE user experience ... authoring and viewing. Microsoft ... this is a message from your customers! They were appalled with the failure of Outlook 2007 to properly render HTML and are equally appalled by the decision to fail to address the serious shortfall after a further 3 years. I think this post misses the point of the objections - the problem is not directly related to the fact Word is the editor/rendering engine, nor does anyone seem to be disputing the fact that by using Word as an editor you can create interesting content. The point is that Word is a low-fidelity renderer of some aspects of standards-based HTML & CSS, and that is disappointing to many, given Microsoft's drive to fully support standards elsewhere. Given the fact that Microsoft has a strong standards-based browser in IE8, and has the editing tools in Expression Web, Visual Studio, etc. it is unfortunate that Word is still the editor of choice for this format of email in Outlook. There's gotta be a better way than just using Word to render e-mails. At least with Outlook 2003, we had IE as a rendering engine and could get some nice things done with HTML e-mails. Why not do a hybrid? Use Word for composing messages and have more of a standards based rendering engine (maybe built from IE8?) for reading HTML e-mails. The Email Standards Project is the authority right now. No one else seems to care. Is the WASP not an authority? The W3C? I hope the door is not closed on this issue... Why don't you guys create some sort of trigger that allows email marketers to take advantage of the Trident rendering engine, and make the default email setting use word. When 'Joe Corporate' composes an email, it would use MS Word then. Then, when we send out an email, we can place some HTML code in our email which will then use Trident rather than word on the recipients side. I have always used Microsoft's email clients, my all time favorite is Windows Mail. I understand Office's vision to make possible for no specialized people to produce all sorts of contents, but I wouldn't like that to be the cause of a step behind. Great blog by the way. Seriously, how hard would it be for a 3rd party to write and release an Outlook add-in that changes the email rendering engine to use IE, or even better a WebKit release? If MS aren't going to play ball on this, perhaps somebody else should come to the party. I don't personally mind that Word is used to create e-mail in Outlook. In fact, I can appreciate its ability to allow the easy creation of rich text messages. My issue specifically is with the use of word to read e-mail. I find Word to be far less accessible as a way to read e-mail than Outlook 2003, Outlook Express and other mail reading software. 
Word just causes more problems when reading mail with our specialized screen readers such as JAWS, System Access and Window-Eyes. Please, at a minimum, consider allowing us to adjust a setting that would allow us to decide how we wish to read our e-mails in Outlook. Thank you. I actually agree with the web-standards compliance debate. Outlook should absolutely be sending compliant message bodies. There's nothing wrong with using Word as the editor because it is really a great editor. Hands down... No arguments there whatsoever but what we're talking about is the standards-compliant output. We would love to have Word as the editor but asking for Outlook to send the e-mail with web standards compliant message bodies. Thanks for listening to the feedback! We really hope you just take action for the outpour of feedback from industry leaders. Ed B. I don't think my last message made it past moderation (take from that what you will). I'll just leave it with saying I agree completely with what Travis and the vast majority of people are saying on here - please stop standing in the way of progress in the world of web standards and make Outlook 2010 render emails with standard HTML and CSS. My job is hard enough rebugging for IE6 without this kind of stuff too. thanks for the blog post.. but I think it's time for a change. please, guys. "As an example, here are some images that show some of the rich e-mail that our customers can send, without having to be a professional HTML web designer. " This is fine, but why do you have to make the jobs of "professional HTML web designers" more difficult? As mentioned many times, what is so hard about letting people use word to compose their emails, but still render emails based on web standards? I do hope you will use a proper rendering engine (e.g. IE), rather than Word, for displaying emails in Outlook. It's silly not to properly support CSS in this day and age. Thankyou for taking the time to write a measured response to the Twitter campaign, but I have to take issue with your assumption that Word is the best tool for composing emails. Do you have any metrics for the percentage of users who might send emails containing graphs, SmartArt and other "power features" vs. those just using it as a simple text editor? All I can remember from moving from Outlook 2000 to 2007 was that, aside from the adding of these power features, the very basics of email composition (not even HTML rendering) went downhill. I could no longer interleave the auto-indented original mail with my un-indented replies as the formatting tools wouldn't seem to allow it. Nor could I copy and paste blocks of text around or even delete paragraph breaks without blocks being erroneously re-styled with neighbouring spacing and layout rules. In my opinion, the use of a powerful word processor with a hierarchical styling system is not the best way to quickly compose emails and responses in a relatively small window. I am in no doubt that "the easy option" (from your software engineering and testing standpoint) of Word for composition and Word for rendering produces consistent results in an Outlook-only environment. However, surely IE8's rendering engine would produce higher fidelity results for emails received from corporate environments running other software such as Notes, Evolution, Thunderbird or GMail For Domains? Surely users of Outlook will (correctly) blame Outlook for inaccurate display of emails received from external sources, and this will reflect badly on Microsoft and the Office suite? 
Whilst the fixoutlook.org site may concentrate on what you may quickly brush aside as frivolous HTML marketing emails, it is also concerned with rich emails from e-commerce webapps, such as order confirmation emails, printable offer forms and the like. I spend some of my life as a web developer creating the latter kind of rich HTML mail, and even after taking into account the need to put on my 1990s HTML hat and use lowest-common-denominator tables and font tags, I continue to spend around 30% of my time dealing with rendering quirks and bugs from Word/Outlook 2007. Not only is this infuriating for people in my line of work, but other web companies might not have the money or resources to devote effort to working around Word, so emails will go out broken for your users. And trying to work around the problem, and perhaps composing HTML emails for mass consumption using Word to begin with is not a feasible solution either, as not only are the tools and design expressiveness lacking, but other mail clients and webmail systems (including Windows Live Mail) will take issue with the odd, non-complaint CSS and markup generated. Your final paragraph comes as rather inconsiderate to the large percentage of web developers who would stand behind the Email Standards Project. Perhaps there is no official "email industry" support, whatever you might define that as, but can you really so brazenly ignore the fact that you are significantly increasing the workload of any web developer whose site sends out a rich email to a user? The level of standards support we ask for is simply something similar to that given by the browsers in common use today, so there is not such a disparity in the accuracy of display of HTML content on the web and HTML content over email. I've seen you say elsewhere that including two rendering engines in Outlook would be needlessly resource intensive, but would using the IE engine installed on the system really be such a strain on modern hardware, whilst having the Word engine loaded to take over when the appropriate HTML namespaces were detected in an Outlook-to-Outlook message? Thankyou for taking the time to read my comment, and please consider commissioning some metrics on rich email use before dismissing this Twitter campaign as the uninformed anti-Microsoft bandwagon that the tone of your blog post seems to suggest you believe it to be. One thing I have to ask is why Word was ever designed to create HTML anyway. It's a word processor, not web design software. If Microsoft customers want to create web pages, there was FrontPage and now Expression Web. Why (literally) reinvent the wheel. How many Office customers use more than 50% of the functions of Word? Do you really think there are people out there creating quality web sites and pages with Word? Even from a layout standpoint, Word is terrible. So why would you choose to include HTML rendering in a word processor, and then essentially force all web designers to deal with using a word processor's HTML engine for rendering dynamic email. It just doesn't make sense. I think Leo Davidson hit the nail on the head with his comments about Word code rendering in IE. Even if you completely take designers and developers out of the picture and focus on end users, the ones who the switch to Word was for ,they're not being served well by the current lack of standards support in Word. The code it produces is a nightmare and it simply doesn't render well in other email clients. Wouldn't it be a win win situation to improve that? 
You help your customers be able to truly create visually stunning emails that THEY can send out and have them received by everyone not just fellow Outlook 2007/2010 users and people sending HTML emails to them can also ensure that they remain stunning. No one is asking for the sun and the moon or even the ability to add script (I think most everyone can agree that would be bad), we're just asking that, for the things you do support, that it be standards compliant. Listen to the W3C, those are consensus built standards! (written on a computer running Vista, composed in Word 2007 while listening to a Zune, not an Microsoft hater) As others have mentioned, a simple switch would be awesome. Leave Word as the default, but for those of us who are authoring *HTML* emails, the rendering engine should be IE, not Word. For those who have used Outlook (Word) to author their emails, the rendering engine should be Word. I just can't understand why you'd do this. Word is a terrible HTML renderer, and everybody knows it. There's got to be some kind of compromise we can all come to? The WWW has been held up for years thanks to the shoddy renderer in IE6, and doing the same with email is just incredibly sad, not to mention irresponsible. Word just isn't up to the task at this point, it needs a lot of love (which has already been given to IE no less,) before it will be. :) i'm disappointed that some common ground cannot be achieved here. i'm certain there is a technical rationale for not allowing FULL web-standards rendering in email. i can think of at least a few malicious or otherwise unscrupulous techniques that would allow you to do things that would be a disservice to the end user. i think the solution for all involved, however, is to use a subset of CSS rendering for email - modify the rendering engine to only allow certain selectors and attributes (layout, visual formatting, but no @import, no url() no :before :after content; etc.) internet explorer already has a 'zoned' security model - how much more difficult is it to add 'email' to the list of IE zones? i really don't know any of those technical stuff, but i just hope to be able to see the html correctly, but not the outlook2007 weird display! no matter what the outlook group say (i appreciate you take time explaining it here), we all know it's a problem. i'm just wondering haven't you outlook developers never encountered the annoying html display problem? so many newsletters i got showed up ugly. or actually you guys didn't use outlook yourselves? Just to hammer the message home: I work for a web design agency. We often make html email campaigns for our clients. It doesn't matter to me, or anyone else in the same position as me, what engine you use for creating HTML. What matters is the rendering. HTML is a standard. CSS 2.1 is a standard. and outlook 2007 can't cope with them at all. There shouldn't need to be links in emails saying "if this email is mangled click here and be taken to a browser, any browser, it's got to be better than this." As for you lovely people saying HTML email is the devil. text-only versions can be sent alongside HTML emails, that is what campaign monitor does. and you can set up your email client in such a way that it will use a text version if available. Thanks People don't get it apparently... 1. You compose message in word and it gets converted to HTML. If you want CSS, etc.. turn off word editing (it's pretty quick option to find.. give it a try), and write it by scratch. 2. 
Viewing HTML in word is better than a browser because it prevents javascript etc from running. If you open up a spam email and an image loads or javascript runs, that's how they know it's a live email address. 3. People are acting like this is something new.. this is the way it's been for nearly a decade and now you're taking issue.. please understand what you're babbling about before you "join the cause". 4. This is a giant marketing campaign to get a true browser built in so they can take advantage of #2 in this list. You're a pawn.. you've been duped.. thanks for playing. Let's assume there is no accepted standard. I disagree, as does your browser team and many thousands of web professionals, but lets put that aside for a moment. Don't you have a responsibility to, at the very least, continue to support your *own* rendering standards as per Outlook 2000? See the differences here: Not going forward is one (problematic) thing, but to continue a giant leap backwards seems extremely unfortunate, and will worsen the user experience for both Outlook users and email authors. If this is going to be the case, then please fix Word which will then fix Outlook. There are broad standards and for the web to work properly, they must be followed. Companies are starting to see the failings of Outlook 2007 - like Internet Explorer was fixed up (mostly), they MUST be fixed in Outlook 2010. In my opinion using Word to render and compose HTML emails is the worst idea ever. Even though I like Outlook, I use Windows Live Mail for both business and personal mail because it simply works better (not sure what it uses for composing and rendering). Why not use Internet Explorer's rendering engine for displaying HTML emails and a modified version of Visual Studio's Designer for authoring. Then we'll finally get closer to being able to compose more standards-compliant emails which will allow one to create a nice looking email that is a LOT smaller in size because instead of using 500k worth of useless markup one can achieve the same result with a little bit of CSS and be done. Another nice feature would be to allow Base64 encoded images to be embedded inside emails. This will make it possible to create engaging emails that don't rely on active Internet connection and have the images always available on the remote server as well. This will also eliminate the need to block all images by default because with Base64 encoded images spammers can't do their tricks that use malicious php scripts as source for their images that track roughly how many people read their junk email and whatever else they can do. Even allowing some extremely basic javascript could be done in a safe manner by only allowing an extremely specific array of functionality such as roll over images, show/hide layers and other usefil things that could improve the email experience rather than completely cripple it by using Word for HTML editing. We hear and read about interoperability, openness, standards compliance but then we get things like using Word, a program that is incompatible with everything else in the first place but is also light years away from producing compatible and standards compliant anything. I understand that Outlook should make things easy for users to write HTML email. But why can't you see the problems that using Word for rendering HTML emails causes, resulting in a poorer user experience. 
Even despite the best efforts of many to ensure that emails do render correctly in Outlook, even emails that are authored in Outlook, frequently do not. Surely this is a concern? Having users being able to *write* HTML email is handy, for some users, some of the time. Being able to *read* email they have received from 3rd parties is essential for all users, all of the time. Maybe, instead of arguing over whether or not Outlook should use Word for authoring/rendering, what if Word could actually generate and render standards-compliant HTML. In my experience, it seems that MS Office products consistently live in their own little world of non-standards compliance. Perhaps this is a marketing strategy to keep customers coming back. But is there any reason Word cannot generate valid, compliant HTML? Quite often, clients will come to me with copy for an email, but they have done it in Word, and so is quite difficult to make it so the rest of the world can see. If MS Office is all about efficiency and productivity, why not make it efficient and productive for ALL aspects of the world, not just the intra-company conversations, memos, and reports? No CSS support? But you do have some sort of Microsoft corporate standard, don't you? If Windows Live Hotmail supports a good healthy number of CSS selectors and properties, Outlook SHOULD be consistent and support those as well. You work at the same company called Microsoft right? Or perhaps there is a lot of bitter rivalry with the Windows Live Hotmail/Mail group? Well played, Microsoft. But if the power of Word is all in the interface, why can't the final product be standards-compliant? Hovering toolbars could work just as well in an HTML authoring program as they would in Word. SmartArt graphics are just that - graphics - and should end up as nothing more than an image in the final e-mail (as I suspect they already do). I understand Microsoft would want to use existing code from its products to promote its ecosystem. At the very least, standards-compliant authoring should be an option. Word in Outlook is a disaster - combining the two creates headaches when one program crashes the other. This would be a moot point if Word, with all its power as an HTML editing tool, actually supported a larger set of the CSS standards. As a generator of emails to be viewed by the largest range of recipients, I don't want to be restricted to 1990's HTML - and as a recipient of emails from a wide variety of senders, I don't think it's reasonable that supporting HTML emails from Outlook should require a heap of obsolete or complex formatting handlers. The fact that an online marketing firm is pushing for standards compliance in a product which will be widely used is not automatically a reason to dismiss the request. I would contend that most of the demand for consistent display across email programs comes from people who have a professional interest of some kind - but this is a growing segment of emails as journalism and the like become increasingly electronic. Why should the consumers of formatted emails have to pay more for content because a large segment of the recipient market is locked into a non-standard formatting grammar? Do your worst Microsoft, I'm still only reading plaintext email. I agree that Word is arguably the most powerful way to create professional interesting email... however I was wondering if there are any other programs that can be used to render email in Outlook Copy pasting MS word html causes a client of mine to break her site frequently. 
When MS Word produces XHTML standard like everyone else then I won't complain. I feel I have little new to add, except my own voice, one more among the thousands who have already spoken up in support of standards and against using Word to render e-mails. I work for a company that sends out seven regular e-mails a week, to thousands of recipients, not to mention all the automatically generated, transactional e-mails. These recipients are opening our e-mails with such a wide variety of e-mail clients that if we want to have them render correctly in any inbox, we must abide by the lowest common denominator - and that is Outlook. Please, please, raise yourself above this level. Going from Outlook 2003 to 2007 was a significant hit in the ability of the client to display existing content AND to co-exist with other e-mail systems. If this were still 1993, where e-mail only traveled inside the office, never touching another company, your argument here would make sense. But it's not: Outlook should use a halfway decent HTML parser, because that's part of what it means to be a decent e-mail client these days. "Word enables Outlook customers to write professional-looking and visually stunning e-mail messages." Is there really a reason for that? I've never received a supposedly visually stunning email from an Outlook user in my life. Unless you're referring to the ones that have images, word art, and emoticons strewn haphazardly through them. HTML emails are not designed in Outlook by your standard customers. They are designed by us, using standards that are accepted across the entire internet. With the only notable exception of your software. Exchange and Outlook are hyper proprietary. That's why so many organizations are moving away from both products. While there may not be a consensus on how much HTML is appropriate for an email, it doesn't make sense to become more proprietary. Here's my suggestion: all HTML 4.01 is appropriate with no scripting or plugins. Stop trying to make email a Word doc. I don't use PowerPoint to make a website. I could not agree more with everyone else voicing their opinion above that all we want is a rendering engine that has some level of support for standards that have been around for over a decade. The logic that is in the response by Microsoft to a degree reminds me of the decisions US auto makers made years ago to ignore industry trends and consumer insights. They stuck to their short-sighted beliefs and held on to their history, and now they are in a difficult position. As the comments have stated, we don't care how you create your emails, just render them according to the now well-established web standards so we can all save a lot of time and money. Otherwise it's time to start weaning all the networks I admin off of Outlook and maybe Office in general. It's already happening anyway without any influence from me. I already put in a good chunk of support time dealing with users having attachment problems with malformed Outlook HTML emails and then I do my other job and build email marketing templates that take a lot of extra time because Outlook can't handle CSS. It's hard to believe that you don't see the value and opportunity that Outlook is missing, especially as we are finally seeing IE6 use fading. Email that isn't plain text, or start with the plain text equivalent of whatever screwed-up rich text scheme the crack-smoking programmers have devised, it should be quickly and silently discarded as "content-free junk". 
If you cannot convey your message in plain text, then the problem is that you cannot write (and likely cannot think). The solution is to learn to write, not to clutter up your communication with irrelevant eye-candy. That isn't to say eye-candy isn't nice -- but email isn't the place for it. (Also, when you reply, trim what you don't reply to, don't top-post, and indicate the difference between your reply and what you're quoting by inserting '> ' before the quoted material.) There are some inaccuracies in this post which need to be addressed - firstly, whilst Freshview are indeed one of the world's foremost providers of email marketing software, this makes them very much an extremely experienced and knowledgeable group of people- that they are service providers is irrelevant - Microsoft are service providers too - does this mean we should ignore what you have to say about I.T. as you may be biased? Secondly, whilst Freshview are heavily involved, they are not the Email Standards Project. Thirdly, with over 20,000 tweets recorded in ONE DAY, perhaps its time Microsoft sat up and listened to what people have to say - lest they continue to lose market share to other vendors who have listened AND have been able to provide and come to the party with concensus on these issues. Its a dire shame for 2009 - a year when Microsoft finally was able to resolve over a decade of antagonism with the web design community by releasing IE8 - for which I sincerely commend you - but then to be working towards a HUGE step backwards in the rendering of its email software. This make no sense. Creating mails with Word is also problem. It gives big overhead. Majority of mails are in plain text or some basic formatting. Using Word to create e-mail is like driving a Ferrari to buy bread and milk in local store. Then, on other side, when mail comes in Outlook, Word is starting up again. It is too much time consuming and standard non-compliant. What a mess. It looks like this blog is the victim of a marketing campaign as well. I guess no one could fill up a blog as well as e-mail marketers. I've been on both end of web development. I'm trying to remember a time when a designer submitted his work to me via e-mail with the expectation that it was supposed to be a useful design or site prototype. No, it never happened. I dumbstruck to imagine that one would. I wouldn't do that, either. This "web designer" feedback is unremarkable. Of course, the problem is that users are becoming educated enough to avoid clicking hyperlinks in e-mail messages. Since they're not going to the marketer, the marketer is finding another way to come to them. Outlook is fine. It's working for corporations and small business, and for the individual users that are comfortable with it, I think it's working for them as well. I'm a Mac user and small-time developer, and I quite honestly don't give a damn if Microsoft clients can't read simple HTML 4.0 emails. Too bad for them. I can't be bothered jumping through hoops to go back in time to the mid 90s. I provide a plain text version of the email for primitive clients like Outlook. As time goes on I think Microsoft will have to do a better job rendering standards rather than rely on their slowly diminishing stature. I feel that Microsoft made a good step forward by focussing on web standards with the release of Internet Explorer 8, just to take two steps back by using the Word HTML rendering in Outlook 2010. Please embrace web standards for Outlook 2010 aswell! Can you not see you are going backwards? 
Outlook 2000 had rendered e-mails better then Outlook 2007. As a web developer it is a nightmare coding e-mail campaigns for Outlook 2007 and above because of the poor standards. If you at least implement some of the suggestions suggested at we will all be happy. This is basic HTML/CSS that you used to support, so please do so again. I think Microsoft should help providing a real standard for email and not simply wait for it. In addition, everything which is being created using your "familiar and powerful tool", if I quote your post, could be respecting the best practices used by professionals for creating emails. We could then avoid another "specific hack for microsoft products" (i.e. IE6&7) that web developer use every day. You really said nothing with this post. All you needed to say was: > There is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability. Then the debate is about how good/bad of a renderer you are using instead of what standards you are trying to force upon people. Why not solve this issue the same way you solved the backwards compatibility in IE8 - with a META tag. Using a META tag, someone sending a specially designed HTML mail could get it to render using the IE8 rendering engine - and using Word rendering if the META tag is not present. I think is fine to use the Word engine as the editor for Outlook, but you should make sure it outputs compliant html. This is also very important when using Word to author blogs (a feature that I actually use). For me, Microsoft does not need to make email marketers happy. Normal users do not "design" their email. I've always thought that the HTML e-mail feature set was too large - making it a viable method of advertising (a.k.a. SPAM!) There's no real reason to support fonts. If the e-mail client has a nice default font, then using strong, em, and so on - the meaning can be conveyed. Can't really get rid of links, people need those - boy would it be nice if we could. Attachments are necessary - but inline images probably aren't. Inline images attract people to style their emails for marketing purposes, but people sharing photos can (and usually do) send them as attachments. Images are already restricted because they cause a visit to a particular URL, which can be identified for tracking purposes. I would like to see images removed from the "HTML Email Specification" - which apparently needs defining. *wonders if the W3C already defined it, but can't be bothered to look* I have made several systems which send out HTML e-mail, and the problem is always to make it look good when opened in Outlook. I'm not overestimating if I say that at least 50% of the time is spent tweaking the format to please Outlook. The article above does not convince me at all that using Word as the editor/renderer is a good idea. PLEASE give us standard HTML! It's pretty simple really. proper HTML should just render properly in Outlook. Saying there's no definitive rules about what standards to support is a cop out. The HTML standard is pretty mature, guys. CSS has been around a long time. HTML and CSS are not intrinsic security risks. Rendering it properly has been achieved via numerous browsers - except one of the most popular email applications on the planet. Seriously, it's about utility for your users, not about simplifying things for developers... Properly formatted HTML means that things display as they were intended, which improves the user experience. 
Even if you're not impressed by the outcry on Twitter and on this site, please just go back to user experience first principles - which is that properly rendered HTML will improve their experience of email and, therefore, your product. I love Microsoft's work, but on this issue, I really have to agree with the vast majority of comments: we need to be able to read emails as if they were in a browser window. Perhaps a healthy compromise would be that it's like seeing a document preview (same as a PDF attachment for example) - and click on a tab at the top of the message and see it in the same area (rather than opening up the browser separately). An additional setting along the lines of "Always show in preview mode from Sender X".. That way you keep the Word engine, and readers can get the email easily in a decent look. I'm no technical guru, but surely there's a better solution.. Thanks for replying to the outcry among developers about your decision to use Word as the HTML rendering engine for Outlook. If you don't want two engines in Outlook, please then fix the HTML rendering engine in Word for displaying e-mails. You're bringing out a new version of your software, right? There's got to be progress in your software for basic things like rendering HTML in e-mail. Why not just fix Word to support HTML and CSS web standards? This would help in so many areas of everyday usage. Pasting content from the web into a Word document would work better. Pasting Word content into CMS systems would work better. Sending artwork from Word to other email clients would work better. Maybe this campaign should be called FixWord.org! If we're stuck with Word as an email render engine, then please FIX WORD to support web HTML CSS standards that the ENTIRE REST OF THE WORLD USES. 'nuff caps. To all web developers: I don't care about web standards in MY FRIGGIN EMAIL!!!!!! If I want to see a promotion, I click a link to your site... Uh... I don't care how you implement composing and rendering, but it must be interoperable, i.e. read and write correctly messages that follow the relevant standards. There are lots of clients other than Outlook out there (Mac users, anyone?). Rendering using Word must not be required for correct rendering. This means: * standards-compliant HTML rendering * better MIME support (i.e. not making a mess of PGP-signed messages; this does not require PGP support, only decent MIME support so that the text content is displayed) * not making a mess with encodings, i.e. not marking windows-1252 as iso-8859-1 (Outlook Express does it correctly, why does Outlook make a mess?). All this is the bare minimum for interoperability. I stopped using Outlook because of these issues. *It's not a decent mail client any more*, it has become only a client for internal, tightly integrated, Exchange-based enterprise messaging. Just adding my voice to the many. I don't use Outlook, but let it be clear that this is very damaging to newsletters, campaigns, and other HTML content that could be sent to people's inboxes. There is a consensus and agreed standard for HTML content in email, and that is that it shouldn't use the Word rendering engine. The manner in which HTML e-mail is rendered in Outlook 2007 represents a significant step back in terms of accessibility for blind and visually impaired users who use Outlook. Using IE to render HTML in 2003 worked well for screen reader users and the 2007 method of doing so was a regression.
To have full access to HTML e-mail, a screen reader user has to take a secondary step of viewing the e-mail in the browser--a step that takes additional time and keystrokes that no one else is made to suffer. With the supposed "commitment" to accessibility that Microsoft likes to tout, the continued persistence will lose me and many others as customers. With many other solutions providing functionality that Outlook does, It is unlikely that I will upgrade to 2010 or continue to use 2007 as it directly affects my level of productivity. As a developer I spend a lot of time each week trying to explain to customers why the clean, efficient modern code and look and feel we create for their website can't be duplicated in their email marketing. It just doesn't make sense. I think Microsoft has responded well to feedback over the past years and has worked hard to create good, standards-compliant browsers IE7 and IE8. Why are you moving forwards in this area and backwards in email? As for "there is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability" - come off it! As a Microsoft .NET developer of 8 years the more and more I learn about Microsoft the less I am inclined to continue to use/recommend their technologies. I've recently been fortunate enough to use more open source technologies and feel that the future is with them for the sole reason that they evolve. Rendering the right HTML is not just an outlook problem but also an IE (which is the most popular browser in the world) problem. I hope for the sake of Microsoft that you listen to the developers because we hold the power now to direct usage. It is simply not good enough to say "we do it this way and you deal with it". Sooner or later developers will not tolerate this and you will lose. I know most of the people commenting here are not your customers. But think about this: while you might make today's big corporations happy for now, the people who will create the Apple and Microsoft of tomorrow are the ones positing here and taking part in the campaign, and they will probably not be using Microsoft products. If you all manage to make web pages that work on IE, FF, Opera and Safari, just spend 5 more minutes to make sure your email looks ok on all email clients. Who knows, maybe MS will give us Outlook for free, so we can test emails before we send them out. :) MS is the best, if only they could make VS to work 250% faster then it works now, i would not have to spend 1 000$ on new pc just to work normal with VS 2008. But there is no reason to complain, as long as they give us MSDN subscribtions, we get for small fee all software that would cost us like 100 000$ per year. They are not that bad, you only have to understand that they need to render it with word so that all those cool stuff made in word would show up correctly. Did you ever see how clients make emails? How stupid 90% of PC users are? What all they do and in which ways? They are c/p gods, they copy 10 different encodings in one email :D and you want IE to render that? They call MS and ask why does not autocad drawing show up as image in email, etc... Its not MS that sucks, it's those 90% stupid users that suck. MS, congratulations on trying to make milions of people happy, while each one of them has different demands. 
And don't mind these so-called web developers and e-mail campaign designers (is that a real job?!?), real developers will always understand your problems and appreciate your efforts - being a developer, either web or desktop, is not an easy job, we are not in DOS anymore, it's not about 16 colors anymore, it's so much more, and each month it advances. You are doing a great job. (from an IE fan - using it since 1995 and loving it, although you did wait too long between IE6 and IE7) I'm not an IT guy or a web designer etc... just a guy who uses Microsoft Office. Now that I have a new laptop with Vista (another story altogether) and Office 2007, I'm finding that features I had in older versions no longer exist. In Outlook, just as one example and since this is the topic here, there is no "Out of Office Assistant" anymore. You have to be on an Exchange Server. Why? Independent people can't be out of the office? Now I'm scared to ever upgrade again! As to what end users want, leaving aside a marketing department's campaign mail, is it really much more than emboldening, underlining, spell checking, image insertion, and document attachment? Isn't that what 99% of users actually do with email? Why do I need Word for that, really, where's the requirement? I understand that you want people to use your products, and that you wish to continue selling more products, but driving spurious use cases to try and bolster an application's use is a bit too much. Don't just foist 'features' onto end users and then tell them it's good for them. As to Microsoft's commitment on interoperability in documents, and Web standards compatibility, your scorecard is poor; you could do a whole lot better. Being interoperable and standards compliant doesn't just mean you can send your MS-application-produced files to another MS application user. Come on, get on with the job of real interoperability, and adherence to standards. That's a hard job in itself. But if you make the real commitment there, you won't get these reactions, and you'll find that people will really appreciate it. As a REAL Outlook user, I'm glad it supports "simple" HTML. I have no desire for CSS or anything else that is not needed in regular office email correspondence. Even if the renderer was improved, what would it do when I click on reply and start editing? If the editor could not support everything the renderer could, the display of the email would change drastically. I don't feel any bit sorry for email spammers. Nobody should be using the SmartArt and charting tools in the body of an e-mail - how can you be sure that the recipient not only has Outlook, but the same version as you? Not to mention the large size they become, and that many rules-based filters just block them. You caved and gave us standards for IE8. Please do the same for Outlook. I guess the impact of the failure of the "embrace, extend, extinguish" strategy in the IE team hasn't been felt in the Office team yet. In our company two thirds of our computers run XP and one third are Macintoshes running Leopard, with one Windows Server and 4 Linux servers. We use Outlook 2003 and Entourage, but we're evaluating Apple Mail since it will support Exchange, and we're also evaluating Win 7. To make a long story short, it's all about compatibility! We're not going to upgrade unless it's a better product and it works. Windows Mail is simple to use and it works; the only reason we don't use Windows Mail is that it lacks Exchange support, and Entourage 2008 is the best of the lot.
Pity you didn't make a PC version. Both passed the acid test set by the Email Standards Project, which begs the question: why not use IE8's Trident, or even better a modern rendering engine like WebKit, for Outlook? Whose idea was it to use Word to render HTML in Outlook 2010? Come on, it's a word processor. If this is a tactical decision to lock in customers, it makes no sense. That's the reason why in our company we use Java not C# (by the way, I actually like C#) and we use Apache not IIS, etc... Think of the number of businesses using branded HTML (plus CSS) for marketing campaigns. What will end up happening is that people will not upgrade - which has already been a problem for Office 2007 - or, even worse, switch to a different email client, or switch platforms. Keep up the good work guys and PLEASE don't listen to spammers and their thinly disguised "we're about the standards" campaign... Respect: Internet standards, web users and designers. Stop: Alienating millions to make profit. The number of emails I get that do not display correctly in Outlook 2007 is a disgrace. Is it the email author's fault? Oh, no it can't be, because when I click on the link that says "If you can't read this email correctly, click here to read it on our website." it looks fine. This whole shambles needs a re-think. I thought I had detected a change of direction in Microsoft now that Bill Gates had stepped down; perhaps this has not reached the Office team yet. Oh, and I'm not anti-Microsoft, I have made the company I work for 100% Microsoft software based where possible. Windows Server/client, SQL Server, Exchange, Commerce Server, Dynamics NAV, Office 2007 etc - so you can see I am a pretty good advocate of Microsoft. The mess that is Office 2007 and in particular Outlook 2007 has seriously had me thinking about alternatives. I have never, ever seen an email with SmartArt or charts embedded in it. Please leave Word for composing and displaying communications intended for print and let Outlook use a standards-based model. I'm good with using Word as the editor in Outlook. But I would LOVE IE8 to RENDER (display) the mail. I think you missed the point. Let's hope you get it. Your rendering choices are so different to everyone else's. MS used IE rendering previously, which while not perfect was much better than the 'Word in Outlook' approach. Your MS customers want their emails to look the same everywhere, including in other non-MS email clients. Your stance is unhelpful. It's also not like we are actually asking much. We want you to deliver a product that displays our craftsmanship in a professional way. We want more and we want better, please. The way Internet Explorer has embraced standards since version 7 has been very helpful and inspiring to the community. I always find Outlook utterly frustrating when we are sending out our HTML emails because it is the only client in which they do not display well. It is a shame that they work in an older version of Outlook and yet fail in the most current one. You need to take a step forward, not stay in the past, and allow the world to see emails in the same way. One other point. Security is not an issue in CSS. So supporting it in a useful and meaningful way would just make our day. "it's the best e-mail authoring experience around" What about the viewing experience? How can you release e-mail software in the 21st century that cannot display CSS-formatted messages properly?
There are many useful cases it's needed :/ I hope Microsoft will fix another problem - why isn´t it possible to use Word for individual ad-emailing INCLUDING attachments? Since Office 2000 users wait for that - maybe Office 2010 will help to make that possible! :s ababiec - you're the first person I've seen on here who, like me, is a user, not a marketer/web designer. Most businesses leave online images switched off by default for their users, so we purposely *break* marketing emails to protect ourselves from spam. We don't care what marketing messages look like for the most part. Understandably, marketers do. I agree that the rendering should be standards-compliant as far as can be made reasonably secure. And using IE8 for rendering would be great *if it worked seamlessly*. We had separate editing/rendering engines with Outlook 2003 and before, at least those of us who used Word as our editor did. We were using IE6 to render, and guess what? IT SUCKED. Relatively simple emails used to suffer from rendering inconsistencies. (Don't even get me started on IE rich text editing. Numbered lists in particular used to make me want to throw things.) Yes, ideally the IE rendering engine should have been fixed. But in the real world where resources are limited and features are prioritised, I'd rather have a good experience for the basic emails from my colleagues which *do* occasionally contain embedded objects. The marketers are a special-interest group. While their points are valid, they are *not* representative of your user population. So, Microsoft, show us you have nothing to hide; tell us what the telemetry says. How many emails aren't rendering properly? What percentage of users even allow images to display? I know this argument isn't about external images, but it would be one indicator of how many users care about rendering fidelity for marketing mails. The argument for using Word for rendering HTML mails are somewhat skewed towards the editor functions. I think for _display_ of emails, MS should use the browser, preferably even the default bowser and not just IE. You can not edit the preview window anyway. However, as an email editor the arguments given are much more convincing. By the way, are you aware that you CAN show any HTML mail from Outlook 2007 using IE? All you have to do is open the mail, choose action+other actions+view in browser. It should be a snap for MS to supply an automatic feature within Outlook that does exactly this as a preference (skipping the display of the email in Word, of course). Again, there is no point in using Word to display an email in your inbox. Fix it! Please get with the program and support email HTML standards. You probably haven't seen these, or you'd be behind them 100%, but you can find more information here: Email is about communication, and all communication is based on standards, otherwise we could not understand one another. Many people do not understand why you would use your funky word processor to interpret HTML. Please reconsider your decision prior to the release of your new software. I'm sure you can fit this new feature in before your release date. Thank you. “There is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability.” Why does there even need to be one? It's HTML for crying out loud. Are you by any chance one of these seasoned prevaricators that sits on the W3C HTML working group? 
This kind of obfuscated language is intended to sound like it highlights a problem, but all it does is obscure the fact that there is NO problem, and throw a feint to try and distract people. If you can't understand English, someone will help you geeks out... just like the following post... so don't worry.. :D The whole point of fixoutlook.org is about the rendering, not the authoring. Don't get me started about every middle-manager's wet dream of sending bloated Word docs with fancy (and useless) 3D bar graphs to their entire staff, but that's not what this campaign is about. It's about asking Microsoft to provide a rendering engine for Outlook that will properly render HTML with CSS. How hard can that be? This statement reeks of smug contempt for people who try to make proper HTML e-mails that are as lightweight as possible. And, TechieBird, "in the real world where resources are limited and features are prioritised"? Please, this is Microsoft we're talking about. It should not be above them to create/implement a decent HTML rendering engine. Microsoft and standards out the standard. I am astounded. For years I was blissfully ignorant of what Microsoft was doing with most all their products, but that stopped the day our company decided to stop snail-mailing clients their info packets and send them via e-mail. The e-mails weren't super large or complex or image-laden, and yet Outlook obliterated the W3C-validated code. Then the tricky education of what Outlook supports and doesn't support began. It's basically a game of "if it was 1993 and I was coding this, how would I do it?" It doesn't matter that this anti-Word-rendering effort was started by a "maker of 'e-mail marketing campaign' software." Microsoft's blatant disregard for web standards is shameful and affects us all. HTML e-mails are not going away; in fact their use is only going to grow as more and more companies choose not to print and mail. The question now is do we want e-mails that are small in file size, quick to load and easily read by everyone, including those with disabilities, or do we stick with 1993 technology until Microsoft's decline is far enough along that the entire planet couldn't care less about their stupid decisions? Integrate the Word HTML engine in Outlook. That's really smart... Go Microsoft, Go. If you don't get it, you don't get it. Dumb until the end... Go Microsoft, Go. Why not use the IE8 rendering engine? It doesn't fully comply with standards, but it's a lot better than Word... It should be a no-brainer of a decision, but even this one you don't get... Maybe you have become too big and too blind... My sincere advice: be humble... Cya If you want to use Word to render e-mail in Outlook, that's fine. But use IE to render HTML in Word. And with all of the resources available to the Office team, it is not credible that you can't manage to make Word produce proper markup. Your attitude at present is fuelled at best by laziness, at worst by contempt for your customers. I love Office, and I love Outlook, but this is nothing less than unacceptable. Oh, and HTML should be used for e-mail composition, not a subset thereof. I agree, the Word layout structure is one that I would rather do without, and many times when there are formatting issues, this is my first troubleshooting step: change the message to HTML or plain text just to avoid the hassle. I agree, you're allowing IE8 to be optionally installed with Win7, so why not allow this feature in Outlook 2010 to be disabled or removed...
Obviously, we are supporting the product because it is the only one in the market that does what it does, why not sympathize with the end user and let them have this preference... I mean, they paid $100 for outlook (2007), why not... Right, there is no standard for HTML in mails. Most of the time, it's difficult to know if the mail will be displayed correctly (as designed) on the client computer. That's why I say my users not to write a mail which can be understood correctlly if it's well displayed ! If they need to apply styles, they join a file. I laugh when I see the examples ! Yes, it's great... when you have Outlook or Word... But for others users ? Do you think people will migrate to Office to read mails ? I'm not sure they want/can. As a developer of a CMS, rather than an e-mail marketer, I'll throw my perspective in. Web apps need to be able to send out e-mails, and those e-mails need to be styled appropriately to match that webapp. i.e. branding. Therefore HTML/CSS is needed to achieve what people will expect, and it is needed to a level beyond what regular people might need when sending/receiving e-mails. Outlook 2007 is awful, and even if there is no 100% agreement on how to do things, 99% of what HTML and CSS does can be made to work in all other e-mail clients other than Outlook 2007. This creates a particular problem, and basically means that you can't achieve certain things unless you are willing to invest in implementing a from-the-ground up design for e-mails - and even then, it's severely limited. Now taking things further, our software needs to be able to embed web content within e-mails, to summarise new things, or for various other reasons. For example, in a newsletter we'll want to show news summaries, and those news summaries will be styled in standard XHTML and CSS - in fact we need to embed our CMS's CSS because it needs to be rendered using a common subsystem that controls how content relating to the website should display. This kind of thing should not break, it's not rocket science, but currently Outlook cannot handle mildly elaborate CSS scenarios. Writing a whole parallel set of CSS just for e-mail is absolutely crazy, and very limiting if any website content author had done precision design using non-stock styles. Now how can MS fix this? All you need to do is something like allow a new meta tag like <meta name="email_standards" content="1" /> or whatever. If you get an e-mail like that, render it using Trident. If someone starts a reply to it, re-render it using Word if you feel you have to - at that point it's not so bad as it's already been read. The standard of the output from Word has always been a matter of derision, and the recent spat over 'open' formats has not changed anyones opinion for the better. Stating that there is no current standard is dis-ingenuous at the very least; there are a number of standards for HTML, all of which have been abused by MS in the past, and most of which have been adopted by ALL other email engines. That's ridiculous! Sure, technically there's no email HTML standard, but we web designers want to be able to design emails like we design web pages... with STANDARDS-BASED HTML markup. Saying there's no email HTML standard is just an excuse. For us web developers, our issue isn't that you're using Word to create the emails, it's that you're using its *rendering engine* to *display* the emails. If you want to use Word for message composition, fine, but please use IE's rendering engine for displaying emails! 
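The opt-in idea suggested above is easy to picture in code. The sketch below is purely hypothetical - the "email_standards" meta tag is the commenter's invention and pick_renderer() is made up for illustration, not a real Outlook or Word API - but it shows how little logic a renderer switch keyed off such a tag (or off a custom Content-Type header, as suggested later in the thread) would actually need:

use strict;
use warnings;

# Hypothetical sketch: choose a rendering engine based on an opt-in meta tag
# in the message body. The tag name and the engine labels are assumptions.
sub pick_renderer {
    my ($html_body) = @_;
    if ($html_body =~ m{<meta\s+name=["']email_standards["']\s+content=["']1["']}i) {
        return 'browser';   # hand the message to a standards-based engine (e.g. Trident)
    }
    return 'word';          # otherwise keep the legacy Word-based rendering path
}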
Utterly disingenuous, as many before me have already pointed out. @ababiec - if you're not a developer, what do you care what's going on under the hood? Do you even understand what you're talking about with your "CSS or anything else not needed" comment? If you're a "REAL user" (a developer isn't real?) using Outlook to compose, presumably you neither know nor care what kind of HTML is generated, so long as your message looks right at the other end. And that's the point people are trying to make. Word-generated HTML is such an unholy mess that it doesn't render properly in other clients, and email from sources other than Outlook/Word often doesn't render properly in Outlook. This is more short-sighted monopoly-think, that imagines giving a horrible experience to users of non-MS products will coerce them into the MS camp. With the usage stats of non-IE browsers steadily climbing, I say good luck with that... jevoenv said: "It's not all about e-mail marketing either. These days I am at MSFT and even our PA can't get the formatting of e-mails to wish co-workers a happy birthday come out correctly." How about this: "Happy Birthday." Why do you feel the need for fancy graphics, bizarre fonts or precise positioning to deliver a simple message? Many email USERS hate the crap that marketers and those who wish to emulate their way feel that they have to put on our screens. If you've got a simple message, deliver it simply. If you've got something more complex, put it on a web server and mail me a link. I'll ignore that just as easily but won't hate you as much. I like standards and microsoft :-) Hey, just let them merge ;-) Microsoft Team, I am not much of a web developer, I'm more of an end user. I have no axe to grind with MS; I am not a Linux disciple. It's fine with me that Microsoft makes gobs of money, and I do not feel the need to vent my spleen about bugs/crashes/perceived monopolies, etc. I love using Outlook to compose email messages, but really, how many people create graphs and Smart Art IN the email? Wouldn't they be much more likely to compose stuff like that in Word (say, in a report), then cut and paste? (Thank you for the ability to do that. BTW!) If you're addressing richness just from the email sender, you're addressing only half the user experience. Those of us who send email also RECEIVE email. As a marketer who uses the email marketing programs, I want the message I spent hours on to look as good in someone's inbox as it does in my web-based editor. And as clever as the Business Contact Manager Home module is, I will never use it for email marketing just to address rendering issues in Outlook email. CAN-SPAM puts the fear of the devil into marketers (as it was designed to do), ISPs limit email traffic, and the features of a dedicated email marketing program make it much easier to manage lists and track results than BCM. Thanks for addressing the concerns, Microsoft. I love your product, but like all of us, you've still got some work to do! While you're at it, can you stop calling your proprietary character set "ISO-8859-1"? I'm sick of seeing odd characters pooped all over my email! - thanks There are two main issues here. Using Word as the HTML composer means that HTML emails sent through Outlook will have badly formed HTML, and so are likely to render badly in other email clients if people make full use of the possibilities available to them - unless the quality of HTML output from Word 2010 is better by several orders of magnitude than from previous versions of Word. 
Netscape 4 could output generally valid and concise HTML more than 10 years ago, but every incarnation of Outlook produces code that is more bloated and less standards compliant than before. Given the accepted move within the web industry to move towards standards, it is unbelievable that email is moving in the opposite direction. You say that this is to facilitate rich content emails between Outlook users - but (a) only the tiniest minority of users will make use of this, and (b) why do we have to put up with Word continuing to output such abysmal HTML? But the MORE IMPORTANT problem is not to do with sending but with receiving. The majority of email users are not using Outlook, so anyone creating emails to send to a general audience has to accommodate a variety of different email clients. Most of these have some reasonable level of support for web standards - except Outlook/Word, which fails to recognise a huge number of standard and common HTML elements. This means that anyone using Outlook and receiving an HTML email from an external source is likely to have a substandard rendering of the message - is that what you want? For every email that comes in to look wrong? I am sure that if you did a straw poll of Outlook users, you wouldn't find one in a thousand who supports that state of affairs, but that is what you are giving them. WHY?! I'm a web developer and regularly part of my job is to build HTML emails for clients. Clients are well aware of how fantastic a properly built HTML email can do for their branding. Unfortunately, if they see they're designs in Outlook 2007 they're shocked and blame the developer. I put a lot of effort into developing my HTML emails to render properly in a range of email clients and webmail services, but Outlook 2007 is impossible. Besides the rules imposed on us (no background images, no positioning, etc) we now also have to deal with a plethora of HTML rendering bugs that just can't be solved. For example, font-family has to be declared on every single table element, paddings can only be applied to td elements, and what on earth is going on with the crazy column spacing bug?!?! () Please please please... don't leave us developers out. I understand that its ideal to have the same product render and edit... but if that's the case then FIX THE WORD RENDERING ENGINE TO THE STANDARDS OF IE8! I cannot tell my clients that they can no longer have graphical, multi-columned HTML emails because the market leader no longer supports them... they'll laugh in my face... this technology should be progressing not regressing. Ever since Microsoft moved rendering of Outlook to the MS-Word platform email has been problematic for the significan part of the internet that does not use Outlook or Outlook express. There are different email programs because people make choices. Do we all drive Buicks? Buicks are great cars with lots of room and creature comforts. Should we build out highways to accommodate Buicks because GM says it is best? Oh, wait. GM is going Bankrupt because the market shifted and the worlds economy tanked. Likewise with email. Outside of a corporate environment I have not used Outlook yet I have a significant bit of influence without it. What of the 10's of millions of others that also decline to use outlook, should we have to deal with the bloatware crafted emails from Outlook? Should our emails not be as designed simply because one vendor says it's "Built Like a Rock" Follow the standards as the rest of the world, or change them for the whole world. 
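For what it's worth, the constraints described in the comment above are why hand-built campaign emails still tend to be assembled as multipart/alternative messages: a plain-text body for people who want it, plus a deliberately conservative HTML part (tables and inline styles rather than floats or background images), since that is roughly what the Word-based renderer honours. A minimal sketch using the CPAN module MIME::Lite - the addresses and wording are placeholders, and the markup choices simply mirror the workarounds described in this thread:

use strict;
use warnings;
use MIME::Lite;

# Placeholder content: a single table with inline styles on the cell.
my $html = '<table width="600" cellpadding="10" cellspacing="0"><tr>'
         . '<td style="font-family: Arial, sans-serif; font-size: 14px; color: #333333;">'
         . 'June newsletter: tables and inline styles in place of floats and background images.'
         . '</td></tr></table>';

my $msg = MIME::Lite->new(
    From    => 'news@example.com',
    To      => 'reader@example.com',
    Subject => 'June newsletter',
    Type    => 'multipart/alternative',
);

# Plain-text part first, HTML part last: clients render the last part they understand.
$msg->attach(Type => 'text/plain', Data => "June newsletter: read the HTML part for the formatted version.\n");
$msg->attach(Type => 'text/html',  Data => $html);

print $msg->as_string;   # or $msg->send to hand it to sendmail/SMTP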
Didn't we already have this debate with IE? Standards simplify things for everyone who handles the marketing and technical aspects of managing e-mail. Simplifying web and e-mail design only stands to make your customers more money in the long run which seems like something you'd want to do since it would help them to justify paying for Office upgrades. No doubt, Word makes a great e-mail composer. But that's not the point. The point is that it should also fully support HTML/CSS e-mail when viewing e-mail. Someone has missed the point of fixoutlook.org here. @TechieBird I agree that having separate engines is a a pain. But have you ever seen an HTML file that Word created? It's filled with a lot of useless code that doesn't even render properly. The same thing that happens when someone using Outlook 07 sends an email with HTML objects (like SmartArt, etc) to someone with Outlook 03, or any other email client. And please, please - understand the difference between spam, and the email that you actually want - like transactional email from an online merchant (perhaps notifications that your package has been sent, or that your hotel room has been reserved), or new information from companies that you're interested in and have signed up for personally (retail, arts, etc). Keep in mind that marketers are part of every business, and business is Microsoft's bread and butter; so we're far from a special interest group. The bigger issue here (and the reason that people are supporting the email standards project) is that MS should adhere to the VERY widely accepted HTML standards already in place. However, if they're too concerned that doing that will affect security, then why not work with a standard that has been accepted by several other major email providers, including Yahoo, GMail, and others. A lot of this back and forth from this blog and others is starting to sound like the mess that came from the IE6 team, until Firefox started taking serious market share from them. Our text editing application is better at rendering HTML than our Internet Browser,.. what!?!?! And if the word generated mails are turned into html anyway, why not keep rendering using ie? Fact is that while IE can display any Word html perfectly, the same sure cannot be said the other way around, so what Microsoft is really saying, is that we don't care about rendering just as long as we can handle mails between Outlook Clients. And ababiec, not all HTML mails are spam,.. Creating email newsletters is one of the services we offer to our clients. We do this by creating a "web standard" design that complies with the majority of email systems. Afterwards we downgrade (!!) our design to the only system that does not follow standards: MS Outlook. More than ever, clients are not willing to pay for this extra work. The result is a growing number of email campaigns that use standard HTML + CSS. Users of MS Outlook will not get the best viewing experience, but that's a choice MS made for them, we and the rest of the design community should not back up mistakes for MS. This is typical Microsoft hubris and dominance. They want to make it such that every non-Microsoft email program has to now support MS Word format XML in order to display the message. It's one more step towards their new document standard that they keep wanting to push on everyone without consensus approval. Do what I'm going to do. If you get a message you can't read, and know it's not spam, keep replying back that you can't read it. 
Even if you can slightly read it but part of the message is messed up -- reply back that you can't read it. If enough of us do this, Microsoft will be screwed. I gave up on MS products a long time ago. It was around the time when they started the wga-thing. Now I am running Debian stable and nothing can get me back. Not allowing the multiple benefits of web standards to sway the decision on Outlook 2010 was plain wrong. Mistakes are good for us, one way or another, when we learn from them and work to correct them. Please allow one benefit from this mistake to be the appropriate Microsoft person/people deciding to adopt web standards for Outlook 2010 HTML rendering and in doing so, make friends across the business and design (and many other) communities. Thanks, cheers, -Alan. From my perspective the argument is as follows: Certainly MS wants to enable its end users to create graphically pleasing emails via Outlook - which is just fine for me. However one must admit that in real (business) life, only a few users are really using this capability beyond some basic formatting (bold, font, colour). Most messages in my mailbox that want to make a graphic impact are machine-created ones. Either they are mailing campaigns from people / companies that I invited to mail me. Or, and this is the key point for me, they are "systems communications" where you receive automatically generated mails. And that is where MS is really missing the point. Why make it hard for developers to provide a decent customer experience - even if it is just a mail reply message when you signed up for just another account somewhere? It definitely is a legitimate aim to provide decent capabilities for branding. And I do not see that MS is making any good progress here. So: Please divide between editing / creating and rendering... O. It must be nice having your head so far up your @$$. HTML emails should follow HTML standards. There. I've simplified it for you. "There is no widely-recognized consensus in the industry" Please define the number of individuals required before the outlook team recognizes the de jure standard to be a consensus? 18,000+ individuals tweeted about this in 24 hours. How many do you think you might get before the release of 2010? This isn't a twit-head only campaign. Most people join twitter and send only one post. Guess how many of us can be convinced to join twitter just to respond to this issue? Does your team really want to create a grass-roots campaign that casts negative light on your products? "There is no widely-recognized consensus in the industry about what subset of HTML is appropriate for use in e-mail for interoperability." Really? So when an email has a multi-part message in which the type is "text/html", you somehow are unbound by HTML standards? Please stop confusing me. As has been said repeatedly here, the issue is what engine is used to render the HTML in Outlook. What's the harm in using the IE engine? The benefits seem clear. We don't need all of these features. They are rarely useful. Keep it simple. If you need this type of presentation, then use Word or Powerpoint! 90% of emails can suffice as plain text or standard HTML! Please make standard HTML/CSS available in Outlook! Instead of being confrontational with your end-users, why not try to open a dialog? You're trying to provide a solution for people, but ignoring their point by not opening a dialog with them (even though it's not all people) only serves to alienate those people. 
This reduces your sales rather than increases sales. Why can't there be a compromise where you support properly formatted html emails and render them with IE8's engine? Keep Word as the way emails in Outlook are composed, output html as you see fit, but when an email is received, why not switch over to IE8 for the rendering? I don't see that addressed in your post. IE8 would have a hissy fit if it had to parse Word-generated emails: Only Word can render Word-HTML properly. Luckily, I think it's pretty rare for people to bother with anything beyond bulleted lists in normal emails - who could be arsed to generate a pie chart in an email, for example? So it's only an issue for people sending out emails with complicated layouts and lots of images, i.e. for marketing purposes. It's a better idea to send out plain text or very simple emails anyway - most corporate email environments will block images. And if rendering complicated layouts is so fraught with danger because the main email client uses a weird rendering engine, then, weirdly enough, MS are doing us all a favour. Ian Muir said: "Additionally, implementing the full standard for HTML, XHTML or CSS would open up all kinds of fun new tools for spammers." Way to prove you don't know what you're talking about. HTML, XHTML and CSS are all just harmless plain text. There is no code executing in them, only tags to be interpreted by a renderer. Javascript execution would be a problem, but nobody is suggesting Outlook allow javascript to execute, and disabling its execution in an embedded IE instance should be trivial. Given that Outlook 2007 will be in used for many (over 10) years to come. Whatever is done with Outlook 2010 emails will still have to be kept simple enought for Outloook 2007 to display. Microsoft wants users to be able to create emails using a Word-like interface and expect it to be rendered consistently by Outlook recipients. Web designers want to be able to use standard HTML to build emails and have them rendered correctly in Outlook. So why don't you support both? Emails sent from outlook should have a different content type, e.g. Content-type: text/html+word. Outlook can then choose a rendering engine (Explorer or Word) based on the content type. Anyone see a feasibility issue here? All or nothing is a fallacy. The first fallacy is that Microsoft Word could not be made to create standards based Documents. The Office family has started using XML and in doing has stuck somewhat closely to the standards. The tools available for standards HTML are there or could be there by supporting the software community. The second fallacy is that this must be all or nothing. Go ahead default to a Word HTML generator that is not using the standards but give your users who care a chance to be able to stay your users rather than trying every other app hoping we can find something better. Overall I like Outlook a lot. It has its issues but what program that does everything it does is not going to have some issues, I get that. About 20% of my business is training people to Outlook and I would love to be able to set my clients up with something that will work with standards. I hope that Microsoft will act on this and help to move everyone forward. You see, if more people can use Office they will use Office. Creating Islands of isolation is not the way to build market share. I work for a $14mil nation-wide not-for-profit. 
Outlook's proprietary features have caused much embarrassment when members of my staff tried to use them in emails to non-Microsoft using colleagues, friends, and as we are a not-for-profit, donors. Microsoft has caused my organization to appear unprofessional, and because of this, it is now our company's policy that RTF is turned off in all of our clients. My staff only cares about the rich editing features to the extent that everyone in the world can view their effects. Mr. Kennedy, please listen to the comments on the many posts here and fix your product. If you won't fix your product, then shift the email paradigm before Google beats you to it. It does not matter if it being done by a e-mail marketing company, we need standards for e-mail! What about when you send to people who are not using outlook????? Microsoft is not the only game in town, if you keep going this way like you did with IE you are going to find yourself in losing customers! WAKE UP MICROSOFT!!! Most email clients use web browser components to render emails, and therefore get support for all relevant parts of the web standards. Why can Outlook not use MSHTML to render emails? Posting inline replys to to HTML emails doesn't make any sense, so who cares if whatever rich text editor you use (Word in this case) can't open it? And while I'm here can we please have the broken quoting fixed in plain text emails so people don't have to use things like (or send emails that are a complete mess). Deja vu? This sounds all too much like the arguments made against changing standards support in IE6. It started when a Microsoft product was found to handle things in a nonstandard manner that the larger community of developers found troublesome. We're now on the second phase of the process, where the software maintainers hold that their method and strategy is fine and proper, even if it breaks their fundamental compatibility with the rest of the community. Now we'll move on to the last phase, where enough of an outcry is generated that a secondary application like Thunderbird or GMail begins to get better PR and marketshare. Well after this happens, Microsoft will begin to implement better compatibility in order to appease the community and retain their userbase. It's sad that this has to play out again so soon after the IE6 debacle. One would hope that losing 20% of a monopolistic market would be enough of a jolt to change the corporate culture at Microsoft to recognize the importance of standards on the internet, but this post by William belies that hope. William, your product is a communications tool, and digital communications rely on standards. Despite your claim to the contrary, there is a standard for email rendering: HTML and CSS. YAHOO, GMail, OSX Mail, Thunderbird, and Hotmail all render consistently using this standard, leaving Outlook as the IE6-like outlier. If you wish to use Word to make the editing portion of your user experience better, that's fantastic, and I encourage you to do so. Simply make the results obey standard HTML/CSS rules, and use a real HTML/CSS renderer to display emails from others. The screen shots in this post are the best argument for taking Word out of email I've ever seen. I can't believe that you think it's a 'feature' that a user can use proprietary extensions to make 'rich' email messages. Email is about communication, not document creation and compatibility is critically important. You'd think after the TNEF fiasco you start to value standard more. 
Obviously, you're not listening to us but in my role as CTO I take the following actions: 1. Any emails with proprietary formats are discarded at my email server. 2. Any client _capable_ of generating invalid content, or _incapable_ of rendering standard content correctly is banned. I work at a consulting firm and we're a "Microsoft Gold Certified Partner", (if that makes you listen M$) and I am primarily a consumer of Outlook, though occasionally I do create a web application where the client wants HTML email to be sent, though NEVER for a marketing campaign. Usually they just want their banner at the top and maybe a background gradient or something, just to keep their branding consistent. If Outlook could render the email with standards it would be very easy to provide this functionality. As it is, I usually have to rework what the client wanted to an extremely stripped design or just use plain text email. But I have a solution... provide a feature similar to IE8's meta tag rendering options. If you're not familiar, including the proper meta tag will instruct IE8 to render the page like IE7. So if something in IE8 were to break your design, this is an easy way to put a stint in a site while you work out the CSS kinks. Outlook could look in the <head> section of an HTML email and check for a meta tag that instructs Outlook to render the HTML in the email as it would for IE8. If that meta tag isn't there, then use the Word rendering engine. That way, embedded objects and all the other Word proprietary features will be the default, but if the HTML email asks to be rendered in IE8 then it can be. The advantage of this is that the person composing the email would need to know how to turn this feature on. This way nothing needs to be stripped out of the Office suite and re-worked (expensive and time consuming). Instead just continue to add new and better functionality, isn't that the point of buying software in the first place? We all want new and better features, this seems like a no brainier to me. Word still uses the old Winhelp hand mouse pointer for hyperlinks - this looks so ugly under Vista and Windows 7. Please finally use the system hand mouse pointer in Office 14 (also for the hyperlink in the About box). Why not use both? There are benefits in sending HTML e-mail newsletters through a service such as Campaign Monitor and Word's rendering engine simply cannot handle rendering these pages properly. Personally I find I receive far more e-mails of this kind then Word designed e-mails, however I'm not in a medium or large corporate environment where this might be more common. This switch could be managed through the mime types. Word coded e-mails would have a mime type of HTML/word or something like that and Outlook would know to render the e-mail with the Word engine. If the e-mail uses the mime type of HTML/text or whatever the typical HTML e-mail mime type is then the IE engine would be used. I should add that I don't care if Outlook can actually generate proper W3C compliant code, all I care about is the ability to properly render the e-mail sent to me. I'm going to continue to use third parties to manage mail lists and send newsletters so Outlook's ability to design and generate code is irrelevant. Damien BRUN made a simple and compelling argument above. If Outlook is to support HTML, it should be standard HTML (even if it needs to be a subset). Using tables for layout is no longer standard. 
The argument: "because there's no standard specifically for email, we're going to ignore all standards" is very, very weak. We already have to design special CSS and even whole pages to support IE (good job on improving IE7 and 8 though). Sending out emails to customers should be at least that easy. A third rendering option in Outlook is just another annoyance that, if you followed standards (or used your IE8 engine) could be avoided. We're not complaining that Word is a terrible editor; but having Outlook react differently to universally-standard HTML makes it a pain to keep a uniform appearance independent of the user's email client. This is the right decision, Outlook should continue using the word engine... ababiec completely misses the point though. No one is complaining about Word being used to CREATE emails in Outlook. That's all well and good. Where the problem lies is with Word being what is used to DISPLAY the emails that are received. This is a problem because Word does not understand CSS, a key component in how a modern, standards-compliant email is designed. This failure prevents emails from looking the way the author intended them. Trying to skew this as a problem that only exists for "zomg dirty evil email marketers" (whoever the hell that is supposed to represent) completely misses the point and devalues the conversation. This is about the content-rich emails YOU want to receive and YOU signed up for from YOUR favorite brand and companies, and what MSFT has done since Outlook 2007 is to prevent those emails from displaying properly by using a rendering engine designed for word processing not for web pages. See, emails today use the same code as web pages and the logical thing to do is to use your webpage rendering engine (IE) not your word processing engine (Word). Failure to do this has added an extra burden on email designers everywhere to create a regressive version of what their email could otherwise be, just to ensure it displays somewhat reasonably in Outlook in addition to the rest of the email clients that use proper rendering engines. Can you have it display email from other users of Outlook using the Word engine and email from other services using the IE one? Outlook is the number 1 problem e-mail client for us. For some reason, MSFT turned off the ability to submit forms from outlook. We have an application that sends notifications to users and allows them to respond with a form submission. It works fine in every single mail client EXCEPT Outlook. Basically, we're just recommending that our clients switch to Thunderbird or some other decent MUA. Outlook is just garbage. ababiec, Techibird, I work for a public K-12 school district. Though we do not use e-mail for marketing purposes, our administrative staff, teachers, and coaches use a listserv type system to communicate with parents on a group basis. Our system has the option of sending HTML email, plain text, or both. Some, though certainly not all, of our users spend a good deal of time composing the emails that will be sent out to a large audience in our community. I would love to see support for basic HTML/CSS rendering according to web standards in Microsoft's bleeding edge email client, as well as other market leading clients & webmail etc. If for no other reason than to have the communication to parents display in a professional and consistent way. That said, we use Group Policy to turn images off by default in Outlook. 
However, for an e-mail one is interested in our users have the option of displaying the images. I suppose we are indeed a special-interest group as well, not really representative of business users in general, but here we are. I very good reason to finally switch away from Outlook. Especially with the growing support for exchange server through alternative clients... Of course you believe Word is the best tool - it's your tool. The problem is that it is not standards compliant with the rest of the web. Not surprising given the MS track record in this area. "charting tools, SmartArt, and richly formatted tables for our professional customers" Why do you need these things in an email when you could attach a Word document? Do people really want to compose a chart in an email client and send it? Why listen to us though when you can just develop products like a drunken monkey and create such software gems as Vista? Great job on that by the way! Keep up the good work! The problem is not the authoring experience, but rather the display of HTML e-mails. Word is much worse than IE6 at displaying HTML (and why shouldn't it be, it's a word processor, not a browser). Just use a locked-down version of the IE8 engine for rendering HTML e-mail. And by all means continue to use Word for authoring - and make it produce proper HTML. I don't understand why you had to ruin a working experience in Outlook 2007. I am not a marketer, but an Outlook and Word user. I am actually a big fan of MS Office products, they're ubiquitous for a reason, they're high quality products. MS argument here is that customers are used to Word and prefer to use rich tools to compose emails, no counterpoint there. Word (even though has more overhead) is a decent way to compose emails... but that does not make Word a good renderer for email. Why not use Word to compose and IE to render? What's the draw back there... give the users the flexibility they deserve in presenting their information the way to present it. While I do get spam and if that breaks I don't care but, I also subscribe to marketing emails that I would like to see unbroken. MS please don't push users further towards using web based productivity software... won't you hate to loose customers over something silly like how emails render? And users will switch over something silly like that! This just make no sense, there is a subset of html for email, it's called web standard, just as always your behind of everyone Microsoft won't listen unless they feel threatened by some sort of competition. So, here's what I do. All email messages have a header explaining that, if the recipient is using a Microsoft product, then they probably can't see the message properly. I have a filter on my inbox that returns to sender any mail created using Microsoft products along with a short explanation. On some of my web sites I redirect IE users to a more basic version of the site complete with an expanation as to why this is happening. These steps (especially the inbox filter) have an interesting effect on some people. A few get in touch and start asking questions. I tell them the truth as I understand and experience it. It is amazing how many of them end up using Thunderbird/Firefox/Open Office after this conversation. This is a shame because I would prefer it if Microsoft listened, got it right and all my clients (and me) used nothing but Microsoft. Support would be so much easier. 
Until I can rely on Microsoft products to inter-operate with the rest of the world then this won't be possible. We all have to take a stand, vote with our wallets, use any opportunity to spread the word(!) and create some sort of impetus for Microsoft to just LISTEN! Simon. You guys have got to be kidding me. No standards for email? Then why does Outlook Express, now called Internet Mail, support modern CSS? Why does Apple Mail support modern CSS? Why does Thunderbird support modern CSS? Why does Microsoft's own XBox division not bother to format emails for Outlook 07 users? There's a reason the Office division is viewed derisively at Microsoft, and this is a good example of why. Rather than admit that your decision is based on something idiotic like trying to cater to users who use Outlook to send email to other people using Outlook, you make up some ridiculous nonsense about a lack of standards. If standards didn't exist, why do HTML emails look nearly the same across all email clients except for those that use WORD as their rendering engine? I was at a web standards conference a couple of years ago and a rep from Microsoft sat at our table at breakfast and asked what Microsoft could do to gain more acceptance from the standards community. It's unfortunate that only ONE product division seemed to take those sentiments to heart. Heed the collective cry* of anguish over your misanthropic decision to continue to use Word to render email in Outlook 2010. Please, listen to your customers and use a proper rendering engine that supports the basic HTML and CSS standards set down by the W3C. These standards have been in place for many, many years now, and it is up to you to bring your software up to meet those standards as best you can. Anything less is both a flagrant disregard for your customers, and a grave lack of ambition from the Outlook engineering team. *Over 21,000 as of writing this comment -> I have no desire for CSS or anything else that -> is not needed in regular office -> Even if the renderer was improved, what would -> it do when I click on reply and started editing? -> If the editor could not support everything the -> renderer could, the display of the email would -> change drastically. Is Outlook there to let marketing people send me fancy looking emails, or to let me commutate with my co-workers? 99% of important emails I read in Outlook come from within the company, most internet emails display with no problems. E.g. I never had an email from gmail fail to display nicely in Outlook. Why don’t marking people just keep the message (text) and use a tool like Gmail to design and send their emails? Also: to the "text email only" purists out there, you probably also think the internet should only be text-based web pages. A lot of people LIKE to receive nice-looking emails. That's what multipart messages are for, so if you want plain text you can have it. That's not the issue here. How about HTML 4 and CSS 2.1? Hows that for standards? Clearly, not everyone agrees that Word is the best way to author e-mail content and I can't imagine anyone would say Word's HTML rendering capabilities are "great." While I understand that it needs to be easy for people to author rich e-mail that aren't web developers, just about anyone who does e-mail campaigns needs the same cross-platform compatibility that we pretty much have in browsers now. 
It's been far too long that developers have to struggle to make designs (email or web) work in MS products, and, frankly, well designed HTML e-mails that look great in most clients look terrible in Outlook, which is bad for end users. Can you not have a DOCTYPE switch or HTTP-EQUIV META tag like IE has to decide whether the e-mail should be rendered via Word or via Trident? By default, you can use Word rendering, and those of us that know what we are doing can turn on Trident / IE rendering with a flag. This only seems fair given that we all have to spend a lot of time (and someone has to pay for it, like your end users that pay designers to make e-mail campaigns) making great HTML e-mails look marginally good in Outlook. Outlook is becoming the new IE6, and it's pretty clear that Microsoft doesn't actually care about interoperability, despite making statements otherwise, and certainly doesn't care about developers. Is there not some middle ground like my suggestion above that can benefit everyone without hurting anyone? Nice to know that as Microsoft moves forward with Outlook, they seem to go in reverse. I work in a large company that, of course, uses the Microsoft Office suite. To date, I do not believe that I have EVER seen a single email from any other fellow employee using Microsoft Outlook that contained native Word-based SmartArt, Charting, or Tables (expect perhaps some tables that were created in either MS Word or MS Excel that get pasted into an email either as a word-based table object or HTML-based table object... I'm not entirely sure). If users want to send something that clearly depends upon Word-based rendering, then they simply ATTACH a word-based document with a clearly indentified Word filename extension. On the other hand, I have seen thousands of emails that utilize HTML-rendering or HTML-based tables. It cannot be disputed, at least with a clear conscience, that any HTML-renderer based email object or mechanism would have far more broad acceptance than a Word-specific email object. Even Microsoft's tools such as Word, Excel, Powerpoint support conversion to HTML-based forms. Wouldn't be a much more universally accepted solution to allow users that chose to do so, to author their email content in whatever application they wanted to... and then support HTML-based cut-n-paste operations into Outlook? That seems far more sensible to the vast majority of users. Please do not let the misguided FTC and European Union actions of the past, intimidate Microsoft away from the sensible use of HTML-rendering in other applications such as Microsoft Outlook. After reviewing these blog entries, I don't see a good reason why Microsoft can't use Internet Explorer to render emails in Outlook. Security should be a non-issue, I mean, IE is supposedly secure enough to browse the web, so it ought to be able to handle spam. I'm in end user support, not marketing. Believe it or not, there are actually people out there who like to read their spam (one guy even prints out his travel offer emails) and if they don't look right, I hear about it. Use Word for composition, and IE 8 for display, problem solved. no one cares how microsoft solves this problem - whether you fix the word engine or use the IE8 renderer. I fully sympathize with the concept that a Word-based UI makes it easier for MS' users to author rich emails. but rich emails are based on HTML, and HTML is "a sanctioned standard or an industry consensus"... just 'meet the bar' of the standard. 
I can't imagine how the biggest software company in the world could accept any less of themselves. Why on earth would anybody want to use HTML in an email message? My experience is that it is only ever spammers and scammers who want to do this. I read all my emails in plain text. If the sender can't get their message across in plain text, then I don't need to know. And for all those talking about a standard, there is one already. It's called RFC822. The nasty conclusion we can all figure out from this is... Word doesn’t compose HTML in any remotely-standard way. It's full of inside tricks and shivs to make it do Office-only crap and account for backward compatibility going back 20 years. If Word’s HTML was even remotely easy to normalize – even by post process – Microsoft would eagerly do it. This shows that Word HTML really is THAT bad. (And William's "answer" totally misses the point. The point is about rendering, not composition) Judging by the official response to the campaign initiated by Campaign Monitor (and supported by industry professionals using a wide variety of ESPs), I'm guessing Microsoft won't pay any additional heed to these comments. But it does seem to me that the official response is a clear indication that William Kennedy, Corporate Vice President, Office Communications and Forms Team, Microsoft Corporation (or the PR person who wrote this response) either doesn't understand the problem or truly doesn't care. Forcing the industry to take a step backwards in compliance to use table-based layout simply to render email properly -- when they could use the IE rendering engine -- is the business equivalent of taking your ball and going home. Hopefully the exodus from Windows to other operating systems (or, at the least) to other email clients will continue to the point where Redmond is forced to at least consider the argument from a perspective other than the status quo. Personally, I feel like the campaign was interesting and worth it, and a nice example of how new technologies can be used to give voice to a disparate group. Too bad, in this case, it was a proverbial tree falling in an empty forest... no one was there to hear it. /Jim It's not about authoring emails you ninnies, its about displaying the incoming emails with any semblance of standards compliance. Why does Microsoft continue to fracture internet standards. This is why every web developer hates IE. Go ahead and keep Word for composing emails if you must, but use a true web engine for rendering the incoming emails. How do users of Outlook without Office do HTML mail ? either they can't or somehow there's a way to switch Word editing off in this scenario. Lots of time is spent on design and compatibility. Why make it harder? We want our designs to appear as they should! Lame and disappointing campaign execution by 'Let's Fix It'. While I agree with the cause, using Twitter Corporation's proprietary service as the only means for an internet user to submit feedback is just as bad/backward as using HTML tables in Outlook. That's like, um, forcing people to use Microsoft Corporation's technology (READ: Outlook) as the sole means to manage email communication. So let's see: No web site feedback form was constructed for this cause. Meaning, that no web designer was hired to design a form---even a (gasp!) table-based one!, CSS out an email and/or contact page, etc., to collect the data, nor were any database professionals used to set up a system to parse the collected feedback and organize/collate for the organizers. 
Nope. Just passed off to a single company: Twitter---that you are stuck having to use as the sole means to communicate this campaign via the (open?) web. This is so late 1990's when 'AOL Keyword: National Geographic' was on the cover of that pretty yellow magazine instead of the more sensible and open '' they use now. Backward we go... just like Outlook 2010. There is nothing wrong with using Twitter as an additional means, like Facebook, etc. to communicate and enhance a web site. But it should not be the *only* means, just like using IE should not be the only means to access the web. That said, can you folks please make Outlook 2010 work better with CSS? Chris We have already run into this problem with a newsletter that was generated by a vendor for internal distribution. Doesn't display properly in Outlook 2007, though it does in IE7 as well as Firefox, Outlook 2003, Outlook Express and even Windows Mail (Vista). If you try to edit it in Outlook 2007 you can get it to display properly but the resulting E-Mail is now 400k instead of 17k. Outlook should display web content exactly the same way as web browsers do. All I want to do is float a div. While there is no technical HTML standard, it is a function widely used. Can you explain why a globally used platform such as Outlook has decided to not support this frequently applied HTML code? Mail messages should not be in Word rendered HTML, in fact they should not be in HTML at all. Mail messages should be in plain ASCII. GP> Nope they don't get it. They just don't get it. I respectfully disagree. There is an informal consensus where most e-mail clients support at least some HTML layout features. There is certainly a consensus that tables should not be used for layout. Unless the opinion of the Outlook team is that e-mails should not contain layout at all? E-mail is moving beyond messages between individuals and is now seen as a multimedia communications channel. To cripple Outlook 2010 by considering rich formatting only as a way for people to slightly embellish their individual messages is to hold the product back from the way people and organizations are using e-mail. @TechieBird, ababiec: This isn't just marketers - many of my business clients have expressed disappointment at the limitations of e-mail stationery. Email is essentially an internet experience. Increasingly, email creation and dissemination is performed exclusively through the internet. Rather than embracing the standard language and construction conventions of the internet, Microsoft is sticking to a format that has repeatedly been proven to be incompatible with online use. Not only is this disappointing from the standpoint of someone who would like to be able to use the full range of html tools to create emails, it seems like Microsoft is shooting itself in the foot from a business standpoint, especially given the rise of internet-based hand held devices. Microsoft is clearly focusing on the priorities of someone whacking together 'professional looking' HTML emails here and not the experience of people receiving them... While in one sense I can understand them thinking of the needs of those who might want to create such emails I doubt this would be 100% of outlook users - on the other hand almost all users will use the software to receive HTML email and using Word to render it is a crappy solution. They are failing not only the designers and developers who want to create user friendly newsletters, but also their customers who want to receive them. 
To spout complete tripe about 'subsets of HTML' and a lack of industry consensus or standards is at best delusional and at worst downright dishonest. There are WC3 standards for HTML and CSS. These are applicable to browsers and email programs alike. Just because Microsoft does not feel they should apply to their software does not make the known and commonly accepted standards simply not exist. Of all companies, Microsoft should not be accusing groups such as the Email Standards Project (which may included companies but also includes individuals and users with no affiliation to freshview) of pushing their own agenda or interests. If anything, that is the pot calling the kettle black. And a final note... "Word has always done a great job of displaying the HTML which is commonly found in e-mails around the world" Perhaps on mars, but on planet earth, it ignores the most common HTML and CSS standards and makes HTML emails which work in almost every other email program look, quite frankly, like a pigs breakfast. How is it that one of the largest and most successful companies is unable to manage what a large number of smaller companies have already achieved. The main problem is that Word doesn't look the same when you are composing it, as when you receive it in the IE display engine. So your solution is to replace the valid display engine (IE) with an invalid Word HTML display engine? That seems a little backwards to me. Open up to real standards, not your own internal walled off standards. You are starting to back yourself into a corner with these sorts of moves. Why not concentrate on updating the HTML composing portion of Word so that it emits (renders?) valid HTML that will be displayed correctly in an IE8-type display engine. (or other valid HTML renderer eg. gecko etc) Otherwise Outlook becomes a walled garden.. and as more businesses run email marketing campaigns (yes not spam ones) the CEOs are going to start wondering why their carefully constructed emails look fine in every other email client, but look like crap in Outlook. That's when Outlook is going to get tossed out... "But in the real world where resources are limited and features are prioritised," Yet somehow Mozilla, Google, Opera, and Apple have all managed to make wonderfully featured browsers with full current standards compliance, with several aspects of HTML5 and CSS3... Microsoft has no less of a resource supply than any of them. What's their excuse for being the ONLY ONES still stuck in 2000? The rest of the internet has moved on, and we're busy yelling back at them, trying to help them catch up, but they continue to ignore us. If Microsoft actually fixed IE's rendering engine and made it work right in Outlook, I'd have one less major reason to be mad at them. @TechieBird: You're missing the point. Yes, your emails may look right to you. Know why? Because some poor developer spent days ripping his hair out trying to make it look right. That's the problem. Emails continue to look right because we continue to build them using horrid techniques, as they're the only ones that work on all clients. All we want is for Microsoft to fix the rendering engine in Outlook so we can build proper HTML that will still look right in all the other clients (which already understand standards), and also look right in Outlook. Whether or not fixoutlook.org is sponsored by Campaign Monitor is irrelevant. What *is* relevant is that Microsoft does not support web standards. And this needs to be fixed regardless of who is pushing the agenda. 
ababiec / TechieBird - if all you want to do is send and receive internal email then the current Microsoft stance is fine for you. But Outlook needs to work for all users, not just internal corporate email. The problem is that if someone sends you an email that wasn't created in Outlook, whether that's - an external supplier that you _do_ want to hear from - a website you've registered with sending a welcome email - or any other email client on the planet then there is a fair chance that email could look scrappy with the current Outlook rendering just because basic CSS attributes are not supported. This is especially the case when emails that rendered fine in Outlook 2003 were suddenly broken in Outlook 2007 seemingly without any warning. Having to add outdated attributes just to support the same thing as CSS is already designed to do just makes no sense (e.g. align="right" for images instead of being able to use a class that sets "float:right"). How can there be a security issue in supporting one and not the other? All that is being asked is that Word 2010 include support for a few basic _standard_ CSS rendering attributes that should be just as simple to support on the editing side; Word already knows how to render things that CSS describes with floats and padding/margins, it just decides to ignore converting these at the moment. Please continue ignoring the mass-mailers on this, Microsoft. In fact, please start a counter-tweet campaign so I can voice my solidarity with you against the tweeters. While bold text and bulleted lists are useful, email should never look like a web page. There should never be navigation bars, sidebars, or anything else that requires CSS-based or even table-based layouts. It's been stated before and I totally agree: I don't care if you want to use Word to AUTHOR an email, but the HTML that it generates is awful. The end result is passing emails between Outlook and any other client is a hassle -- Anything I send from my Mac to Outlook recipients gets mangled (on their end) and vice-versa. So go with my blessings: use Word to author emails, just please update the HTML it outputs to something that every other program in the rest of the world can understand properly. And if you're not going to use IE8 to render emails, help Word interpret the good, clean HTML that every other program in the rest of the world generates. As for standards, they DO exist. There are a number of very, very basic CSS commands that pretty much every other email program supports and Outlook doesn't. As a number of people have stated, standards for HTML exist as outlined by W3C -- if you're using HTML in email, shouldn't it adhere to the same standards?! (As a sidenote, you want people to use Word to create HTML -- shouldn't the HTML it creates adhere to W3C standards and basic web design best practices also?) To all the people saying this is about making life easier for marketers, you're wrong. Look at what having standards on the web has done for us all: more quality websites that look better, work better on all browsers, and can do more. If Microsoft makes life easier for marketers, they'll be making life easier for consumers also. Microsoft has a unique opportunity here: as the maker of one of the most widely-used email clients, they have the opportunity to lead the charge to universal standards for email that make EVERYONE'S lives better. Instead, they're going with their own proprietary stuff that creates bloated, junky emails. It's just sad. 
I think MS should either update the Word engine or give users the flexibility to use a standard HTML engine (from the browser) for email display. It drives me nuts when the email cannot display the correct CSS tags because stupid word cannot decipher it right. Please, please, please support CSS rendering. I am a developer and writer, and I support increasing standardization between web and email rendering. As far as Outlook, I switched from 2007 BACK to 2003 partly because of the lack of composing and rendering choices (plus performance). I have nothing to do with marketing email -- just writing user documentation for software. "Word has always done a great job of displaying the HTML which is commonly found in e-mails around the world." No it hasn't. That's because the consensus is to support HTML as a whole, not just a subset; and to add on support for CSS as well. "The 'Email Standards Project' does not represent a sanctioned standard or an industry consensus in this area. Should such a consensus arise, we will of course work with other e-mail vendors to provide rich support in our products." If by "industry consensus" you mean "Microsoft's Consensus," you're right. Yahoo!, Apple, Google, and other email client developers are all supporting web standards. The Email Standards Project does represent the consensus of the *WEB DESIGN* industry. I found this issue interesting, because I recently encountered this problem in Outlook 2007 when rendering HTML generated by Microsoft's own software. Automatically e-mailed reports from SQL Server Reporting Services are not properly displayed in Outlook 2007 - the tables are compressed horizontally. This annoyance was reported to me by some of my colleagues. I still use Outlook 2003 and the reports are rendered correctly in my e-mail. Reading all this makes me smile. Justifies my stance that: a: I really like only getting text emails, not html b: I really really hate email in general Funny that twitter folks are complaining about email. I am also an Outlook user, from 2000, to XP, 2003 to 2007. I would much rather Microsoft adhere to open standards than 'cripple' their software in ways to achieve their marketing goals. People who know me, know that I don't use that term lightly! Come on Microsoft, you are better than this! I love Word. I love Outlook. But please *do* follow web standards. Don't do another IE out of Outlook where IE "extended" the web standards. Compliance is key. Prove to us that you really rock at MS! I'm with ababiec and TechieBird. E-mail is not the web. It's an entirely different medium. Saying that "it's not about how it's composed but about how it's rendered" is like saying "It doesn't matter that an airplane must take off from an airport. It should be able to land on any piece of concrete the pilot wishes, including the Interstate, and still deliver the passengers." A hammer should never be used to drive a screw. Frankly, anything a mail client can do to _disrupt_ mail marketing is 100% OK by me. If I want the information, I'll use a web browser to read the marketer's web page. Well, after looking at the campaign efforts I thought Microsoft would take the user experience stuff seriously and work to improve it. We don't care what rendering engine you use, but at the end of the day, if Outlook can display what a web-based mail can, then the job is done. It's so sad that Outlook till today does not support image as background, which I feel would give lots of new rich experience when sending creative mails. 
Siddharth Menon Borget Solutions There’s some merit to Word generating OUTGOING Emails. This allows additional functionality and user familiarity for many. However, I would like to see RECEIVED emails rendered with the Explorer engine by default. Wouldn’t this solve our problem? You have got to be kidding me. You are sacrificing web standards in the name of MS Word graphs and clip art? Give me a break. How about you(Microsoft) stop being the least common denominator when it comes to standards? Is abusing standards the the hill you want to die on? Jeeze! You are holding EVERYONE back by doing this. What a reliable disappointment Microsoft has become. P.S. You do not need MS Word to enable a WYSIWYG editor... you have the ASP.NET custom control already developed in the AJAX Control Toolkit! As has been said at length above, the complaint is about Outlook continuing to use word to Render HTML, not word as an editor to create messages. It's nothing but disingenuous to claim otherwise. Most people making the complaint are not spammers. Lots of people send out HTML email, lots of businesses large and small send 100% legitimate newsletters, etc. Asking for proper rendering of HTML email (and YES HTML IS A STANDARD) is not something that only spammers want. 99% of my contacts send me plain-text messages and that's why I switched Outlook to show me text-only messages as default. I don't care about HTML messages. HTML belongs inside websites and not inside e-mails. Why not have a flag or something? A flag that says, "this was made with the Word engine" and when outlook sees that it uses it's handy Word engine renderer. Otherwise, it uses IE8. Surely I'm the 400123125th person (or so) to suggest such a 'solution' so I'm guessing it's not actually a solution. Would be curious to hear why not. The Outlook team wants to thank everyone who has responded to this post and the online campaign around Outlook and Word. We value your feedback and have read and logged every comment on this page. At this time, we believe that the unique and relevant perspectives and opinions of this community have been stated and appropriately noted, and rest assured we will continue to read and record any additional feedback made, though it will not be published. Dev Balasubramanian Outlook Product Manager The power of Word? Are you kidding me? Open any web page today in Word and it can't even retain the exact layout, let alone generate clean HTML. Is Word only for designing from blank HTML pages and editing existing pages created in Word? If not, why is the renderer stuck in the 90s? Please improve Office's horrible renderer and bring it up-to-date so it at least retains the layout of modern web pages. If you would like to receive an email when updates are made to this post, please register here RSS Trademarks | Privacy Statement
http://web.archive.org/web/20090627004005/http:/blogs.msdn.com/outlook/archive/2009/06/24/the-power-of-word-in-outlook.aspx
CC-MAIN-2017-34
en
refinedweb
Refer LWP::UserAgent
The great pleasure in my life is doing what people say you cannot do.

use LWP::UserAgent;
use Image::Size;   # imgsize() comes from Image::Size

# now lets actually get the image..
my $ua = LWP::UserAgent->new;
$ua->timeout(10);
$ua->env_proxy;

my $response = $ua->get($image);
if ($response->is_success) {
    #print $response->content; # or whatever
    open(WRITEIT, ">$save_to") || die qq|Cant write to $save_to, reason: $!|;
    binmode WRITEIT;
    print WRITEIT $response->content;
    close(WRITEIT);

    my ($x, $y) = imgsize($save_to);
    print qq|Got sizes: $x x $y \n|;
    return ($x, $y, $save_to_url);
} else {
    # save a bad status...
    return (0, 0, undef);
}
http://www.perlmonks.org/?node_id=765421
CC-MAIN-2017-47
en
refinedweb
PHYSFS_Allocator man page PHYSFS_Allocator — PhysicsFS allocation function pointers. Synopsis #include <physfs.h> Data Fields int(* Init )(void) void(* Deinit )(void) void *(* Malloc )(PHYSFS_uint64) void *(* Realloc )(void *, PHYSFS_uint64) void(* Free )(void *) Detailed Description. See also: PHYSFS_setAllocator Field Documentation void(* PHYSFS_Allocator::Deinit) (void) Deinitialize your allocator. Can be NULL. void(* PHYSFS_Allocator::Free) (void *) Free memory from Malloc or Realloc. int(* PHYSFS_Allocator::Init) (void) Initialize. Can be NULL. Zero on failure. void*(* PHYSFS_Allocator::Malloc) (PHYSFS_uint64) Allocate like malloc(). void*(* PHYSFS_Allocator::Realloc) (void *, PHYSFS_uint64) Reallocate like realloc(). Author Generated automatically by Doxygen for physfs from the source code. Referenced By The man pages physfs-Deinit(3), physfs-Free(3), physfs-Init(3), physfs-Malloc(3) and physfs-Realloc(3) are aliases of PHYSFS_Allocator(3).
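Example
The following sketch (not part of the original manual page) shows how these function pointers are typically filled in and installed with PHYSFS_setAllocator(); the trivial forwarding to the C runtime's malloc/realloc/free is purely illustrative.

#include <stdlib.h>
#include <physfs.h>

static int myInit(void) { return 1; }                  /* non-zero on success */
static void myDeinit(void) { /* nothing to tear down */ }
static void *myMalloc(PHYSFS_uint64 n) { return malloc((size_t) n); }
static void *myRealloc(void *p, PHYSFS_uint64 n) { return realloc(p, (size_t) n); }
static void myFree(void *p) { free(p); }

int main(int argc, char **argv)
{
    (void) argc;
    PHYSFS_Allocator a;
    a.Init = myInit;
    a.Deinit = myDeinit;
    a.Malloc = myMalloc;
    a.Realloc = myRealloc;
    a.Free = myFree;

    /* Must be installed before PHYSFS_init(). */
    if (!PHYSFS_setAllocator(&a))
        return 1;

    if (!PHYSFS_init(argv[0]))
        return 1;
    /* ... use PhysicsFS as usual ... */
    PHYSFS_deinit();
    return 0;
}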
https://www.mankier.com/3/PHYSFS_Allocator
CC-MAIN-2017-47
en
refinedweb
I have a parent list something like so:
ParentList = {a,b,c,a,c,d,b,a,c,c}
and I want to split it into multiple lists of similar values:
ListA = {a,a,a}
ListB = {b,b}
ListC = {c,c,c,c}
ListD = {d}
The result I am after is 4, the size of the largest such list (ListC).
Assuming you just want the count, and not which character/string gives that count, here is a one liner (you'll need using System.Linq;):
var highestCount = ParentList.GroupBy(p => p).Max(p => p.Count());
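For illustration, here is a small self-contained sketch of that approach (the list contents and variable names are made up to mirror the question, not taken from the original code):

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        // Parent list mirroring the example in the question.
        var parentList = new List<string> { "a", "b", "c", "a", "c", "d", "b", "a", "c", "c" };

        // Group equal values together; each group plays the role of ListA, ListB, ...
        List<List<string>> lists = parentList
            .GroupBy(p => p)
            .Select(g => g.ToList())
            .ToList();

        foreach (var list in lists)
            Console.WriteLine($"{list[0]}: {list.Count}");  // a: 3, b: 2, c: 4, d: 1

        // Size of the largest group (4 in this example).
        int highestCount = parentList.GroupBy(p => p).Max(g => g.Count());
        Console.WriteLine(highestCount);
    }
}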
https://codedump.io/share/vhQ4E8MoYmAW/1/split-list-into-multiple-lists-of-similar-values-c
CC-MAIN-2017-47
en
refinedweb
LA3635 - Pie. (Each of my friends and I must get one piece of pie of the same size, or somebody will start complaining.)
Input: one line with two integers N and F (1 ≤ N, F ≤ 10000): the number of pies and the number of friends. One line with N integers ri (1 ≤ ri ≤ 10000): the radii of the pies.
Output: For each test case, output one line with the largest possible volume V such that me and my friends can all get a pie piece of size V. The answer should be given as a floating point number with an absolute error of at most 10^-3.

#include <iostream>
#include <stdio.h>
#include <cmath>
using namespace std;
#define PI acos(-1)
const int MAXN = 10010;
double S[MAXN];
int N, F;

// Can every one of the F friends and I get a piece of area at least mid?
bool check(double mid) {
    int sum = 0;
    for(int i=0; i<N; i++) {
        sum += floor(S[i]/mid);
    }
    return sum>=F+1;
}

int main() {
    int t, r;
    scanf("%d",&t);
    while(t--) {
        scanf("%d%d",&N,&F);
        for(int i=0; i<N; i++) {
            scanf("%d",&r);
            S[i] = r*r*PI;
        }
        // Binary search on the piece size.
        double min1 = 0, max1 = 1e14, mid;
        while(max1-min1>1e-5) {
            mid = (max1+min1)/2;
            if(check(mid)) {
                min1 = mid;
            }else{
                max1 = mid;
            }
        }
        printf("%.4f\n",min1);
    }
    return 0;
}
http://blog.csdn.net/fljssj/article/details/46821969
CC-MAIN-2017-47
en
refinedweb
What is the meaning of 3d graph? I read the reference manual. There I see only x,y. So how is the value of z decided? Example: def f(x,y): return math.sin(y*y+x*x)/math.sqrt(x*x+y*y+.0001) P = plot3d(f,(-3,3),(-3,3), adaptive=True, color=rainbow(60, 'rgbtuple'), max_bend=.1, max_depth=15) P.show() I also don't know how to paste the sage cell code, as only latex is the way here.
https://ask.sagemath.org/question/37707/what-is-the-meaning-of-3d-graph/
CC-MAIN-2017-47
en
refinedweb
How to build a symbol solver for Java, in Clojure Originally published on my blog LastSymbol References and Symbol Declarations First, we have to consider that the same symbol could point to different declarations in different contexts. Consider this absurd piece of code: package me.tomassetti; public class Example { private int a; void foo(long a){ a += 1; for (int i=0; i<10; i++){ new Object(){ char a; void bar(long l){ long a = 0; a += 2; } }.bar(a); } } } It contains several declaration of the symbol a: at line 5 it is declared as field of type int at line 7 it is declared as a parameter of type long at line 11 it is declared as a field of anonymous class, and it has type char at line 14 it is declared as a local variable of type long We have also two references to the symbol a: at line 8, when we increment a by 1 at line 15, when we increment a by 2 Now to solve symbols we can figure out which symbol references refer to which symbol declarations. So that we can do things like understanding the type of a symbol, and so which operations can be performed on a symbol. Scoping RulesScoping Rules The principle to solve symbol is easy (but the implementation could be... tricky): given a symbol reference we look for the closest corresponding declaration. Until we find one we keep moving further away from the reference, going up in the AST. In our example we would match the reference on line 15 with the declaration on line 14. We would also mach the reference on line 8 with the declaration on line 7. Simple, eh? Consider this other example: package me.tomassetti; class A { public void foo(){ System.out.println("I am the external A"); } } public class Example { A a; class A { public void foo(){ System.out.println("I am the internal A"); } } void foo(A a){ final int A = 10; new A().foo(); } } Now we have different definitions of A, in some cases as a Class (lines 3 and 13), in others as a variables (line 20). The reference on line 21 ( new A().foo();) matches the declaration on line 13, not the declaration on line 20, because we can use only type declarations to match symbol references inside a new object instantiation statement. Things starts to become not so easy... There are other things to consider: - import statements: importing com.foo.A makes references to A be resolved with the declarations of the class A in package com.foo - classes in the same package which can be referred by the simple name, while for the others we have to use the fully qualified name - classes in the default package cannot be referred outside the default package - fields and methods inherited should be considered (for methods we have to consider overloading and overriding without confusing them) - when matching methods we have to consider the compatibility of the parameters (not easy at all) - etc. etc. etc. While the principle is simple, there are tons of rules and exceptions and things to consider to properly resolve the symbols. The nice thing is: we can start easily and then improve the solution as we go on. Design Our Symbol SolverDesign Our Symbol Solver One note: In this post I will explain the approach I am currently using to build my symbol solver for effectivejava. I am not saying this is a perfect solution and I am sure I am going to improve this solution over time. However this approach seems to cover a large number of cases and I think it is not an awful solution. A simple reference to a name (whatever it indicates a variable, a parameter, a field, etc.) 
is represented by a NameExpr in the AST produced by JavaParser. We start by creating a function that takes a NameExpr node and return the corresponding declaration, if any could be found. The function solveNameExpr basically just invoke the function solveSymbol passing three parameters: The AST node itself: It represents the scope in which to solve the symbol nil: this represent extra context information, in this case it is not needed the name to be resolved: well, it should be self-explanatory (defn solveNameExpr "given an instance of com.github.javaparser.ast.expr.NameExpr returns the declaration it refers to, if it can be found, nil otherwise" [nameExpr] (let [name (.getName nameExpr)] (solveSymbol nameExpr nil name))) We start by declaring a protocol (which is somehow similar to a Java interface) for the concept of scope: (defprotocol Scope ; for example in a BlockStmt containing statements [a b c d e], when solving symbols in the context of c ; it will contains only statements preceeding it [a b] (solveSymbol [this context nameToSolve]) ; solveClass solve on a subset of the elements of solveSymbol (solveClass [this context nameToSolve])) The basic idea is that in each scope we try to look for declarations corresponding to the symbol to be solved, if we do not find them we delegate to the parent scope. We specify a default implementation for the AST node (com.github.javaparser.ast.Node) which just delegate to the parent. For some node types we provide a specific implementation. (extend-protocol Scope com.github.javaparser.ast.Node (solveSymbol [this context nameToSolve] (solveSymbol (.getParentNode this) this nameToSolve)) (solveClass [this context nameToSolve] (solveClass (.getParentNode this) this nameToSolve))) For BlockStmt we look among the instructions preceding the one examined for variable declarations. The current examine statement is passed as the context. If you are interested for the functions used by this one just look at the code of effectivejava: it is on GitHub (extend-protocol Scope BlockStmt (solveSymbol [this context nameToSolve] (let [elementsToConsider (if (nil? context) (.getStmts this) (preceedingChildren (.getStmts this) context)) decls (map (partial declare-symbol? nameToSolve) elementsToConsider)] (or (first decls) (solveSymbol (.getParentNode this) this nameToSolve))))) For a MethodDeclaration we look among the parameters (defn solve-among-parameters [method nameToSolve] (let [parameters (.getParameters method) matchingParameters (filter (fn [p] (= nameToSolve (.getName (.getId p)))) parameters)] (first matchingParameters))) (extend-protocol Scope com.github.javaparser.ast.body.MethodDeclaration (solveSymbol [this context nameToSolve] (or (solve-among-parameters this nameToSolve) (solveSymbol (.getParentNode this) nil nameToSolve))) (solveClass [this context nameToSolve] (solveClass (.getParentNode this) nil nameToSolve))) For ClassOrInterfaceDeclaration we look among the fields of the class or interface (classes could have static fields). (defn solveAmongVariableDeclarator [nameToSolve variableDeclarator] (let [id (.getId variableDeclarator)] (when (= nameToSolve (.getName id)) id))) (defn- solveAmongFieldDeclaration "Consider one single com.github.javaparser.ast.body.FieldDeclaration, which corresponds to possibly multiple fields" [fieldDeclaration nameToSolve] (let [variables (.getVariables fieldDeclaration) solvedSymbols (map (partial solveAmongVariableDeclarator nameToSolve) variables) solvedSymbols' (remove nil? 
solvedSymbols)] (first solvedSymbols'))) (defn- solveAmongDeclaredFields [this nameToSolve] (let [members (.getMembers this) declaredFields (filter (partial instance? com.github.javaparser.ast.body.FieldDeclaration) members) solvedSymbols (map (fn [c] (solveAmongFieldDeclaration c nameToSolve)) declaredFields) solvedSymbols' (remove nil? solvedSymbols)] (first solvedSymbols'))) (extend-protocol Scope com.github.javaparser.ast.body.ClassOrInterfaceDeclaration (solveSymbol [this context nameToSolve] (let [amongDeclaredFields (solveAmongDeclaredFields this nameToSolve)] (if (and (nil? amongDeclaredFields) (not (.isInterface this)) (not (empty? (.getExtends this)))) (let [superclass (first (.getExtends this)) superclassName (.getName superclass) superclassDecl (solveClass this this superclassName)] (if (nil? superclassDecl) (throw (RuntimeException. (str "Superclass not solved: " superclassName))) (let [inheritedFields (allFields superclassDecl) solvedSymbols'' (filter (fn [f] (= nameToSolve (fieldName f))) inheritedFields)] (first solvedSymbols'')))) amongDeclaredFields))) (solveClass [this context nameToSolve] (solveClass (.getParentNode this) nil nameToSolve))) For CompilationUnit we look for other classes in the same package (both using their simple or qualified names), we consider the import statements and we look for the types declared in the file. (defn qNameToSimpleName [qualifiedName] (last (clojure.string/split qualifiedName #"\."))) (defn importQName [importDecl] (str (.getName importDecl))) (defn isImportMatchingSimpleName? [simpleName importDecl] (= simpleName (qNameToSimpleName (importQName importDecl)))) (defn solveImportedClass "Try to solve the classname by looking among the imported classes" [cu nameToSolve] (let [imports (.getImports cu) relevantImports (filter (partial isImportMatchingSimpleName? nameToSolve) imports) importNames (map (fn [i] (.getName (.getName i))) imports) correspondingClasses (map typeSolver importNames)] (first correspondingClasses))) (extend-protocol Scope com.github.javaparser.ast.CompilationUnit (solveClass [this context nameToSolve] (let [typesInCu (topLevelTypes this) ; match types in cu using their simple name compatibleTypes (filter (fn [t] (= nameToSolve (getName t))) typesInCu) ; match types in cu using their qualified name compatibleTypes' (filter (fn [t] (= nameToSolve (getQName t))) typesInCu)] (or (first compatibleTypes) (first compatibleTypes') (solveImportedClass this nameToSolve) (solveClassInPackage (getClassPackage this) nameToSolve) ; we solve in nil context: it means look for absolute names (solveClass nil nil nameToSolve))))) If we do not manage to solve a symbol in the scope of the Compilation Unit there is no parent to delegate, so we use the nil scope which represents the absence of scope. In that case only absolute names can be solved, like the canonical names of classes. We do that using the typeSolver function. The typeSolver need to know which classpath to use and it basically look up for classes among source files directories and JAR. Also in this case feel free to dig into the code of effectivejava. (extend-protocol Scope nil (solveClass [this context nameToSolve] (typeSolver nameToSolve))) ConclusionConclusion I think Clojure is great to build solutions incrementally: we can implement the different methods of the protocols as we move forward. Build simple tests and improve them one piece at the time. We have built this simple solution iteratively and until now it is working fine for our goals. 
We will keep writing more test cases and we could have to refactor a thing or two, but I think we are heading in the right general direction. In the future it could be useful to extract this symbol resolver into a separate library, to be used with JavaParser to perform static analysis and refactoring.
https://www.codementor.io/ftomassetti/how-to-build-symbol-solver-java-clojure-du107xlqc
CC-MAIN-2017-47
en
refinedweb
Recently I had a requirement where using Spring MVC we had to take input of multiple rows of data from the user. The form had many rows which the user can edit and submit. Spring MVC provides a very simple yet elegant way of collecting data from multiple rows of an HTML form and storing them in a List of Beans in Java. Lets look at the requirement first. We have a screen where data for multiple Contacts is displayed. The Contact data is displayed in an HTML table. Each row in the table represents a single contact. Contact details consist of attributes such as Firstname, Lastname, Email and Phone number. Related: Spring 3 MVC Tutorial Series (Must Read) The Add Contact form would look like following: (Screenshot of the Add Contact form.) Lets see the code behind this example. Tools and Technologies used: - Java 5 or above - Eclipse 3.3 or above - Spring MVC 3.0 Step 1: Create Project Structure Open Eclipse and create a Dynamic Web Project. Enter project name as SpringMVC_Multi_Row and press Finish. Step 2: Copy Required JAR files Once the Dynamic Web Project is created in Eclipse, copy the required JAR files under WEB-INF/lib folder. Following are the list of JAR files:

File: /src/net/viralpatel/spring3/form/Contact.java

package net.viralpatel.spring3.form;

public class Contact {
    private String firstname;
    private String lastname;
    private String email;
    private String phone;

    public Contact() {
    }

    public Contact(String firstname, String lastname, String email, String phone) {
        this.firstname = firstname;
        this.lastname = lastname;
        this.email = email;
        this.phone = phone;
    }

    // Getter and Setter methods
}

File: /src/net/viralpatel/spring3/form/ContactForm.java

package net.viralpatel.spring3.form;

import java.util.List;

public class ContactForm {

    private List<Contact> contacts;

    public List<Contact> getContacts() {
        return contacts;
    }

    public void setContacts(List<Contact> contacts) {
        this.contacts = contacts;
    }
}

Note line 7 in above code how we have defined a List of bean Contact which will hold the multi-row data for each Contact.

File: /src/net/viralpatel/spring3/controller/ContactController.java

package net.viralpatel.spring3.controller;

import java.util.ArrayList;
import java.util.List;

import net.viralpatel.spring3.form.Contact;
import net.viralpatel.spring3.form.ContactForm;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.servlet.ModelAndView;

@Controller
public class ContactController {

    private static List<Contact> contacts = new ArrayList<Contact>();
    static {
        contacts.add(new Contact("Barack", "Obama", "barack.o@whitehouse.com", "147-852-965"));
        contacts.add(new Contact("George", "Bush", "george.b@whitehouse.com", "785-985-652"));
        contacts.add(new Contact("Bill", "Clinton", "bill.c@whitehouse.com", "236-587-412"));
        contacts.add(new Contact("Ronald", "Reagan", "ronald.r@whitehouse.com", "369-852-452"));
    }

    @RequestMapping(value = "/get", method = RequestMethod.GET)
    public ModelAndView get() {
        ContactForm contactForm = new ContactForm();
        contactForm.setContacts(contacts);
        return new ModelAndView("add_contact", "contactForm", contactForm);
    }

    @RequestMapping(value = "/save", method = RequestMethod.POST)
    public ModelAndView save(@ModelAttribute("contactForm") ContactForm contactForm) {
        System.out.println(contactForm);
        System.out.println(contactForm.getContacts());
        List<Contact> contacts = contactForm.getContacts();
        if (null != contacts && contacts.size() > 0) {
            ContactController.contacts = contacts;
            for (Contact contact : contacts) {
                System.out.printf("%s \t %s \n", contact.getFirstname(), contact.getLastname());
            }
        }
        return new ModelAndView("show_contact", "contactForm", contactForm);
    }
}

In above ContactController class, we have defined two methods: get() and save(). get() method: This method is used to display the Contact form with pre-populated values. Note we added a list of contacts (the contacts are initialized in a static block) to the ContactForm and passed it to the add_contact view. save() method: This method is called when the form is submitted; it reads the submitted contacts from the ContactForm, prints them and renders the show_contact view.

File: add_contact.jsp

<%@ taglib uri="http://www.springframework.org/tags/form" prefix="form"%>
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%>
<html>
<head>
<title>Spring MVC Multiple Row Submit - viralpatel.net</title>
</head>
<body>
<h2>Spring MVC Multiple Row Form Submit example</h2>
<form:form method="post" action="save.html" modelAttribute="contactForm">
    <table>
    <tr>
        <th>No.</th>
        <th>Name</th>
        <th>Lastname</th>
        <th>Email</th>
        <th>Phone</th>
    </tr>
    <c:forEach items="${contactForm.contacts}" var="contact" varStatus="status">
    <tr>
        <td align="center">${status.count}</td>
        <td><input name="contacts[${status.index}].firstname" value="${contact.firstname}"/></td>
        <td><input name="contacts[${status.index}].lastname" value="${contact.lastname}"/></td>
        <td><input name="contacts[${status.index}].email" value="${contact.email}"/></td>
        <td><input name="contacts[${status.index}].phone" value="${contact.phone}"/></td>
    </tr>
    </c:forEach>
    </table>
    <br/>
    <input type="submit" value="Save" />
</form:form>
</body>
</html>

In above JSP file, we display contact details in a table. Also each attribute is displayed in a textbox. Note that modelAttribute="contactForm" is defined in the <form:form /> tag. This tag defines the model attribute name for Spring mapping. On form submission, Spring will parse the values from the request, fill the ContactForm bean and pass it to the controller. Also note how we defined the textbox names. They are of the form contacts[i].a. Thus Spring knows that we want to display the List item with index i and its attribute a. contacts[${status.index}].firstname will generate each row as follows:

contacts[0].firstname // mapped to first item in contacts list
contacts[1].firstname // mapped to second item in contacts list
contacts[2].firstname // mapped to third item in contacts list

Note: this is also why plain HTML <input> tags are used above instead of Spring's <form:input> tag. Instead of converting it to the following HTML code:

<input name="contacts[0].firstname" />
<input name="contacts[1].firstname" />
<input name="contacts[2].firstname" />

<form:input> converts it into the following:

<input name="contacts0.firstname" />
<input name="contacts1.firstname" />
<input name="contacts2.firstname" />

File: show_contact.jsp

<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%>
<html>
<head>
<title>Spring MVC Multiple Row Submit - viralpatel.net</title>
</head>
<body>
<h2>Show Contacts</h2>
<table width="50%">
<tr>
    <th>Name</th>
    <th>Lastname</th>
    <th>Email</th>
    <th>Phone</th>
</tr>
<c:forEach items="${contactForm.contacts}" var="contact">
<tr>
    <td>${contact.firstname}</td>
    <td>${contact.lastname}</td>
    <td>${contact.email}</td>
    <td>${contact.phone}</td>
</tr>
</c:forEach>
</table>
</body>
</html>

Download source code: Spring-MVC-Multiple-Row-List-example.zip (2.9 MB)

Hi, Nice article, is there any way to add a new row in the table using Jquery ? thanks Hi Kamal, have anyone answered your question? I would like to see an example as well. Viral, You have done a Good Job here. Can you do a tutorial on Apache Tiles to use with Spring MVC? Thanks for your help. Hi Ramesh, Thanks for the compliment. Here is the Spring MVC Apache Tiles Integration Tutorial. I hope this helps. Hi Viral, The tutorial is very helpful but is there anything similar without annotations? We are using an older version of Spring Web MVC. Thanks Sree The filename associated with the last code block should be show_contact.jsp, not add_contact.jsp (it is repeated from the block above) @Phil – Opps, well caught :-) I’ve updated the typo in above tutorial. Thanks Hi Vishal, Thanks for this great tutorial. I have similar kind of requirement but with one change. I need to allow use to add row using script. So what i did is i generate the input text boxes as u specified in the tutorial but its not working. Can you please guide me how should i achieve this? Thanks a lot, Ankur Hi Viral. Thanks for this article. I am able to do al these thing with normal HTTP request. But my requirement is to do this with the help of DOJO(AJAX). Can you point me to some tutorial or can provide some help.
Thanks Hi Viral, Thanks for this awesome article. I have a question related to this. When I want to do a form validation, how can I specify a filed name in the validation class. For example, in this error.rejectValue (“fieldName”, “errorMsgCode”, “errorMessage”) method, what is the right way to write the fieldName part? Thanks for your help in advance. Hi Viral, Thanks very much for this article. It is well done and was quite helpful. I use a whitelist in my controllers to initialize the binder and found that I needed to use a wildcard to get the list to bind properly. As in “contacts*” in the following code: hi viral, thanks very much,this good article,detail explained for all concept and easily understand for all people include beginner. hi i am only new to programing plus new to use a frame work i have choosen spring mvc as my first i have installed it sucessfully …though i think so…can any one tell me where will i get those step 2 jar files from….? and in my server its showing the vmware one….. do i have to install apache or vmware one is ok…. hello, nice tuto! i have this error in this line Multiple annotations found at this line: – Can not find the tag library descriptor for “ jstl/core” someone can help me please Hi Viral, Can you give me any information about Java Reflection API? Thanks in advance hi I tried to aplly your example. but what I found that the reterived List object is coming null. Any clus why it is happening? When I set form tag as enctype=”multipart/form-data” I’m getting list as null. Can you suggest me what should I do Hi Sir, I would like to thank you for teaching me spring through your blog. Now I am seeing you as GURU as far as Spring is concerned. thank you Sreekumar when i run the above application,I found the error in jsp page The error is:-cannot find the tag library descriptor for will any one help me to resolve this problem. Thanks Hi Viral, Thanks for such a nice article along with the source code. Could you please provide another article on Struts 2 (with commons validator & Tile frameworks) and Spring 3.1 Integretaion. Thanks, In the tutorial, you showed how to edit pre-populated form. Any tutorial on how to collect multiple row form. Thanks so much for this great tutorial! Now it’s much clearer for me! That’s what I was looking for! Hi Viral Patel, I have a requirement where I display contact list in the jsp page and have edit link. When you click edit link it opens up another div (dialog style div) for you to update the contact list and save. Here I am trying to create a reusable jsp in which I have code to display a form of contacts and which can be saved. I use jQuery load function to load this editcontacts.jsp with tag with modelAttribute(command)=”contacts”. I can access the modelAttribute without the path value but I cannot use the modelAttribute to load form field using path in each form:input field. Can you please tell me how can I use the modelAttribute set by controller in form to set form field? Thanks, Krunal Hi :) Thanks very much for this article :) I need a help , I search the way that makes me passing a array list to javascript . I’m looking forward for your response :) Excellent examples with details. Appreciate your help. Your site is one stop shop. I stumbled upon this page while looking for solution to my problem. I have the exact same requirement but have to use it in earlier version of Spring. Would you be able to help? 
Here is the issue I have – I have upgraded to Spring 3, but don’t see the change with square brackets like you described. I am using tags, but the output still has square brackets in the name attribute. Do you have any references on this change? I can’t seem to find any on the net. The closest I have found is this: but it is only the ID attribute that has changed, not the name attr. The spring form:input (and form:hidden) tags put the square brackets correctly in the input name attrib in Spring 3.1. The id is as you describe. Seems to work. I tried the code provided by Viral. But its not putting the square brackets in the name attribute. Can you provide the code snippet you are using. Does this work in Spring 3.1 and not in Spring 3.0? Hi, I have a similar requirement where I need to export search results into excel. In my search results JSP I have made the input tag style as disabled=”disabled” because user must not change the results data. In this scenerio spring will not parse the values from request and fill the ContactForm bean and pass it to the controller. Any help appreciated. Hi Viral, thanks a ton.. this is exactly what i was looking for. Keep up the good work :-) Thanks for writing such excellent articles. Great article!! Helped me solve my issue..Was’nt using form tag lib to build list, updated were not binding back. After adding below index, it WORKED!! Thanks! <input name="contacts[${status.index}].firstname"… Spring is binding updates for form input type text <input name="contacts.firstName" type="text" But, NOT not binding the updates for check box back to controller , Any HELP is appreciated !!! <input name="contacts[0].active" type="checkbox" This is a really interesting article, but I would like to know if there’s a way of using combobox loaded with a list from database. can you tell me that i have 4 textboxes that like A,B,C,D i have to take the values for previous year ,present year and current year ie multiple columns we can take can u tell me how to do,THANKS IN ADVANCE In the above example Spring MVC: Multiple Row Form Submit using List of Beans, how do you implement the pagination and save button should send changes back to servlet. I implemented the pageListHolder from spring I am able to display the data but not able to get the changes back in the ModelAttribute which I set a list. thanks Just change name to path for spring 3.1.0 i.e . Thanks a bunch for the article the info on your site is really valuable stuff. Hello, thank you for your article! The following code works for me with spring 3.1.2: Hi Viral, All of your posts were very helpful to go on while i am stuck with any issue.But now i am in the middle of one issue that i couldn’t found any solution from your post.i wish to do “Multiple Row Form Submit using struts 2”.hope you can help me in this. 
Hi Can you please tell me if we search one record in search page, then if it exist it will be printed like grid view in the same page…… The example was good, but I am facing a problem with passing date in list to controller The init binder I am using as @InitBinder public void initBinder(WebDataBinder binder) { SimpleDateFormat dateFormat = new SimpleDateFormat(“MM/dd/yy”); dateFormat.setLenient(false); binder.registerCustomEditor(Date.class, new CustomDateEditor(dateFormat, true)); } The exception is Caused by: java.lang.NumberFormatException: For input string: “” Hi good simple application , i have small doubt here, when i run this application , how add_contact.jsp is getting displayed , u did not mention index.jsp any where , how it is directly forwarding to add_contact .jsp , is this the behaviour of the Dispatcher Servlet . Can any one reply to this. Hi, add_contact.jsp is displayed via Spring controller. The get() method of ContactController renders add_contact page as you can see it in source code. I strongly recommend you to go through Spring MVC tutorial series to understand these concepts. In the same example if u try to create a FormValidator and had to put error messages for all fileds what path value will u mention. suppose for the below field Please reply.Thanks. What ever u said is true, but my doubt is how the controller is detecting index.jsp which contains “get.html” , which takes to Controller and results to add_contact.jsp , How controller detects index.jsp which is not even declared as welcome file in web.xml By default, tomcat (or other JEE Containers) renders index.jsp from WebContent folder. Although we havn’t defined index.jsp as welcome file, still by default it takes as welcome file. Read more Hey, Thanks Viral.. It helped me a lot… Great job . keep it up… Hi Viral, I am new to this SPRING MVC framework. I referred your example and created all the files like .java, .jsp etc. When i tried executing it i get an error as “WARNING: No mapping found for HTTP request with URI [/SpringMVC_Multi_Row/get.html] in DispatcherServlet with name ‘spring'”. Please help me to resolve this error. Thanks in Advance. I too facing same problem… Hi Gireesh, I was using eclipse IDE. We need to clean the project before we execute the application when using tomcat. When i cleaned the project and executed it was working fine for me. Please try “Project -> clean”. and execute the application. Regards, Raja Hi, nice tutorial… your tutorials are always a great source of help. I have a requirement to create dynamic forms. our requirement is like , if the user clicks on a add button, dynamically the fields should be displayed for him to enter his details like his experiences in various companies. could you please guide me where to start with? i am using spring mvc and hibernate annotations. Hi Viral, Nice tutorial. I need your help. I want to generate a table of records dynamically based on given input fields. Hi Viral, Thanks for providing such a good tutorial. It really helps beginners to start with this application and is easy to understand the flow. Keep posting. Thanks once again. Regards, Raja Can I get the similar code for struts 1.3 I was wondering if you ran into any ConcurrentModificationException problems with this? I get this when rendering the multi row form. Hi H, I’m having same problem and resolved it. I’m using the ‘count’ variable badly :(, the ‘count’ starts 1, and the List get method use 0 based system. I replace this tag with and resolve the problem. 
The tags: < form:input path=”xxx[${status.count}].yyy” id=”xxx[${status.count}].yyy” / > < form:input path=”xxx[${status.count-1}].yyy” id=”xxx[${status.count-1}].yyy” / > Could you please suggest what changes required in controller/jsp if i have used LinkedHashMap instead of List collection object. Mahesh Very nice articals, helped me many times. Thanks. -Ganesh. sorry my html elment does’t show up in my example post above. I have a form that dynamically add text boxes and that are binded to list object in command object. The same form is used to retrieve the value form list.The form enable us to delete the textboxes ,which is binded to list in command object,but the list in command object is not reinitialized .What is the solution for this? Any solution for this? I am facing the same problem I have same problem…please help) Hello Viral I always appreciate your tutorial and work,for most of java framework i have followed your tutorials at starting phase to accelerate my learning.Now i am working with one US Client project (confidential) .I stuck with spring MVC binding with some complex forms. The forms are getting populated with data and binding but when i submit the form its showing exception like contacts[0] invalid property of form bean….root : spring autogrow nested property path. i followed this tutorial to make every complex forms(multiple list.multiple radio,multiple checkbox) Please Reply Do you found any solution?? I m facing the same issue Excellent explanation Thanks a lot, very nice example. With the newer release of Spring MVC (I am using 3.1.1) you can use the {/code] way of expressing the input boxes (on the JSP page). Spring has apparently corrected the value generated in the name=”…” parameter so that it conforms to the w3c spec! Woo hoo! Hmm looks like the commenting did not come across properly. You can you use as was a way to express the input fields…Spring MVC has updated their code to be w3c compliant Hey, Please can you provide the example of webflow example with multiple controller Hi, Here you populated the list before viewing it on “add_contact.jsp”. I have a requirement where user can add n number of objects at runtime from view layer. Please suggest. Thanks!! That should’nt be a problem. You can define a method in ContactControllerto add new users in List. Map this new method using @RequestMapping(“/adduser”). Hope you got whats to be done here. contacts IMPORTANT Actually w3c spec is not saying “name” attribute in “input” element can’t have square bracket. The “NAME” is just its internal term to categorize attribute. See this and find “name” attribute for “input” element: The attribute type is CDATA! There is little restriction for CDATA type, see Nice example…btw, Spring 3.2.2 form:input works and will create the same hand-coded input. However, I noticed that on submission, Spring creates new instances of Contact and does not reuse the Contact instances that were populated in the ContactForm at form get time… I wonder if I set something up incorrectly. Following up on my observation: if I use @SessionAttribute(“myFormObject”) then it works fine…Spring 3.2.2 Can you please tell me how to validate the multiple input values in validate method since I am using xml based spring framework Hi, Create new domain class like studentList.class. Declare the following variable. . In front end : . its exactly like matrix . 
thank so much viralpatel ..this Example is very help to me..u did a great job Hi, I have implemented the multiple rows with editable and its not working for bulk records. In my case I have more than 3000 records(all are editable) in a page. When I submit the page the list object is coming null. If tried with 250 records its working fine. Please help on this. Thanks Viral, this one workes correctly, without doing any change in code. What will be the REST request for this service? IS THERE ANY NECCISITY OF USING A GENRIC LIST , STILL I AM GETTING ARRAY OUT OF BOND EXCEPTION , AS MY LIST IS NOT GENRIC we can map list by spring tag also. Please update in your site. Thax Example: What if you dont know contacts size already, how do you do, in my case I don’t know the table size which is going to be enter, it’s dynamic, it has add row button. can you please suggest how should I do? Hi Viral, Nice explaination. But I need to save multi rows of table where no. of columns in table is not fixed it is also dynamic or user configuration value that can be fetched from DB at time of form load. Hence, bean structure is not fixed, so no bean like contact can be defined. Only info to relate with is header-name, row-value and row-no. Hence row-values for all header-names corresponding to single row-no. form a single row of table. Do you have any idea of implementing Save for this kind of scenerio. In this case you might want to create a HashMap in your bean and map multiple values within the map. Check this tutorial for more info: Using HashMap in Spring MVC Forms. Thanks a lot Viral.., good example and excellent explanation! :) Many many thanks I really elated to the way you demonstrated the complex concepts in an elegant manner. it’s simply super. Hi, Anybody would help me. I’m string to bind String array. I’m getting the below error com.ibm.ws.webcontainer.servlet.ServletWrapper service SRVE0014E: Uncaught service() exception root cause useradmin: com.ibm.ws.jsp.webcontainerext.JSPErrorReport: JSPG0036E: Form contains two String arrays String[] parameterkeys; String[] parametervalues; JSP looping thru for each.. i tried the code, the problem is the attribute is not getting updated to the save page and it has null value. What can be the problem? Hi, i tried this code and i m trying to render the same page instead of new page but i m not able to update the attribute and instead it comes to null. can some one help me with this problem? can you let me know how to perform spring validation on this example do u got the solution for the validation part. Excellent Article!!! Can you give a example how to validate the multiple rows Hi viral Patel, Good article dude. i have situation to submit more than 255 objects from jsp page. Spring MVC throws index out of bound excetion, when the list exceeds 255. Is there any to set auto grow collection. Thanks dude. In add jsp page name=”contacts[${status.index}].firstname”.how do u write validation for this. Excellent explanation :) Hi Viral, It is such a nice tutorial. But I am stuck in validation part. I am using Spring annotation validation. In above scenario how do valid input field using annotation. For eg : name cannot be blank how to @NotBlank for name. 
hi i am getting above error need ur help i have downloaded your application(Multiple Row Form Submit Using List Of Beans) and running in eclipse its giving below errror org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from ServletContext resource [/WEB-INF/spring-servlet.xml]; nested exception is java.io.FileNotFoundException: Could not open ServletContext resource [/WEB-INF/spring-servlet.xml] thanks in advance Rahul you rock man, good one Hi Viral, I have a same situation but one of the attribute of the list object is an object i.e. ithis case : public class Contact { private String firstname; private String lastname; private String email; private String phone; private ClassName obj_name; …. } I creating an object of ClassName and assing to every Contact object of list but on form submit I am getting the value of object obj_nmae as null. What could be the reason for this??? its sending null over post request and model attribute coming null at controller . Not able to figure out? Very nice article. I had to create a similar page and this tutorial provided me overview of how to go about it. Thanks and keep up the good work. Very good article. I have tried with a checkbox instead of textbox (used in this sample) and binding it with a boolean variable. The textbox changes are reflecting but the checkbox changes are not getting updated in the pojo. I finally got the solution , form:checkbox path=”contacts[${status.index}].delete” This thing is not going to work. Try start with two browsers and enter different data, play around see what hapened. it is going to corrupt each other. The reason the controller is stateless, should not have field ‘Contacts’. Nice Explanation. But I have a situation where I dont have list of beans initially. User can add and deleter rows on form. Initially row will be empty and user will fill the data and submit the form. Do you have any suggestion for this? Hi in my case when i am submitting the form after editing i am getting an exception like… org.springframework.beans.NullValueInNestedPathException: Invalid property ‘contacts[0]’ of bean class [com.merck.uk.web.to.ContactForm]: Cannot access indexed value of property referenced in indexed property path ‘contacts[0]’: returned null Hi ! Very usefull article ! In your example you use only a list of contact. Byt what if the Bean Contact get a list too. How do you process to display the information ? Like that ? Thank you ! :) Hello, Check my answer here for an alternate solution without creating wrapper class. The download link is broken, any chance to update? Hi, How to update deleted records? Because if i delete the row on JSP using JavaScript, it’s not updating on controller. but changing the value is updating on controller. thank you TABLE id=”dataTable” width=”350px” border=”2px” style=”margin:5px;”> Option project_id isMaster Y N Hi: I really appreciated if you can posted the code above. The link doesn’t work anymore to download the code. Greatly need your help!! its not multiple rows! its static 4 rows!!!
http://viralpatel.net/blogs/spring-mvc-multi-row-submit-java-list/
CC-MAIN-2017-47
en
refinedweb
First solution in Clear category for Painting Wall by Amachua

# This function update the tuple d by taking into account the new data e.
# The output is the amount of value in the N-space which aren't already taken into account in d.
def update(d, e):
    # minmin and maxmax are the lowest and highest value for the new added set in d.
    # minmax and maxmin are the values needed to compute the number of new painted walls.
    # intersect is also used for the computation of the painted walls, it's for the old set contains in the new one.
    minmin, minmax, maxmax, maxmin, intersect = e[0], e[0], e[1], e[1], 0
    for i in range(len(d)):
        a = d.pop(0)
        # update the limit by taking into account the value of the data a.
        if a[0] <= e[0] <= a[1]:
            minmin = a[0] if a[0] < minmin else minmin
            minmax = a[1] if a[1] < maxmin else maxmin
        if a[0] <= e[1] <= a[1]:
            maxmax = a[1] if a[1] > maxmax else maxmax
            maxmin = a[0] if a[0] > minmax else minmax
        # if the data a is in the set e update the intersect value.
        # Check if the upper or lower value was updated.
        # If it's not add it in d.
        if a[0] > e[0] and a[1] < e[1]:
            intersect += a[1] - a[0]
        else:
            d.append(a)
    if maxmin - minmax != 0:
        d.append((minmin, maxmax))
    return maxmin - minmax - intersect

def checkio(num, data):
    a = []
    for i, d in enumerate(data, 1):
        num -= update(a, d)
        print(a, num)
        if num <= 1:
            return i
    return -1

Feb. 28, 2014
https://py.checkio.org/mission/painting-wall/publications/Amachua/python-3/first/share/ed7f21e806c4c1792a63497ec34c4dc3/
CC-MAIN-2021-31
en
refinedweb
C# Corner In Part 2 of this three-part series on dataflow programming with the Task Parallel Library, Eric Vogel shows you how to create a Windows 8 application that uses a composite parallel data flow. In Part 1 of this series on building Windows 8 applications with the Task Parallel Library (TPL) dataflow components in the .NET Framework, I covered how to use the ActionBlock and the TransformBlock dataflow blocks. This time, I'll take it a step further and show you how to link dataflow blocks, to create more complex parallel data flows. A common usage pattern is the producer/consumer scenario. By linking dataflow blocks together, one block can post a message that is then pumped through one or many dataflow blocks that further process the data. To demonstrate this concept, let's look at a hypothetical data flow for an application. Say you're reading temperatures from a sensor in regular sub-second intervals. The data needs to be displayed as it's read in real-time. In addition, you need to be able to apply some post processing to the raw temperatures such as formatting for Fahrenheit, and comparisons to the average temperature for the current date. This scenario tackles a few common dataflow issues. For one, the data needs to be broadcast to multiple sources. Secondly, the data must be read by multiple sources in order to be further processed and formatted. Luckily, the Task Parallel Dataflow (TDF) library includes the BroadcastBlock, which makes implementing this scenario straightforward. Without further ado, let's get down to the details. Open up Visual Studio 2012 Release Candidate and create a C# Metro style App. First, open up MainPage.xaml and use the XAML from the root Grid element in Listing 1. The UI is fairly simple. Three sets of stack panels are associated with the three buttons on the page. Broadcasting Data For this sample application, you'll need to have the Parallel Dataflow NuGet package installed. Refer to Part 1 for installation instructions. Now that the TDF package is installed, open up MainPage.xaml.cs and add the following using statements: using System.Threading.Tasks; using System.Threading.Tasks.Dataflow; Then add the BroadcastBlock to the MainPage class: BroadcastBlock<int> _broadcaster; The BroadcastBlock will be responsible for transferring temperature data to the various TransformBlock objects for further processing. Once a Transform block has processed and formatted a piece of datum, it'll pump the datum to an ActionBlock for UI display. Next, instantiate the _broadcaster block in the OnNavigatedTo event of the page: protected override void OnNavigatedTo(NavigationEventArgs e) { _broadcaster = new BroadcastBlock<int>(x => x); } Now setup the click event handler for the BroadCast button. 
Within the event, an ActionBlock is created that will display a message via the Message TextBlock on the page: private async void BroadCast_Click(object sender, RoutedEventArgs e) { ActionBlock<int> simpleDisplayer = CreateUiUpdateActionBlock<int>(Message); _broadcaster.LinkTo(simpleDisplayer); await BroadCastData(); } The CreateUiUpdateActionBlock<T> method is a simple helper function that creates an ActionBlock to set the Text property of a TextBlock element on the UI thread: private ActionBlock<T> CreateUiUpdateActionBlock<T>(TextBlock element) { return new ActionBlock<T>(x => element.Text = x.ToString(), new ExecutionDataflowBlockOptions() { TaskScheduler = TaskScheduler.FromCurrentSynchronizationContext() }); } Next, the simpleDisplayer ActionBlock is linked to the _broadcaster block through the LinkTo method: _broadcaster.LinkTo(simpleDisplayer); Now, there's a lot of power in that link statement. Whenever any data is posted to the _broadcaster block, it will immediately be propagated to the simpleDisplayer block asynchronously. Next, I'll generate and post some randomized temperature data to the _broadcaster block by awaiting the BroadCastData method: await BroadCastData(); private async Task BroadCastData() { Random r = new Random((int)System.DateTime.Now.Ticks); int temp = 0; for (int i = 0; i < 1000; i++) { await Task.Delay(125); temp = r.Next(60, 80); await _broadcaster.SendAsync(temp); } } I've added a 125-msec delay between temperature datum postings to more accurately simulate reading from a sensor. Post Processing Data Now that the application is correctly broadcasting and displaying temperature data, let's get started on the second requirement. When the Transform button is clicked, the temperature should be displayed in the Fahrenheit format. To accomplish this task, the TransformBlock is ideal because it can receive a raw integer temperature and format it to a Fahrenheit formatted string: TransformBlock<int, string> _formatter = new TransformBlock<int, string>(x => String.Format("{0:G2}°F", x)); Once the data has been cleaned up, it can be displayed to the user via an ActionBlock: ActionBlock<string> transformedDisplayed = CreateUiUpdateActionBlock<string>(TransformedMessage); The last step is to link all of the dataflow blocks together so that the _broadcaster sends data to the formatter, which displays data via the transformDisplayed block: _broadcaster.LinkTo(formatter); formatter.LinkTo(transformedDisplayed); private void Transform_Click(object sender, RoutedEventArgs e) { TransformBlock<int, string> formatter = new TransformBlock<int, string>(x => String.Format("{0:G2}°F", x)); ActionBlock<string> transformedDisplayed = CreateUiUpdateActionBlock<string>(TransformedMessage); _broadcaster.LinkTo(formatter); formatter.LinkTo(transformedDisplayed); } Now, let's implement the last requirement for displaying a formatted Fahrenheit delta temperature between a read temperature and the average temperature for the day. The processing will occur in the PostProcess button click event handler as shown in Listing 2 . Computing the delta average temperature is easily accomplished via a TransformBlock: const int average = 75; TransformBlock<int, int> averageDelta = new TransformBlock<int, int>(x => x - average); Now another TransformBlock is created to format the delta temperature to be displayed in degrees Fahrenheit. 
In addition, a '+' or '-' is prepended to the temperature to indicate the change:

TransformBlock<int, string> formatPost = new TransformBlock<int, string>(x =>
{
    string preFix = string.Empty;
    if (x > 0) preFix = "+";
    return String.Format("{0}{1:G2}°F", preFix, x);
});

Finally, an ActionBlock is created to update the text of the PostProcessMessage TextBlock on the page, and the dataflow blocks are linked together:

ActionBlock<string> postProcessDisplayer = CreateUiUpdateActionBlock<string>(PostProcessMessage);
_broadcaster.LinkTo(averageDelta);
averageDelta.LinkTo(formatPost);
formatPost.LinkTo(postProcessDisplayer);

You should now be able to run the completed application shown in Figure 1. As you can see, the Task Parallel Dataflow library is very versatile for implementing complex parallel data flows. Through dataflow block composition, a myriad of problems can be solved in a simple, efficient and elegant manner. Stay tuned for Part 3 in this series on dataflow programming with the Task Parallel Library dataflow components in .NET Framework 4.5 to learn how to create custom dataflow blocks.
https://visualstudiomagazine.com/articles/2012/09/19/parallel-dataflow-part-2.aspx
CC-MAIN-2021-31
en
refinedweb
Java enum is a class in Java that holds a group of variables that cannot be changed. We can also call them constants or final variables. To create an enum, we use the enum keyword. It is used for things which are fixed, such as the days of the week – Monday, Tuesday, Wednesday etc. Or the directions – East, West, North and South. Or colors such as green, blue, red, yellow etc. Java enum constants are static and final, and together they form a fixed set of constants. In Java, an enum can be declared inside a class as well as outside (at the top level), depending on our wish. An enum cannot extend another class, because it implicitly extends java.lang.Enum, but it can implement interfaces. Java enum can also be used in switch statements. We use an enum to represent numeric or textual data which has only a small set of possible values. Every enum implements the Serializable and Comparable interfaces. As we know, Java ensures type safety, which means we cannot perform an operation unless it is valid for the object. Java enum ensures type safety as well.

Java enum Example

enum Direction {
    EAST, WEST, NORTH;
}

public class Test {
    public static void main(String[] args) {
        Direction d = Direction.EAST;
        System.out.println(d);
    }
}

The output of this program will be: EAST

Inside an enum, the first thing must be the list of constants; constructors, fields and methods can follow. Now we will discuss an example to see how we can use an enum inside a class.

class EnumExample {
    enum Direction {
        EAST, WEST, NORTH, SOUTH;
    }

    public static void main(String[] args) {
        Direction d = Direction.EAST;
        System.out.println(d);
    }
}

The output of the above program will be: EAST

Talking about constructors of an enum type, they can only be private. The constructor runs automatically for each constant defined at the start of the enum block; we can never invoke an enum constructor ourselves with new.

ENUM CONSTANTS

Enum is a special data type with which we can define a fixed set of named constants in Java. Their names are written in capital letters by convention, as they are constants. Each enum constant has an ordinal value which starts from 0, 1, 2, 3 and so on. But we have to define fields and a constructor if we want to attach a specific value to each constant.

Summary
- We should always write enum constants in capital letters.
- We use the enum keyword for creating an enum.
- An enum can be traversed.
- We can use an enum in switch statements.
- An enum constant always represents an object of the enum type.
- A main() method can be put inside an enum.
- We can iterate over the constants of an enum (for example with values()).
- We cannot change the value of an enum constant once created.
- An enum cannot extend another class, but it can implement interfaces.
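For completeness, here is a small sketch (not from the original tutorial) showing how fields and a constructor attach a value to each constant, and how an enum works in a switch statement:

enum Direction {
    EAST("E"), WEST("W"), NORTH("N"), SOUTH("S");

    private final String code; // value attached to each constant

    // Enum constructors are implicitly private and run once per constant.
    Direction(String code) {
        this.code = code;
    }

    public String getCode() {
        return code;
    }
}

public class EnumDemo {
    public static void main(String[] args) {
        Direction d = Direction.EAST;
        System.out.println(d + " -> " + d.getCode());

        // Enum in a switch statement.
        switch (d) {
            case EAST:
                System.out.println("Going east");
                break;
            default:
                System.out.println("Going some other way");
        }
    }
}

The output of this program will be: EAST -> E followed by Going east.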
https://www.developerhelps.com/java-enum/
CC-MAIN-2021-31
en
refinedweb
A Sordid Little Tale Of Unexpected Security Exceptions It was a dark and stormy coding session; the rain fell in torrents as my eyes were locked to two LCD screens in a furious display of coding … …sorry sorry, I just can’t continue. It’s all a lie. This actually a cautionary tale describing one subtle way that you can run afoul Code Access Security (CAS) when attempting to run an application in partial trust. But who wants to read about that? Right? Right? Well this isn’t a sordid tale, but if you bear with me, you may just find it interesting. Either that, or you may just take pity on me that I find this type of thing interesting. I was hacking on NuGet the other day and all I wanted to do was write some code that accessed the version number of the current assembly. This is something we do in Subtext, for example. If you scroll to the very bottom of the admin section, you’ll see the following. ![Subtext Admin - Feedback - Google Chrome Subtext Admin - Feedback - Google Chrome]() As you can imagine, the code for to get the version number is very straightforward: System.Reflection.Assembly.ExecutingAssembly().GetName().Version Or is it!? (cue scary organ music) What the code does here (besides appearing to smack the Law of Demeter in the mouth) is get the currently executing assembly. From that it gets the Assembly name and extracts the version from the name. What could go wrong? I tested this in medium trust and it received the “works on my machine” seal of approval! But does it work all the time? Well if it did, I wouldn’t be writing this blog post would I? Fortunately, my colleague David Fowler caught this latent bug during a code review. Levi (no blog) Broderick was brought in to help explain the whole issue so a dunce like me could understand it. These two co-workers are scary smart and must never be allowed to fall into a life of crime as they would decimate the countryside. Just letting you know. As it turns out, code exactly like this was the source of a medium trust bug in ASP.NET MVC 2 (that we fortunately caught and fixed before RTM). So what gives? Well there’s very subtle latent bug with this code. To illustrate, I’ll put the code in context. The following snippet is a class library that makes use of the code I just wrote. using System.Reflection; using System.Security; [assembly: SecurityTransparent] namespace ClassLibrary1 { public static class Class1 { public static string GetExecutingAssemblyVersion() { return Assembly.GetExecutingAssembly().GetName().Version.ToString(); } } } We need an application to reference that code. The following is code for an ASP.NET MVC controller with an action method that calls the method in the class library and returns it as a string. It may seem odd that the action method returns a string rather than an ActionResult, but that’s allowed. ASP.NET MVC simply wraps it in a ContentResult. using System.Web.Mvc; namespace MvcApplication1.Controllers { public class HomeController : Controller { public string ClassLibAssemblyVersion() { return ClassLibrary1.Class1.GetExecutingAssemblyVersion(); } } } Still with me? When I run this application and visit /Home/ClassLibAssemblyVersion everything works fine and we see the version number. ![httplocalhost29519homeClassLibAssemblyVersionFixed - Windows Internet Explorer httplocalhost29519homeClassLibAssemblyVersionFixed - Windows Internet Explorer]() Now’s where the party gets a bit wild (but still safe for work). 
At this point, I’ll put the class library assembly in the GAC and then recompile the application. I’m going to assume you know how to do that. Note that I’ll need to remove the local copy of the class library from the bin directory of my ASP.NET MVC application and also remove the project reference and replace it with a GAC reference. When I do that and run the application again, I get. Oh noes! So what happened here? Reflector to the rescue! Looking at the stack trace, let’s dig into RuntimeAssembly.GetName(Boolean copiedName) method. [SecuritySafeCritical] public override AssemblyName GetName(bool copiedName) { AssemblyName name = new AssemblyName(); string codeBase = this.GetCodeBase(copiedName); this.VerifyCodeBaseDiscovery(codeBase); // ... snipped for brevity ... return name; } I’ve snipped out some code so we can focus on the interesting part. This method wants to return a fully populated AssemblyName instance. One of the properties of AssemblyName is CodeBase, which is a path to the assembly. Once it has this path, it attempts to verify the path by calling VerifyCodeBaseDiscovery. Let’s take a look. [SecurityCritical] private void VerifyCodeBaseDiscovery(string codeBase) { if ((codeBase != null) && (string.Compare(codeBase, 0, "file:", 0, 5 , StringComparison.OrdinalIgnoreCase) == 0)) { URLString str = new URLString(codeBase, true); new FileIOPermission(FileIOPermissionAccess.PathDiscovery , str.GetFileName()).Demand(); } } Notice that last line of code? It’s making a security demand to check if you have path discovery permissions on the specified path. That’s what’s failing. Why? Well before you put the assembly in the GAC, the assembly was being loaded from your bin directory. Naturally, even in medium trust, you have rights to discover that path. But now that the class library is in the GAC, it’s being loaded from a subdirectory of c:\Windows\Assembly and guess what. Your medium trust application doesn’t have path discovery permissions to that directory. As an aside, I think it’s too bad that this particular property doesn’t check its security demand lazily. That would be my kind of property access. My gut feeling is that people don’t often ask for an assembly’s Codebase as much as they ask for the other “safe” properties, like Version! So how do we fix this? Well the answer is to construct our own AssemblyName instance. new AssemblyName(typeof(Class1).Assembly.FullName).Version.ToString(); This implementation avoids the security issue I mentioned earlier because we’re generating the AssemblyName instance ourselves and it never has a reference to the disallowed path. If you want to see this in action, I put together a little demo showing the bad approach and the fixed approach. You’ll need to GAC the ClassLibrary1 assembly to see the exception occurred. I have another action that has the safe implementation. Try it out. As a tangent, the astute reader may have noticed that I used the assembly level SecurityTransparentAttribute in my class library. Is that a case of my assembly attempting to deal with self esteem issues and shying away from a clamoring public? Why did I put that attribute there? The answer to that, my friends, is a story for another time. 6 responses
http://haacked.com/archive/2010/11/04/assembly-location-and-medium-trust.aspx/
CC-MAIN-2021-31
en
refinedweb
In my last post I introduced the all-new Apex Metadata API. In this post we will take a look at the security aspects of this new feature. When discussing this new API with customers, partners, and Salesforce employees, security is often the first topic raised. The Apex Metadata API empowers developers to automate changes to an org’s configuration. Changing the structure of an org is a big deal, and also has big implications for the data in that org. Trust is our number one value at Salesforce, and the Apex Metadata API is built to be a trusted interface. Three features provide secure access to an orgs’ metadata: With these features you can “trust, but verify.” The first two help you trust the functionality of apps using the Apex Metadata API. The third lets you verify the app’s behavior. While we intend to support many more types of metadata than the two in this debut of the Apex Metadata API, we will not expose the entire Metadata API in Apex. This assures customers that packages they install can only modify safe metadata types — types that get modified in predictable ways. For example, we will not provide the ability to create Apex classes, Visualforce pages, or Lightning components via Apex. If managed packages have the ability to write code in a subscriber org, it becomes difficult for Salesforce to review their security profile. To assure customers that apps they install only modify metadata types in predictable ways, we will not support automated code generation. In addition, we’re limiting which packages can execute a metadata deploy via Apex. The Apex Metadata API can be executed in three scenarios: These restrictions ensure that the deploy is coming from a trusted entity. Metadata changes can be made by a certified managed package, which is provided by a known, registered ISV. Partner apps in AppExchange that can make metadata changes in a subscriber org will alert subscribers. Partner apps must include this notification to pass the AppExchange security review. Metadata changes can also be made by code that is known to the org in question. The latter can be unmanaged code developed or installed and vetted in the org itself. Or it can be uncertified managed code, but only if the subscriber has explicitly allowed it. Uncertified managed packages can only do metadata operations if the subscriber has set the Apex setting shown on the right. With this setting ISVs can test managed packages that aren’t yet certified, and enterprises can use managed packages to manage their apps. These scenarios are summarized in the following table that shows which permissions and settings are needed to use Apex Metadata API. All metadata operations using the Apex Metadata API are tracked and the namespace of the code performing the deploy is recorded in the setup audit trail. You always know which namespace made what changes and when. This is where you verify the behavior of your trusted apps. Apex Metadata API deploys can modify metadata outside its own namespace. This is necessary to support many important use cases. But it makes some people nervous, so let’s look at the implications. Knowing what metadata can get updated and how will help you make the right choices about how to use this API. But before we dig into the details, it’s important to understand that managed Apex manipulates metadata in a subscriber org in the same way that unmanaged Apex does, with two exceptions: A managed package’s code does a deploy on behalf of the subscribing org. If you remember this, everything else is intuitive. 
All metadata created by a managed package’s Apex is created in the subscriber namespace. Managed Apex never creates metadata that has the same namespace as the package the code is running from. A managed package’s Apex can update any metadata in the org where the metadata is subscriber-controlled and the metadata is visible from within the managed package’s namespace. Therefore, it can update any public subscriber-controlled metadata, whether it’s in the same package, the subscriber org, or a different managed package. It can also update private subscriber-controlled metadata in its own namespace. If you are a managed package developer, this makes the Apex Metadata API a great tool for securing more of your app. You can now hide your app configurations as protected metadata and still manipulate them with Apex. Apex in a managed package can update developer-controlled metadata only if it’s in the subscriber org namespace. For example, if Apex in the managed package creates a record of a custom metadata type, that record will be in the subscriber namespace. Code in the managed package can update any of the fields. However, Apex cannot update developer-controlled fields of records contained in its own package, even though they’re in the same namespace. That metadata can only be updated with a package upgrade. Some of this may seem counterintuitive. If so, remember that aside from its ability to access protected metadata, the code acts like non-namespaced code, but with an audit trail showing the namespace of the Apex that made the change. With the exception of protected metadata, all the capabilities and limitations I just described are the same capabilities and limitations an admin in the subscriber org has. Here’s how that plays out in all the scenarios you may encounter: The Metadata API itself adds an additional layer of trust. Metadata API permissions are respected by an Apex Metadata API deployment. While Apex lets you write code that enables end users to enqueue a deployment, that deployment will fail. Only users who can already do a Metadata API by other means will be able to do it in Apex. In contrast, retrieve calls from the Metadata API work for any users your app has granted access. As more metadata types are exposed in Apex, this will be a handy way to provide read access to info not available in metadata describes. Some developers leverage remote site settings and call the Metadata API from their app. This provides the same capabilities as the Apex Metadata API, but lacks most of the security controls. The Summer ‘17 release marks the beginning of the end to this approach (or the complete end if you only rely on remote site settings to update custom metadata records or page layouts). Using the Apex Metadata API has many advantages over code that relies on remote site settings. The Apex Metadata API enables you to: In addition to the security benefits, the Apex Metadata API is much easier to use! Wrapping the Metadata API and calling it from Apex requires a lot more code than using this new native solution. And remote site settings can be challenging for partners with a large, low-touch customer base. It isn’t difficult to guide a few large customers through the remote site setting setup. But if you have thousands of customers, this is a manual step that many admins can overlook or do incorrectly, which can prevent your code from functioning properly. 
If you’re a developer in an ISV, there are a few things to keep in mind as you use the Apex Metadata API: As with any great power, this one comes with great responsibility. We have therefore provided many features to maximize the safety of the Apex Metadata API. These features enable you to trust the apps you and others build, but verify their behavior. Check out my previous post for an overview of the Apex Metadata API. Keep an eye out for follow up posts diving deeper into the setup UI and post install script use cases. And join us at TrailheaDX June 28-29 to see the new Apex Metadata API in action! Then go forth, build some cool stuff, and tell us all about it in the Success Community’s Apex Metadata API group. And be safe!.
https://developer.salesforce.com/blogs/engineering/2017/06/apex-metadata-api-security.html
CC-MAIN-2021-31
en
refinedweb
@decoratoror directly), or as a React Hook Read more in the Times Open blog post. If you just want a quick sandbox to play around with: npm install --save react-tracking import track, { useTracking } from 'react-tracking'; Both @track() and useTracking() expect two arguments, trackingData and options. trackingDatarepresents the data to be tracked (or a function returning that data) optionsis an optional object that accepts three properties (the object passed to the decorator also accepts a fourth forwardRefproperty): dispatch, which is a function to use instead of the default dispatch behavior. See the section on custom dispatch()below. dispatchOnMount, when set to true, dispatches the tracking data when the component mounts to the DOM. When provided as a function will be called in a useEffect on the component's initial render with all of the tracking context data as the only argument. process, which is a function that can be defined once on some top-level component, used for selectively dispatching tracking events based on each component's tracking data. See more details below. forwardRef(decorator/HoC only), when set to true, adding a ref to the wrapped component will actually return the instance of the underlying component. Default is false. trackingprop The @track() decorator will expose a tracking prop on the component it wraps, that looks like: { // tracking prop provided by @track() tracking: PropTypes.shape({ // function to call to dispatch tracking events trackEvent: PropTypes.func, // function to call to grab contextual tracking data getTrackingData: PropTypes.func, }); } The useTracking hook returns an object with this same shape, plus a <Track /> component that you use to wrap your returned markup to pass contextual data to child components. We can access the trackEvent method via the useTracking hook from anywhere in the tree: import { useTracking } from 'react-tracking'; const FooPage = () => { const { Track, trackEvent } = useTracking({ page: 'FooPage' }); return ( <Track> <div onClick={() => { trackEvent({ action: 'click' }); }} /> </Track> ); }; The useTracking hook returns an object with the same getTrackingData() and trackEvent() methods that are provided as props.tracking when wrapping with the @track() decorator/HoC (more info about the decorator can be found below). It also returns an additional property on that object: a <Track /> component that can be returned as the root of your component's sub-tree to pass any new contextual data to its children. Note that in most cases you would wrap the markup returned by your component with <Track />. This will merge a new tracking context and make it available to all child components. The only time you wouldn't wrap your returned markup with <Track />is if you're on some leaf component and don't have any more child components that need tracking info. 
import { useTracking } from 'react-tracking'; const Child = () => { const { trackEvent } = useTracking(); return ( <div onClick={() => { trackEvent({ action: 'childClick' }); }} /> ); }; const FooPage = () => { const { Track, trackEvent } = useTracking({ page: 'FooPage' }); return ( <Track> <Child /> <div onClick={() => { trackEvent({ action: 'click' }); }} /> </Track> ); }; In the example above, the click event in the FooPage component will dispatch the following data: { page: 'FooPage', action: 'click', } Because we wrapped the sub-tree returned by FooPage in <Track />, the click event in the Child component will dispatch: { page: 'FooPage', action: 'childClick', } The default track() export is best used as a @decorator() using the babel decorators plugin. The decorator can be used on React Classes and on methods within those classes. If you use it on methods within these classes, make sure to decorate the class as well. Note: In order to decorate class property methods within a class, as shown in the example below, you will need to enable loose mode in the babel class properties plugin. import React from 'react'; import track from 'react-tracking'; @track({ page: 'FooPage' }) export default class FooPage extends React.Component { @track({ action: 'click' }) handleClick = () => { // ... other stuff }; render() { return <button onClick={this.handleClick}>Click Me!</button>; } } You can also track events by importing track() and wrapping your stateless functional component, which will provide props.tracking.trackEvent() that you can call in your component like so: import track from 'react-tracking'; const FooPage = props => { return ( <div onClick={() => { props.tracking.trackEvent({ action: 'click' }); // ... other stuff }} /> ); }; export default track({ page: 'FooPage', })(FooPage); This is also how you would use this module without @decorator syntax, although this is obviously awkward and the decorator syntax is recommended. options.dispatch()for tracking data By default, data tracking objects are pushed to window.dataLayer[] (see src/dispatchTrackingEvent.js). This is a good default if you use Google Tag Manager. You can override this by passing in a dispatch function as a second parameter to the tracking decorator { dispatch: fn() } on some top-level component high up in your app (typically some root-level component that wraps your entire app). For example, to push objects to window.myCustomDataLayer[] instead, you would decorate your top-level <App /> component like this: import React, { Component } from 'react'; import track from 'react-tracking'; @track({}, { dispatch: data => window.myCustomDataLayer.push(data) }) export default class App extends Component { render() { return this.props.children; } } This can also be done in a functional component using the useTracking hook: import React from 'react'; import { useTracking } from 'react-tracking'; export default function App({ children }) { const { Track } = useTracking({}, { dispatch: data => window.myCustomDataLayer.push(data) }); return <Track>{children}</Track>; } NOTE: It is recommended to do this on some top-level component so that you only need to pass in the dispatch function once. Every child component from then on will use this dispatch function. options.dispatchOnMount You can pass in a second parameter to @track, options.dispatchOnMount. There are two valid types for this, as a boolean or as a function. 
The use of the two is explained in the next sections: options.dispatchOnMountas a boolean To dispatch tracking data when a component mounts, you can pass in { dispatchOnMount: true } as the second parameter to @track(). This is useful for dispatching tracking data on "Page" components, for example. @track({ page: 'FooPage' }, { dispatchOnMount: true }) class FooPage extends Component { ... } function FooPage() { useTracking({ page: 'FooPage' }, { dispatchOnMount: true }); } Will dispatch the following data (assuming no other tracking data in context from the rest of the app): { page: 'FooPage' } Of course, you could have achieved this same behavior by just decorating the componentDidMount() lifecycle event yourself, but this convenience is here in case the component you're working with would otherwise be a stateless functional component or does not need to define this lifecycle method. Note: this is only in effect when decorating a Class or stateless functional component. It is not necessary when decorating class methods since any invocations of those methods will immediately dispatch the tracking data, as expected. options.dispatchOnMountas a function If you pass in a function, the function will be called with all of the tracking data from the app's context when the component mounts. The return value of this function will be dispatched in componentDidMount(). The object returned from this function call will be merged with the context data and then dispatched. A use case for this would be that you want to provide extra tracking data without adding it to the context. @track({ page: 'FooPage' }, { dispatchOnMount: (contextData) => ({ event: 'pageDataReady' }) }) class FooPage extends Component { ... } function FooPage() { useTracking({ page: 'FooPage' }, { dispatchOnMount: (contextData) => ({ event: 'pageDataReady' }) }); } Will dispatch the following data (assuming no other tracking data in context from the rest of the app): { event: 'pageDataReady', page: 'FooPage' } options.process When there's a need to implicitly dispatch an event with some data for every component, you can define an options.process function. This function should be declared once, at some top-level component. It will get called with each component's tracking data as the only argument. The returned object from this function will be merged with all the tracking context data and dispatched in componentDidMount(). If a falsy value is returned ( false, null, undefined, ...), nothing will be dispatched. A common use case for this is to dispatch a pageview event for every component in the application that has a page property on its trackingData: @track({}, { process: (ownTrackingData) => ownTrackingData.page ? { event: 'pageview' } : null }) class App extends Component {...} ... @track({ page: 'Page1' }) class Page1 extends Component {...} @track({}) class Page2 extends Component {...} function App() { const { Track } = useTracking( {}, { process: (ownTrackingData) => ownTrackingData.page ? { event: 'pageview' } : null, } ); return ( <Track> <Page1 /> <Page2 /> </Track> ); } function Page1() { useTracking({ page: 'Page1' }); } function Page2() { useTracking({}); } When Page1 mounts, event with data {page: 'Page1', event: 'pageview'} will be dispatched. When Page2 mounts, nothing will be dispatched. Asynchronous methods (methods that return promises) can also be tracked when the method has resolved or rejects a promise. 
This is handled transparently, so simply decorating an asynchronous method the same way as a normal method will make the tracking call after the promise is resolved or rejected. // ... @track() async handleEvent() { return await asyncCall(); // returns a promise } // ... Or without async/await syntax: // ... @track() handleEvent() { return asyncCall(); // returns a promise } You can also pass a function as an argument instead of an object literal, which allows for some advanced usage scenarios such as when your tracking data is a function of some runtime values, like so: import React from 'react'; import track from 'react-tracking'; // In this case, the "page" tracking data // is a function of one of its props (isNew) @track(props => { return { page: props.isNew ? 'new' : 'existing' }; }) export default class FooButton extends React.Component { // In this case the tracking data depends on // some unknown (until runtime) value @track((props, state, [event]) => ({ action: 'click', label: event.currentTarget.title || event.currentTarget.textContent, })) handleClick = event => { if (this.props.onClick) { this.props.onClick(event); } }; render() { return <button onClick={this.handleClick}>{this.props.children}</button>; } } NOTE: That the above code utilizes some of the newer ES6 syntax. This is what it would look like in ES5: // ... @track(function(props, state, args) { const event = args[0]; return { action: 'click', label: event.currentTarget.title || event.currentTarget.textContent }; }) // ... When tracking asynchronous methods, you can also receive the resolved or rejected data from the returned promise in the fourth argument of the function passed in for tracking: // ... @track((props, state, methodArgs, [{ value }, err]) => { if (err) { // promise was rejected return { label: 'async action', status: 'error', value: err }; } return { label: 'async action', status: 'success', value // value is "test" }; }) handleAsyncAction(data) { // ... return Promise.resolve({ value: 'test' }); } // ... If the function returns a falsy value (e.g. false, null or undefined) then the tracking call will not be made. propsand state Further runtime data, such as the component's props and state, are available as follows: @track((props, state) => ({ action: state.following ? "unfollow clicked" : "follow clicked", name: props.name })) handleFollow = () => { this.setState({ following: !this.state.following }) } } props.tracking.getTrackingData()usage Any data that is passed to the decorator can be accessed in the decorated component via its props. The component that is decorated will be returned with a prop called tracking. The tracking prop is an object that has a getTrackingData() method on it. This method returns all of the contextual tracking data up until this point in the component hierarchy. 
import React from 'react'; import track from 'react-tracking'; // Pass a function to the decorator @track(() => { const randomId = Math.floor(Math.random() * 100); return { page_view_id: randomId, }; }) export default class AdComponent extends React.Component { render() { const { page_view_id } = this.props.tracking.getTrackingData(); return <Ad pageViewId={page_view_id} />; } } Note that if you want to do something like the above example using the useTracking hook, you will likely want to memoize the randomId value, since otherwise you will get a different value each time the component renders: import React, { useMemo } from 'react'; import { useTracking } from 'react-tracking'; export default function AdComponent() { const randomId = useMemo(() => Math.floor(Math.random() * 100), []); const { getTrackingData } = useTracking({ page_view_id: randomId }); const { page_view_id } = getTrackingData(); return <Ad pageViewId={page_view_id} />; } Note that there are no restrictions on the objects that are passed in to the decorator or hook. The format for the tracking data object is a contract between your app and the ultimate consumer of the tracking data. This library simply merges the tracking data objects together (as it flows through your app's React component hierarchy) into a single object that's ultimately sent to the tracking agent (such as Google Tag Manager). You can get the type definitions for React Tracking from DefinitelyTyped using @types/react-tracking. For an always up-to-date example of syntax, you should consult the react-tracking type tests. The props.tracking PropType is exported for use, if desired: import { TrackingPropType } from 'react-tracking'; Alternatively, if you want to just silence proptype errors when using eslint react/prop-types, you can add this to your eslintrc: { "rules": { "react/prop-types": ["error", { "ignore": ["tracking"] }] } }
https://awesomeopensource.com/project/nytimes/react-tracking
CC-MAIN-2021-31
en
refinedweb
The Code Style Guide For end-users, the most important parts of the software are functionality and UI/UX. But for developers, there is one more important aspect - code style. While ugly code can do everything that it has to do, developing it further may be a difficult task, especially if the developer didn't write the original code. Which one of the following do you prefer to read and work with? MyPath = '/file.txt' from pathlib import * import os.path,sys def check(p): """Uses os.path.exist """ return os.path.exists(p) def getF( p): """Not sure what this do, this just worked. """ return Path(p ) result=[check(MyPath),getF(MyPath)] or import os.path from pathlib import Path FILE_PATH = '/file.txt' def check_file_exists(path: str) -> bool: """Checks does file exists in path. Uses os.path.exists.""" return os.path.exists(path) def get_path_object(path: str) -> Path: """ Returns Path object of the path provided in arguments. This is here for backward compatibility, will be removed in the future. """ return Path(path) result = [ check_file_exists(FILE_PATH), get_path_object(FILE_PATH), ] The second is definitely easier to read and understand. These scripts are small and even with the first code snippet you can understand what the code does pretty quickly, but what if the project has thousands and thousands of files in a really complex folder structure? Do you want to work with code that looks like the first example? You can save hours sometimes if you write beautiful code that follows the style guidelines. The most important code style document for Python is PEP 8. This Python Enhancement Proposal lays out the majority of all Python code style guidelines. This article will cover the most important aspects of PEP 8. Linters But everyone makes mistakes and there are so many style rules that can be really difficult to remember and always follow. Luckily, we have amazing tools that help us - linters. While there are many linters, we'd like code jam participants to use flake8. Flake8 points out to you rules what you did break in your code so you can fix them. Guidelines Basics For indentation, you should use 4 spaces. Using tabs is not suggested, but if you do, you can't mix spaces and tabs. PEP 8 defines a maximum line length of 79 characters, however, we are not so strict - teams are welcome to choose a maximum line length between 79 and 119 characters. 2 blank lines should be left before functions and classes. Single blank lines are used to split sections and make logical breaks. Naming Module, file, function, and variable names (except type variables) should be lowercase and use underscores. # File: my_module.py/mymodule.py def my_function(): my_variable = "value" Class and type variable names should use the PascalCase style. from typing import List class MyClass: pass ListOfMyClass = List[MyClass] Constant names should use the SCREAMING_SNAKE_CASE style. MY_CONSTANT = 1 You should avoid single-character names, as these might be confusing. But if you still do, you should avoid characters that may look like zero or one in some fonts: "O" (uppercase o), "l" (lowercase L), and "I" (uppercase i). Operators If you have a chain of mathematic operations that you split into multiple lines, you should put the operator at the beginning of the line and not the end of the line. # No result = ( 1 + 2 * 3 ) # Yes result = ( 1 + 2 * 3 ) If you ever check if something is equivalent to None, you should use is and is not instead of the == operator. 
# No if variable == None: print("Variable is None") # Yes if variable is None: print("Variable is None") You should prefer using <item one> is not <item two> over not <item one> is <item two>. Using the latter makes it harder to understand what the expression is trying to do. # No if not variable is None: print("Variable is not None") # Yes - it is much easier to read and understand this than previous if variable is not None: print("Variable is not None") Imports Imports should be at top of the file, the only things that should be before them are module comments and docstrings. You shouldn't import multiple modules in one line, but give each module import its own line instead. # No import pathlib, os # Yes import os import pathlib Wildcard imports should be avoided in most cases. It clutters the namespace and makes it less clear where functions or classes are coming from. # No from pathlib import * # Yes from pathlib import Path You should use isort imports order specification, which means: - Group by type: order of import types should be: __future__imports, standard library imports, third-party library imports, and finally project imports. - Group by import method: inside each group, first should come imports in format import <package>and after them from <package> import <items>. - Order imports alphabetically: inside each import method group, imports should be ordered by package names. - Order individual import items by type and alphabetically: in from <package> import <items>format, <items>should be ordered alphabetically, starting with bare module imports. Comments are really important because they help everyone understand what code does. In general, comments should explain why you are doing something if it's not obvious. You should aim to write code that makes it obvious what it is doing and you can use the comments to explain why and provide some context. Keep in mind that just as important as having comments, is making sure they stay up to date. Out-of-date and incorrect comments confuse readers of your code (including future you). Comments content should start with a capital letter and be a full sentence(s). There are three types of comments: block comments, inline comments, and docstrings. Block comments Probably most common comment type. Should be indented to the same level as the code they describe. Each line in the block comment has to start with #and should be followed by a single space. To separate paragraphs, use one line containing only #. if variable is None or variable == 1: # If variable is None, something went wrong previously. # # Here starts a new important paragraph. Inline comments You should prefer block comments over inline comments and use inline comments only where it is really necessary. Never use inline comments to explain obvious things like what a line does. If you want to use an inline comment on a variable, think first, maybe you can use a better variable name instead. After code and before the start of inline comments should be at least two spaces. Just like block comments, inline comments also have to start with #followed by a single space. # Do not use inline comments to explain things # that the reader can understand even without the inline comment. my_variable = "Value!" # Assign value to my_variable # Here better variable name can be used like shown in the second line. x = "Walmart" # Shop name shop_name = "Walmart" # Sometimes, if something is not obvious, then inline comments are useful. # Example is from PEP 8. 
x = x + 1 # Compensate for border Docstrings Last, but not least important comment type is docstring, which is a short version of documentation string. Docstring rules haven't been defined by PEP 8, but by PEP 257 instead. Docstrings should start and end with three quotes ("""). There are two types of docstrings: one-line docstrings and multiline docstrings. One-line docstrings have to start and end in the same line, while multiline docstrings start and end in different lines. Multiline docstring has two parts: summary line and a longer description, which are separated by one empty line. The multiline docstring start and end quotes should be on different lines than the content. # This is a one-line docstring. """This is one line module docstring.""" # This is a multiline docstring. def my_function(): """ This is the summary line. This is the description. """ Too much for you? Do all these style rules make your head explode? We have something for you! We have a song! We have The PEP 8 Song (featuring lemonsaurus)! Great way to get started with writing beautiful code.
https://pythondiscord.com/events/code-jams/code-style-guide/
CC-MAIN-2021-31
en
refinedweb
sqlAlchemy has sessionmaker to create a session from which you can use query to get whatever you need. For example: someSession.query(SomeTableRepresentationObject).filter...ect If you’ve used sqlalchemy, nothing new going on here but it wouldn’t be me if I didn’t point out the obvious and take three sentences to do so. Now what you may run into is something like this: def getCountOfUsersByUserName(userName, session): return session.query(User).filter(User.userName == userName).count() Ok now that I typed that out, I see it’s kind of a dumb method. But f–k it, what’s done is done. Now from a testing situation, this could look tough. After all if you come from a more static language background like I did, you should know how hard things can be to mock at times. (.Net Session… I’m looking at you with judging eyes. Sooooo judging.) But this isn’t a static language, it’s Python. For purposes of making things less confusing, which is hard for me, when I use the word “Object” I mean method OR class. And remember as always, “banana” is the safe word. You see, to mock that out all you need is an object that has a query object that takes in something and a filter object on the query object that takes in something and has a count method on it. Yeah that totally makes sense. Try working it backward. Filter has one parameter and a count method on it. Whatever that parameter is, it doesn’t matter since it is Python. As long as Filter takes in one parameter, you have a winner. Query, like Filter, takes in one parameter and has a object named Filter on it. Once again, it doesn’t matter what’s passed into Query, just that it takes something in. Session is basically an object that has no parameters and has a Query object on it. Now in a static language, this would be annoying. You would need 3 interfaces and mock a whole ton of stuff dynamically with something like Rhino Mocks or just create a bunch of classes of those interfaces that you can pass in. Either way, there are complications. Once you get into things like Web Session or ViewState, it’s a ton of work. Python? Eh, you can do it in 20ish lines. How you ask? Well I’ll show you how! class mockFilter(object): #This is the ONE parameter constructor def __init__(self): self._count = 0 self._first = dynamicObject() def first(self): #This is the another method that's just coming along for the ride. return self._first def count(self): #This is the needed Count method return self._count class mockQuery(object): #This is the ONE parameter constructor def __init__(self): self._filter = mockFilter() def filter(self, placeHolder): #This is used to mimic the query.filter() call return self._filter class mockSession(object): def __init__(self): self._query = mockQuery() self.dirty = [] def flush(self): pass def query(self, placeHolder): #This is used to mimic the session.query call return self._query #and this... THIS IS SPARTA!!1111... yeah I know, I'm about 3 years too late on that joke. How does this work? Say I have the method from above: def getCountOfUsersByUserName(userName, session): return session.query(User).filter(User.userName == userName).count() I could test it using this: session = mockSession() session.query('').filter('')._count = 0 #Initialize the mock session so it returns 0 from count() getCountOfUsersByUserName('sadas', session) And boom, you have mocking in ten minutes or less or your code is free. As you can see, the highly dynamic nature of Python makes it a great fit for any project that will need unit tests and mocking. 
And which project doesn’t? You know what I’m sayin’? You now what I’m saying’? High five! Note: dynamicObject is …. nothing but a cheesy class that inherits Object but has nothing on it. (Turns out that if I did someObject = Object() I couldn’t do this since Object by default doesn’t contain the ability to add things dynamically… and this was by design.) And yes, I just quoted myself. I am that awesome.
https://byatool.com/uncategorized/mock-sqlalchemy-scoped_session-query-and-why-python-is-my-bff/
CC-MAIN-2021-31
en
refinedweb
Warning: You are browsing the documentation for Symfony 2.5, recipe assumes you need a field definition that holds a person’s gender, based on the existing choice field. This section explains how the field is defined, how you can customize its layout and finally, how you can register it for use in your application. Defining the Field Type¶ In order to create the custom field type, first you have to create the class representing the field. In this situation the class holding the field type will be called GenderType and the file will be stored in the default location for form fields, which is <BundleName>\Form\Type. Make sure the field extends Symfony\Component\Form\AbstractType: // src/AppBundle/Form/Type/GenderType.php namespace AppBundle\Form\Type; use Symfony\Component\Form\AbstractType; use Symfony\Component\OptionsResolver\OptionsResolverInterface; class GenderType extends AbstractType { public function setDefaultOptions(OptionsResolverInterface $resolver) { $resolver->setDefaults(array( 'choices' => array( 'm' => 'Male', 'f' => 'Female', ) )); } public function getParent() { return 'choice'; } public function getName() { return 'gender'; } } Tip The location of this file is not important - the Form\Type directory is just a convention. Here, the return value of the getParent function indicates that you’re extending the,. setDefault., simply by creating a new instance of the type in one of your forms: // src/AppBundle/Form/Type/AuthorType.php namespace AppBundle\Form\Type; use Symfony\Component\Form\AbstractType; use Symfony\Component\Form\FormBuilderInterface; class AuthorType extends AbstractType { public function buildForm(FormBuilderInterface $builder, array $options) { $builder->add('gender_code', new GenderType(), array( 'empty_value' => 'Choose a gender', )); } }: // src/AppBundle/Form/Type/GenderType.php namespace AppBundle\Form\Type; use Symfony\Component\OptionsResolver\OptionsResolverInterface; // ... // ... class GenderType extends AbstractType { private $genderChoices; public function __construct(array $genderChoices) { $this->genderChoices = $genderChoices; } public function setDefaultOptions(OptionsResolverInterface $resolver) { $resolver->setDefaults(array( 'choices' => $this->genderChoices, )); } // ... } Great! The GenderType is now fueled by the configuration parameters and registered as a service. Additionally, because you used the form.type alias in its configuration, using the field is now much easier: // src/AppBundle/Form/Type/AuthorType.php namespace AppBundle\Form\Type; use Symfony\Component\Form\FormBuilderInterface; // ... class AuthorType extends AbstractType { public function buildForm(FormBuilderInterface $builder, array $options) { $builder->add('gender_code', 'gender', array( 'empty_value' => 'Choose a gender', )); } } Notice that instead of instantiating a new instance, you can just refer to it by the alias used in your service configuration, gender. Have fun! This work, including the code samples, is licensed under a Creative Commons BY-SA 3.0 license.
https://symfony.com/doc/2.5/cookbook/form/create_custom_field_type.html
CC-MAIN-2021-31
en
refinedweb
Built-in filters - Scripted workflow filter (WorkflowFilter) Introduction The scripted workflow filter allows conditions and actions that can be executed during content filtering to be defined. Configuring the scripted workflow filter Enabling Edit the filter.classesparameter in your collectino configuration and add the following string to the end com.funnelback.common.filter.WorkflowFilter. Example filter.classes=TikaFilterProvider,ExternalFilterProvider:DocumentFixerFilterProvider:com.funnelback.common.filter.WorkflowFilter Create a workflow.cfgfile using the file manager. This file will contain the conditions and actions you wish to define. Configuring scripted workflow rules The workflow.cfg contains Groovy code consisting of a number of if statements that perform a specified action. Syntax The syntax for each workflow command is as follows: if (<CONDITION>) { ACTION } Statements can be nested if (<CONDITION1>) { if <CONDITION2> { ACTION } } Conditions can be combined using and and or commands: if ((<CONDITION1>).and(<CONDITION2>)) { ACTION1 } if ((<CONDITION3>).or(<CONDITION4>)) { ACTION2 } Variables can be defined using the def keyword. groovy def pubs = urlContains("publications"); if (publications == true) { ACTION } Examples This section gives some examples of the script language that might be put in the workflow.cfg file. if ((contentContains("(?i)ovum")).or(contentContains("Gartner"))) { if (urlContains("analyst-reviews")) { insertMetaTag("robots", "noindex"); } } In the example above the content must contain either Ovum or Gartner and the URL must contain analyst-reviews. The (?i) syntax means to use a case-insensitive match. If these conditions are met then a robots noindex meta tag will be inserted into the content, meaning that the document will not be indexed. // Example of extraction of content for re-insertion if ((urlContains("funnelback")).and(urlDoesNotStartWith("test")).and(contentContains("\\w+")).and(urlEndsWith(".pdf"))) { def matched = getMatchingContent("original(.*?)text"); replaceContent "original(.*?)text", "replaced text: middle was [" + matched + "]" } In this second example we are extracting content for re-insertion. The def keyword is used to define a variable in the scripting language we use (Groovy). // Example of title replacement if ((urlContains("amazon")).or(urlDoesNotStartWith("test"))) { replaceContent "<title>(.*?) </title>", " <title>New Title </title>" } Here we are inserting a new title into the content using the replaceContent action, which takes a regular expression to match with and then some replacement text. // Example of extracting content and inserting into metadata if (urlEndsWith(".pdf")) { def matched = getMatchingContent("middle(.*?)content"); if (matched != "") { insertMetaTag("my_meta_data", matched); } } In this last example we extract some matching content and insert it as meta data. It will be inserted into the "…" section of the document if it has one, or after the opening tag otherwise.
https://docs.squiz.net/funnelback/docs/latest/build/data-sources/document-filtering/builtin-filters-workflow.html
CC-MAIN-2021-31
en
refinedweb
A class in java that can be created using ‘new’ keyword is called a concrete class in java. It is also known as the complete blueprint of its own self and can be instantiated. This class has the implementation of all the methods in it. Hence, this class can never contain any unimplemented methods. A concrete class can always extend an abstract class. Difference between Abstract and Concrete class - An abstract class can never be directly instantiated whereas a concrete class can be instantiated. We can also instantiate an abstract class using concrete class. - A concrete class implements all the abstract methods of an abstract parent class. - We declare an abstract class using an abstract modifier. Whereas, we can instantiate a concrete class using new keyword. If we use abstract keyword in the concrete class, it will become an abstract class only. - An abstract class is impossible without abstract methods. Whereas, a concrete class can never have abstract methods. - We cannot declare an abstract class as a final class. However, we can declare a concrete class as a final class. Below is how we can directly instantiate a concrete class using new keyword in Java. abstract class DeveloperHelps { abstract void display(); } class ConcreteClass extends DeveloperHelps { void display() { System.out.println("My name is Megha"); } } public class test { public static void main(String[] args) { ConcreteClass x = new ConcreteClass(); x.display(); System.out.println("Welcome to Developer Helps"); } } The output of this java program will be: My name is Megha Welcome to Developer Helps Another example will helps you in better understanding of difference between an abstract and concrete class public class DeveloperHelps { public static void main(String args[]) { fruit mango = new mango(); mango.eat(); } } abstract class fruit { abstract public void eat(); } class mango extends fruit{ public void eat(){ System.out.println("Mango is a fruit"); } } The output of this java program will be: Mango is a fruit Summary: - A class which is not an abstract class is always a concrete class. - A concrete class can extend abstract class. - An abstract class can never extend a concrete class. - If a concrete class will have abstract methods, it will be one abstract class only.
https://www.developerhelps.com/concrete-class-in-java/
CC-MAIN-2021-31
en
refinedweb
Swap two numbers using bitwise operator There are a number of methods to swap two numbers using Java. Today we will focus on swapping those numbers using a bitwise operator in java. In computers, arithmetic calculations such as addition, division, subtraction, multiplication etc are done at a bit level. To perform these calculations, we have bitwise operators such as OR(|), AND(&), XOR(^), complement(~), shift right (>>), shift left(<<) etc. Bitwise is an XOR operator that compares bits of two numbers. If the numbers are equal, it returns the output as 1. If the numbers are not equal, it returns the output as 0. For example, if we take the binary output of say number 1 which is: 00000101 and binary output of number 5 is: 00001010. As we compare the outputs of these 2 binary numbers, we can conclude that they are not the same on comparison. Hence, the output for the above numbers will be 0. The other methods for swapping two numbers are by using the temp variable, using an array, using the third variable with the arithmetic operator, using temp with multiplication and division etc. Swapping in a process in which the values of two integer numbers are exchanged with each other. Below is a program to understand how we can swap two numbers using Java. For this, we will use the nextInt() method of the Scanner class. Swap two numbers using bitwise operator import java.util.Scanner; public class DeveloperHelps { public static void main(String args[]) { int num1, num2; Scanner scanner = new Scanner(System.in); System.out.print("Enter first number:"); num1 = scanner.nextInt(); System.out.print("Enter second number:"); num2 = scanner.nextInt(); num1 = num1 ^ num2; num2 = num1 ^ num2; num1 = num1 ^ num2; scanner.close(); System.out.println("First number after swapping is:"+num1); System.out.println("Second number after swapping is:"+num2); } } The output of the above program for swapping two number using bitwise operator in java will be: Enter first number: 160 Enter second number: 260 First number after swapping is: 260 Second number after swapping is: 160 Here we have used nextInt() method of Scanner class to swap two numbers in Java. We use scanner class to breakdown the input into small token values.To know more about scanner class, click here.
https://www.developerhelps.com/swap-two-numbers-using-bitwise-operator/
CC-MAIN-2021-31
en
refinedweb
Her-)> ...-------------------- Start of forwarded message --------------------Date: Thu, 29 May 2003 09:21:27 +0100From: viro@parcelfarce.linux.theplanet.co.ukTo: Shaya Potter <spotter@cs.columbia.edu>Cc: linux-fsdevel@vger.kernel.orgSubject: [long] per-mountpoint readonlyMessage-ID: <20030529082127.GL14138@parcelfarce.linux.theplanet.co.uk>On Thu, May 29, 2003 at 12:20:23AM -0400, Shaya Potter wrote:> On Wed, 2003-05-28 at 20:55, viro@parcelfarce.linux.theplanet.co.uk> wrote:> > So give it a _REAL_ namespace. And bind /dev/null over /bin/sh in it.> > And revoke the capability to call mount(2) and umount(2). End of story.> > The issue w/ the fs namespaces the kernel supports (at least AFAI> understand them) is that you can't change the underlying permission> model.> > so for example, an "apache namespace" you might want it to only have> read only access to everything besides a few files in /var/log. Can one> do that w/ namespaces?Now, _that_ would be a good thing to implement. To do it the right waywe need to make readonly status per-mountpoint. Note that we already havethat for e.g. nosuid and friends. If we get that done for readonlyyou can * in your namespace remount everything readonly * mount --recbind /var/log /var/log mount -o remount,rw /var/logand enjoy.The first question is obviously "could it be done at all?"Situation with IS_RDONLY() is interesting. First of all, there areuses of that beast in ->permission() instances. We definitely can(and should) take them upstream. No matter how hard somebody mightwish to allow write access to e.g. regular file, if fs is readonly,it's readonly. Period. No ACLs can override it.IOW, that check (IS_RDONLY and (regular|directory|symlink) ==> EROFS)should be moved into fs/namei.c::permission(). Ditto for checks inxattr methods - they also go upstream.ioctls (ext2, ext3, reiserfs) have struct file * and thus are nota problem.fs/open.c callers also have struct file * or struct nameidata. Not aproblem.fat_truncate() - actually, both checks there are bogus; callers docheck for that stuff. Should be removed.ncpfs uses should be replaced with update_atime() (and that will bea huge can of worms). We have struct file * in them, anyway.ntfs uses. These are actually about the state of filesystem itself -they attempt to fill the missing permission bits.arch/sparc64/solaris/fs.c: there we have vfsmount.fs/namei.c::may_open(). After we make sure that permission() _always_honors IS_RDONLY() (i.e. move the checks into beginning of the thing),that particular check is not needed anymore.vfs_permission() - check moves to permission(). NB: we need to verifythat it's only called by permission() and instances of ->permission().nfsd_permission() - there we have the export, ergo the vfsmount.presto_settime() - the check is bogus, we only call it from ->create(),->unlink(), etc. and the upper layer logics takes care of these checks.presto_do_setattr() - fsck knows, ask Peter. I suspect that callersshould do the check, if they are not doing it already.ext3_change_inode_journal_flag() - called only from ioctl.And that leaves us with two big ones. Namely, permission() and update_atime().The above will take quite a few accurate steps, but it's doable and wouldmake sense regardless of the following. And yes, there's a lot of work onaccurate verification of correctness that I'd skipped - it has to be doneand there might be nasty surprises. 
However, assuming that it's done,we are at the interesting point.The thing is, we have no hope in hell to propagate vfsmount into all callsof permission(). _HOWEVER_, we might not need to do that. Note that partpotentially interested in vfsmount is very small and specific. Again,we are talking about want write and is readonly and (file or directory or symlink).Right now it needs only inode, but we want to make "is readonly" a functionof vfsmount. Let's start with taking that check out of permission() andinto its callers. Note that callers in floppy_open() (the worst from the "where dowe get vfsmount/dentry" POV) do not give a damn - there the check isalways false, since we are talking about block device. So they arenot a problem. Places where we got the inode from struct file * or struct nameidataare also not a problem - there we have everything we might possibly ask for.That kills a lot of permission() instances. We also do not care for cases when we don't ask for write access(obviously - there the check can be dropped). Call in nfsd_permission() is not a problem, for the same reasonsas with explicit IS_RDONLY() there. We have export, ergo we have a vfsmount. Call in presto_permission() is an utter bullshit - it's a hack ofrather bad taste and should have been replaced with call of vfs_permission()(I mean, just look at it - Peter, please, LART whoever had done that). Checks that come out of calls in ext2 and ext3 xattr stuff can bedropped - after the changes above we would have them already done in callers. Now we are down to 8 functionsmay_create(), may_delete(), vfs_rename_dir()their intermezzo copieshpfs_unlink() and update_atime().Let's take a look at update_atime(). We want to propagate struct vfsmountand struct dentry into that guy. It's worth the effort for a lot of reasons,not the least of them being that we will get per-mountpoint noatime andnodiratime out that - immediately.All callers that have inode obtained by struct file * are trivial. We havewhat we need there. Now, a large group actually comes from the same kindof place - it's ->readdir() in the majority of filesystems. We might bebetter off just taking that to caller of ->readdir() - provided that NFSfolks are OK with that (nfs does _not_ update atime on client afterreaddir()). In any case, we have struct file * there anyway.nfsd_readlink() doesn't have struct file *, but it has export. Same as above.sys_readlink() has nameidata.open_namei() and do_follow_link() have pointers to relevant vfsmount (shouldbe done accurately - it's easy to get confused there).autofs4 use - AFAICS there we want atime updated unconditionally, so callingupdate_atime() (update atime after checking noatime/nodiratime/readonly flags)is wrong.That's it with update_atime() and by that time we get visible result - abilityto turn noatime and nodiratime on and off on a subtree basis.Now, what's left is hpfs_unlink(), may_create()/may_delete()/vfs_rename_dir()and intermezzo analogs of these 3. Let's see.Check in hpfs_unlink() can be dropped. It should call permission(), allright, but check for filesystem being writable for us is already done, sowe can skip it.Situation with may_create()/may_delete() is more interesting. They arecalled from vfs_{create,link,unlink,....} and there we do not have avfsmount. Let's start with taking the check ("fs is writable for us")from may_...() into these functions. They are _always_ done on a singlefs, so even if we get several checks, they'll combine nicely. 
Moreover,in all cases except vfs_rename() that check can be done as the very firstoperation - if it fails we shoudl return -EROFS and do nothing.vfs_rename() is almost OK - there we have a POSIX-mandated idiocy: if (old_dentry->d_inode == new_dentry->d_inode) return 0;in the very beginning. The check can go immediately after that, butwe can't move it in the very beginning. IOW, there is a case whenrename() on a read-only filesystem is required to succeed - whensource and target are links to the same inode. In that case POSIXrequires to return 0 and do nothing.Check that had migrated into vfs_rename_dir() can be dropped - wedo the same check in its caller earlier (it's called from vfs_rename()).BTW, why the fsck vfs_rename_dir() is not static? We don't use itanywhere else in the tree and it's not exported, so...Now, we are left with that merry bunch - vfs_<operation>. Let's takethe POSIX idiocy to the callers of vfs_rename() and let's move thecheck for fs being writable to us into callers of all these guys._NOW_ we are done. Indeed, there are two groups of callers (syscallsand nfsd) and both know what we are dealing with. The former havevfsmount, the latter have export.Note that passing vfsmount into vfs_...() would be a Wrong Thing(tm) -they are deliberately fs-local. I.e. they don't care where (and if)fs is mounted - they operate below the mount tree level.Intermezzo analogs are analogous ;-) We can deal with them in thesame manner.OK, so far we have shown that per-mountpoint ro/noatime/nodiratime aredoable. However, the solution as described above is *NOT* going tobe accepted - it sprinkles a lot of crap all over the tree for novisible reason. Let's take a look at the resulting picture and seehow it can be cleaned up.What had actually changed? We used to have checks for "fs is writable"on a fairly low level. The end result has their analogs on earlierstages - at the point where we have decided that operation will happenon *this* part of mount tree.That has an obvious functionality problem - what happens if we decide thatfs is writable and remount it read-only between the check and actual work?*However*, this is not a new issue. Current code has exact same problem.We need to deal with it anyway.Basically, what's wrong with both current and modified trees is thatwe ask "can I write to that sucker now?" and consider the positiveanswer as go-ahead.Theoretically, it could even work - if remount would block until thetransient operation is done or say just say "busy" in all cases whensomething is going on. We even do some checks of that sort. However,they deal only with long-term stuff - files being opened for write orunlinked files kept alive by being open. Transient stuff is ignored,so we can very well get fs remounted r/o right after e.g. mkdir()does all checks and decides to go ahead.How to fix that? Well, the obvious way would be to bracket transientstuff with WILL WRITE/WONT WRITE. With filesystem being able to sayDO WRITE/DONT WRITE, obviously ;-)But look, we have already done half of that - we have the openingbrackets. And that's exactly what our shifted checks are - we areasking for write access to filesystem, we call a function that assumessuch access and once it returns we are done with the thing. IOW,adding the closing brackets is easy now.And that's what we were missing in the above. Now we can get sanesemantics for all that stuff. Why would filesystem be read-onlyor read-write? Well, (1) it can be read-only because it refuses to be read-write, period.E.g. 
no matter what you do, a filesystem on a CD will *not* be writable. Ever. Ditto for a remote filesystem that is not exported read-write. Ditto for a filesystem that doesn't know how to be writable.

(2) it can be read-only because nobody asked it to be read-write.

(3) it can be read-only because the sysadmin *told* it to stop being writable.

What happens to these cases if we want per-mountpoint writability?

(1) doesn't change at all
(2) becomes "none of the instances are asked to be writable"
(3) becomes independent from (2)

IOW, we need to distinguish between "make that instance read-only, stop allowing writes to come via it" and "make the fs readonly if it's not busy writing stuff; do _not_ consider lusers wanting it to be writable an obstacle, they are welcome to go forth and procreate".

And with that distinction we get very nice semantics, indeed. Namely,
 * vfsmounts have a "Don't write through me" flag.
 * the superblock has a counter for transient write accesses (->s_writes).
 * the superblock has a "Sod off, I'm readonly" flag (->s_baldrick).

Requesting write access checks both flags and bumps ->s_writes. Saying that the write is over decrements ->s_writes.

Requesting a global readonly remount does the current checks (for non-transient writes), checks ->s_writes and if it's 0 - sets ->s_baldrick and initiates syncing.

Asking to remount read-write asks the filesystem if it would agree to reset the aforementioned flag, and resets the vfsmount flag if the fs does agree.

That's it. Now, we might actually go further and try to eliminate the current mess in the check for non-transient write access. It's not even that hard - there are only two kinds of non-transients.

1) a file can be opened for write. We have asked for write access in may_open(); so let's not relinquish it until the final fput().

2) a file can be unlinked but kept open. Let the filesystem bump ->s_writes when ->i_nlink reaches 0 and have ->delete_inode() decrement it when it's done.

That will make the current mess unnecessary - we can just check ->s_writes and it will tell us if the thing is busy.

Moreover, with that scheme we could even play with "if it hadn't been asked for write access in a while, tell it to go readonly but be ready to become read-write as soon as the next request for write access is issued" - the policy would be in userland and the kernel-side part wouldn't be hard. One such scheme would keep a flag that could be reset by userland if ->s_writes was currently 0 and set whenever we grant write access; an attempt to reset it when it's already reset would do a "soft" remount - it would tell the fs driver to do the usual steps of an r/o remount, but switch back to r/w when write access is requested again. The userland process would simply try to reset the flag periodically.

There's a lot of possible variations - basically, with that stuff done we get the remount logic into sane shape and that allows a lot of interesting things. *Besides* the immediate results (per-subtree read-only).

So there... It certainly appears to be doable, it doesn't require too nasty surgery of in-kernel APIs and it's splittable into reasonably small steps that make sense on their own. If somebody would start doing that they'd have to do (accurately) verification of correctness on each step - a thing that is missing in the above and can require dealing with problems I'd overlooked.
FWIW, my gut feeling from looking at that stuff is that there won't be anything too nasty, but that needs to be checked if somebody will start that work - the stuff above is a quick and dirty attempt at a roadmap, but there might be pits with something unpleasant.

I would estimate that as 3-4 months' work / 4-6 months on gradual merge (started in parallel) / several months of playing with the results after the semantics of remount will be sanitized. YMMV.
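To make the ->s_writes / ->s_baldrick bracketing described above concrete, here is a rough illustrative sketch in C. Only the two field names come from the text above; the function names, structure layouts and the absence of locking are simplifications invented for the example, not actual kernel code.

#include <errno.h>

/* Illustrative only: s_writes and s_baldrick come from the proposal above;
 * everything else is made up for the sake of the sketch. */
struct sb_sketch {
	int s_writes;    /* transient write accesses currently in flight */
	int s_baldrick;  /* "Sod off, I'm readonly" */
};

struct mnt_sketch {
	int mnt_readonly;        /* "Don't write through me" */
	struct sb_sketch *sb;
};

/* opening bracket: request transient write access via this mount */
static int want_write(struct mnt_sketch *mnt)
{
	if (mnt->mnt_readonly || mnt->sb->s_baldrick)
		return -EROFS;
	mnt->sb->s_writes++;     /* real code would need locking/atomics */
	return 0;
}

/* closing bracket: the write is over */
static void drop_write(struct mnt_sketch *mnt)
{
	mnt->sb->s_writes--;
}

/* global read-only remount: refuse while transient writes are in flight */
static int remount_ro(struct sb_sketch *sb)
{
	if (sb->s_writes != 0)
		return -EBUSY;
	sb->s_baldrick = 1;
	/* ...initiate syncing of the filesystem here... */
	return 0;
}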
https://lkml.org/lkml/2003/8/4/253
CC-MAIN-2021-31
en
refinedweb
In the earlier versions of .NET framework, writing code to perform asynchronous IO operations was not possible and hence the IO operations had to be synchronous. The problems that the developers were encountering with the synchronous approach were: 1. Unresponsiveness of UI - if the application is a thick client and had to perform file IO operations based on the user actions. 2. Performance issue - In case of back ground process, where it has to process large files. In .NET Framework 4.0 asynchronous IO provisions were given for classes like StreamReader, StreamWriter, etc. through the methods BeginRead, BeginWrite, etc., involving callbacks. Though it provided a way to write asynchronous code there was yet another drawback--the code complexity! In .NET Framework 4.5 the IO classes are packed with new Async methods using await and async keywords, which can be used to write straight-forward and clean asynchronous IO code. Below are the advantages of using these new async IO methods. 1. Responsive UI - In Windows apps, the user will be able to perform other operations while the IO operation is in progress. 2. Optimized performance due to concurrent work. 3. Less complexity - as simple as synchronous code. In this article we look at a few examples of async IO operations in .NET Framework 4.5. StreamReader and StreamWriter StreamReader and StreamWriter are the widely used file IO classes in order to process flat files (text, csv, etc). The 4.5 version of .NET Framework provides many async methods in these classes. Below are some of them. 1.ReadToEndAsync 2.ReadAsync 3.ReadLineAsync 4.FlushAsync - Reader 5.WriteAsync 6.WriteLineAsync 7.FlushAsync - Writer The code below reads the content from a given list of files asynchronously. namespace AsyncIOSamples { class Program { static void Main(string[] args) { List<string> fileList = new List<string>() { "DataFlatFile1.txt", "DataFlatFile2.txt" }; foreach (var file in fileList) { ReadFileAsync(file); } Console.ReadLine(); } private static async void ReadFileAsync(string file) { using (StreamReader reader = new StreamReader(file)) { //Does not block the main thread string content = await reader.ReadToEndAsync(); //Gets called after the async call is done. Console.WriteLine(content); } } } } Now let us try with the ReadLineAsync and read the content from a single file asynchronously. namespace AsyncIOSamples { class Program { static void Main(string[] args) { ReadFileLineByLineAsync("DataFlatFile1.txt"); Console.WriteLine("Continue with some other process!"); Console.ReadLine(); } private static async void ReadFileLineByLineAsync(string file) { using (StreamReader reader = new StreamReader(file)) { string line; while (!String.IsNullOrEmpty(line = await reader.ReadLineAsync())) { Console.WriteLine(line); } } } } } In these examples the main point to note is that these asynchronous operations do not block the main thread and are able to utilize the concurrency factor. A similar example holds good for StreamWriter as well. Here is the sample code, which reads the content from a list of files and writes it to the output files without blocking the main thread execution. 
namespace AsyncIOSamples { class Program { static void Main(string[] args) { ProcessFilesAsync(); //Main thread is not blocked during the read/write operations in the above method Console.WriteLine("Do something else in the main thread mean while!!!"); Console.ReadLine(); } private static async Task ProcessFilesAsync() { List<string> fileList = new List<string>() { "DataFlatFile1.txt", "DataFlatFile2.txt" }; foreach (var fileName in fileList) { string content = await ReadFileAsync(fileName); WriteFileAsync(content, "Output" + fileName); } } private static async void WriteFileAsync(string content, string outputFileName) { using (StreamWriter writer = new StreamWriter(outputFileName)) { await writer.WriteAsync(content); } } private static async Task<string> ReadFileAsync(string fileName) { using (StreamReader reader = new StreamReader(fileName)) { return await reader.ReadToEndAsync(); } } } } WebClient This class is used for data request operations over protocols like HTTP, FTP, etc. This class is also bundled with a bunch of Async methods like DownloadStringTaskAsync, DownloadDataTaskAsync and more. It doesn't end here but extends to classes like XmlReader, TextReader and many more. I will leave it to the readers to explore them. Happy reading!
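The WebClient async methods mentioned above follow the same await pattern as the stream examples. A small sketch (not from the article; the URL is only a placeholder):

using System;
using System.Net;
using System.Threading.Tasks;

class WebClientAsyncSample
{
    static void Main()
    {
        Task download = DownloadPageAsync("http://example.com/");
        Console.WriteLine("Main thread keeps working while the download runs.");
        download.Wait();   // block only at the very end of this console sample
    }

    private static async Task DownloadPageAsync(string url)
    {
        using (WebClient client = new WebClient())
        {
            string content = await client.DownloadStringTaskAsync(url);
            Console.WriteLine("Downloaded {0} characters.", content.Length);
        }
    }
}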
https://mobile.codeguru.com/csharp/.net/net_framework/supporting-asynchronous-io-operations-with-.net-framework-4.5.htm
CC-MAIN-2021-31
en
refinedweb
hey guys! I'm having some difficulties working with strings (C-style and object-oriented strings). As this is something new for me, I'm not exactly familiar with the string functions either. Right now I have a question for which I was making a solution, but the code is missing something - please help me out.

Question statement: "Write a program that reads a whole paragraph (you can do that with a little common sense) from the user. Now prompt the user to enter a word to be searched. Your program should read the search word and search for all the occurrences of that word in the entered text. You need to print the total number of occurrences of this word in the entered text."

My code:

#include<iostream>
#include<conio.h>
#include<string>
using namespace std;
int main()
{
    string my_str;
    cout<<"Please enter your paragraph.."<<endl;
    getline (cin, my_str);
    cout<<endl<<endl;
    cout<<"please enter a word to be searched..!!"<<endl;
    char x;
    cin>>x;
    my_str.find("x");
    cout<<endl;
    cout<<"The total number of"<< x <<"is: "<<x;
    getch();
    return 0;
}
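A minimal sketch of one way to do the counting (not the poster's code, just an illustration; it treats the search term as a plain substring, so searching for "cat" would also match "category"):

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string paragraph, word;

    cout << "Please enter your paragraph: ";
    getline(cin, paragraph);

    cout << "Please enter a word to be searched: ";
    cin >> word;

    int count = 0;
    string::size_type pos = paragraph.find(word);
    while (pos != string::npos)
    {
        ++count;
        pos = paragraph.find(word, pos + word.length());
    }

    cout << "The total number of occurrences of \"" << word
         << "\" is: " << count << endl;
    return 0;
}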
https://www.daniweb.com/programming/software-development/threads/494420/finding-total-occurrences-of-any-word-in-strings
CC-MAIN-2018-39
en
refinedweb
Shows the above error in spite of the input being in the range. I expect the code to work perfectly. from random import randint board = [] for x in range(0, 5): board.append(["O"] * 5) def print_board(board): for row in board: print " ".join(row) print_board(board) def random_row(board): return randint(1, len(board)) def random_col(board): return randint(1, len(board))="x" guess_col="x" print_board(board)
https://discuss.codecademy.com/t/make-sure-to-enter-a-col-and-row-that-are-on-board-error/75454
CC-MAIN-2018-39
en
refinedweb
I'm having some trouble wrapping my head around this part. def compute_bill(food): total = 0 for item in food: if stock[item] > 0: total = total + prices[item] stock[item] = stock[item] - 1 return total I know item is a variable defined by the for statement, and that it relates to food, so I guess I'm uncertain how it relates to food in this case, since food is the argument of compute_bill. I must be expecting the name food to be the name of one of the lists or dictionaries. Would you be able to give more insight about how these arguments relate to the rest of the code, or how they call certain variables? Hopefully I'm not asking you to repeat an answer you gave in this thread already. Any help is much appreciated, thanks!
https://discuss.codecademy.com/t/stocking-out/87856
CC-MAIN-2018-39
en
refinedweb
#include "slepceps.h" PetscErrorCode EPSSetInitialSpace(EPS eps,PetscInt n,Vec *is)Collective on EPS and Vec These vectors do not persist from one EPSSolve() call to the other, so the initial space should be set every time. The vectors do not need to be mutually orthonormal, since they are explicitly orthonormalized internally. Common usage of this function is when the user can provide a rough approximation of the wanted eigenspace. Then, convergence may be faster. Location: src/eps/interface/epssetup.c Index of all EPS routines Table of Contents for all manual pages Index of all manual pages
http://slepc.upv.es/documentation/current/docs/manualpages/EPS/EPSSetInitialSpace.html
CC-MAIN-2018-39
en
refinedweb
On date Thursday 2011-05-05 01:27:19 +0200, Michael Niedermayer encoded: > On Thu, May 05, 2011 at 01:08:55AM +0200, Stefano Sabatini wrote: > > On date Tuesday 2011-05-03 01:49:19 +0200, Michael Niedermayer encoded: > > > On Tue, May 03, 2011 at 12:20:48AM +0200, Stefano Sabatini wrote: > > > >); > > > > > > i was thinking of > > > void avfilter_copy_frame_props(AVFilterBufferRef *dst, const struct AVFrame *src); > > > >. It the user wants to use libavfilter without libavcodec, she configures FFmpeg with --disable-libavcodec and compiles, libavfilter is compiled but the libavfilter/avcodec.o module is not compiled in (and thus avfilter_copy_frame_props won't be available at run-time). In the application she won't need AVFrame, *and* she won't include libavfilter/avcodec.h, so no need to have libavcodec/avcodec.h installed. If she wants libavfilter+libavcodec, she will configure/compile FFmpeg with --enable-avcodec, and will include libavcodec/avcodec.h. Since she's already using libavcodec in her application, avoiding libavcodec/avcodec.h inclusion in libavfilter/avcodec.h is pointless. In case she may need libavfilter/libavcodec conditionally, she can implement the inclusion logic in the application: #if CONFIG_AVCODEC #include "libavfilter/avcodec.h" #endif and do the same in the code. Of course in order to avoid missing symbols problems, libavfilter compiled for a distro should always enable libavcodec, but this is definitively not required for custom projects. -- FFmpeg = Frightening & Fierce Mean Power Experimenting Generator -------------- next part -------------- A non-text attachment was scrubbed... Name: 0004-lavfi-add-avfilter_copy_frame_props.patch Type: text/x-diff Size: 9331 bytes Desc: not available URL: <>
http://ffmpeg.org/pipermail/ffmpeg-devel/2011-May/111452.html
CC-MAIN-2016-44
en
refinedweb
i would really appreciate if someone can help me out.i would really appreciate if someone can help me out. import java.util.*; import java.util.GregorianCalendar; import java.text.DateFormat; public class leapYear { public static void main(String[] args) throws Exception { GregorianCalendar newCal = new GregorianCalendar(); int m = Integer.parseInt (args[0]); int d = Integer.parseInt (args[1]); int y = Integer.parseInt (args[2]); newCal.set(y, m, d); String[] months = new String[]{ "January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December", }; System.out.println( (newCal.get(Calendar.DAY_OF_WEEK)) + " - " + (months[newCal.get(Calendar.MONTH) - 1]) + " " + d + ", " + y); if(newCal.isLeapYear(y) == true) { System.out.println(y + " is a leap year"); } else { System.out.println(y + " is Not a leap year"); } int dayOfYear = newCal.get(newCal.DAY_OF_YEAR); int lastDayOfYear = newCal.getActualMaximum(Calendar.DAY_OF_YEAR); int diff = lastDayOfYear - dayOfYear; System.out.println("There are " + diff + " days left in the year."); } } Gives you 366 no matter if you are at leap year or not, you have to adjust that.Gives you 366 no matter if you are at leap year or not, you have to adjust that. newCal.getActualMaximum(Calendar.DAY_OF_YEAR)
https://community.oracle.com/message/10606873
CC-MAIN-2016-44
en
refinedweb
id summary reporter owner description type status component version severity resolution keywords cc stage has_patch needs_docs needs_tests needs_better_patch easy ui_ux 293 Typo in Tutorial 4 espen@… Jacob "At the end of [ Write a simple form] you got this: {{{ And edit the detail.html template to add this snippet toward the top of the page somewhere: {% if error_message %} {{ error_message }}{% endif %} }}} But that is already done, so this is just the same over again." defect closed Documentation normal fixed Design decision needed 0 0 0 0
https://code.djangoproject.com/ticket/293?format=tab
CC-MAIN-2016-44
en
refinedweb
/* * Copyright (C) 1984-2007 Mark Nudelman * * You may distribute under the terms of either the GNU General Public * License or the Less License, as specified in the README file. * * For more information about less, or for information on how to * contact the author, see the README file. */ /* * Code to handle displaying line numbers. * * Finding the line number of a given file position is rather tricky. * We don't want to just start at the beginning of the file and * count newlines, because that is slow for large files (and also * wouldn't work if we couldn't get to the start of the file; e.g. * if input is a long pipe). * * So we use the function add_lnum to cache line numbers. * We try to be very clever and keep only the more interesting * line numbers when we run out of space in our table. A line * number is more interesting than another when it is far from * other line numbers. For example, we'd rather keep lines * 100,200,300 than 100,101,300. 200 is more interesting than * 101 because 101 can be derived very cheaply from 100, while * 200 is more expensive to derive from 100. * * The function currline() returns the line number of a given * position in the file. As a side effect, it calls add_lnum * to cache the line number. Therefore currline is occasionally * called to make sure we cache line numbers often enough. */ #include "less.h" /* * Structure to keep track of a line number and the associated file position. * A doubly-linked circular list of line numbers is kept ordered by line number. */ struct linenum_info { struct linenum_info *next; /* Link to next in the list */ struct linenum_info *prev; /* Line to previous in the list */ POSITION pos; /* File position */ POSITION gap; /* Gap between prev and next */ LINENUM line; /* Line number */ }; /* * "gap" needs some explanation: the gap of any particular line number * is the distance between the previous one and the next one in the list. * ("Distance" means difference in file position.) In other words, the * gap of a line number is the gap which would be introduced if this * line number were deleted. It is used to decide which one to replace * when we have a new one to insert and the table is full. */ #define NPOOL 50 /* Size of line number pool */ #define LONGTIME (2) /* In seconds */ public int lnloop = 0; /* Are we in the line num loop? */ static struct linenum_info anchor; /* Anchor of the list */ static struct linenum_info *freelist; /* Anchor of the unused entries */ static struct linenum_info pool[NPOOL]; /* The pool itself */ static struct linenum_info *spare; /* We always keep one spare entry */ extern int linenums; extern int sigs; extern int sc_height; /* * Initialize the line number structures. */ public void clr_linenum() { register struct linenum_info *p; /* * Put all the entries on the free list. * Leave one for the "spare". */ for (p = pool; p < &pool[NPOOL-2]; p++) p->next = p+1; pool[NPOOL-2].next = NULL; freelist = pool; spare = &pool[NPOOL-1]; /* * Initialize the anchor. */ anchor.next = anchor.prev = &anchor; anchor.gap = 0; anchor.pos = (POSITION)0; anchor.line = 1; } /* * Calculate the gap for an entry. */ static void calcgap(p) register struct linenum_info *p; { /* * Don't bother to compute a gap for the anchor. * Also don't compute a gap for the last one in the list. * The gap for that last one should be considered infinite, * but we never look at it anyway. */ if (p == &anchor || p->next == &anchor) return; p->gap = p->next->pos - p->prev->pos; } /* * Add a new line number to the cache. 
* The specified position (pos) should be the file position of the * FIRST character in the specified line. */ public void add_lnum(linenum, pos) LINENUM linenum; POSITION pos; { register struct linenum_info *p; register struct linenum_info *new; register struct linenum_info *nextp; register struct linenum_info *prevp; register POSITION mingap; /* * Find the proper place in the list for the new one. * The entries are sorted by position. */ for (p = anchor.next; p != &anchor && p->pos < pos; p = p->next) if (p->line == linenum) /* We already have this one. */ return; nextp = p; prevp = p->prev; if (freelist != NULL) { /* * We still have free (unused) entries. * Use one of them. */ new = freelist; freelist = freelist->next; } else { /* * No free entries. * Use the "spare" entry. */ new = spare; spare = NULL; } /* * Fill in the fields of the new entry, * and insert it into the proper place in the list. */ new->next = nextp; new->prev = prevp; new->pos = pos; new->line = linenum; nextp->prev = new; prevp->next = new; /* * Recalculate gaps for the new entry and the neighboring entries. */ calcgap(new); calcgap(nextp); calcgap(prevp); if (spare == NULL) { /* * We have used the spare entry. * Scan the list to find the one with the smallest * gap, take it out and make it the spare. * We should never remove the last one, so stop when * we get to p->next == &anchor. This also avoids * looking at the gap of the last one, which is * not computed by calcgap. */ mingap = anchor.next->gap; for (p = anchor.next; p->next != &anchor; p = p->next) { if (p->gap <= mingap) { spare = p; mingap = p->gap; } } spare->next->prev = spare->prev; spare->prev->next = spare->next; } } /* * If we get stuck in a long loop trying to figure out the * line number, print a message to tell the user what we're doing. */ static void longloopmessage() { ierror("Calculating line numbers", NULL_PARG); /* * Set the lnloop flag here, so if the user interrupts while * we are calculating line numbers, the signal handler will * turn off line numbers (linenums=0). */ lnloop = 1; } static int loopcount; #if HAVE_TIME static long startime; #endif static void longish() { #if HAVE_TIME if (loopcount >= 0 && ++loopcount > 100) { loopcount = 0; if (get_time() >= startime + LONGTIME) { longloopmessage(); loopcount = -1; } } #else if (loopcount >= 0 && ++loopcount > LONGLOOP) { longloopmessage(); loopcount = -1; } #endif } /* * Find the line number associated with a given position. * Return 0 if we can't figure it out. */ public LINENUM find_linenum(pos) POSITION pos; { register struct linenum_info *p; register LINENUM linenum; POSITION cpos; if (!linenums) /* * We're not using line numbers. */ return (0); if (pos == NULL_POSITION) /* * Caller doesn't know what he's talking about. */ return (0); if (pos <= ch_zero()) /* * Beginning of file is always line number 1. */ return (1); /* * Find the entry nearest to the position we want. */ for (p = anchor.next; p != &anchor && p->pos < pos; p = p->next) continue; if (p->pos == pos) /* Found it exactly. */ return (p->line); /* * This is the (possibly) time-consuming part. * We start at the line we just found and start * reading the file forward or backward till we * get to the place we want. * * First decide whether we should go forward from the * previous one or backwards from the next one. * The decision is based on which way involves * traversing fewer bytes in the file. */ #if HAVE_TIME startime = get_time(); #endif if (p == &anchor || pos - p->prev->pos < p->pos - pos) { /* * Go forward. 
*/ p = p->prev; if (ch_seek(p->pos)) return (0); loopcount = 0; for (linenum = p->line, cpos = p->pos; cpos < pos; linenum++) { /* * Allow a signal to abort this loop. */ cpos = forw_raw_line(cpos, (char **)NULL, (int *)NULL); if (ABORT_SIGS() || cpos == NULL_POSITION) return (0); longish(); } lnloop = 0; /* * We might as well cache it. */ add_lnum(linenum, cpos); /* * If the given position is not at the start of a line, * make sure we return the correct line number. */ if (cpos > pos) linenum--; } else { /* * Go backward. */ if (ch_seek(p->pos)) return (0); loopcount = 0; for (linenum = p->line, cpos = p->pos; cpos > pos; linenum--) { /* * Allow a signal to abort this loop. */ cpos = back_raw_line(cpos, (char **)NULL, (int *)NULL); if (ABORT_SIGS() || cpos == NULL_POSITION) return (0); longish(); } lnloop = 0; /* * We might as well cache it. */ add_lnum(linenum, cpos); } return (linenum); } /* * Find the position of a given line number. * Return NULL_POSITION if we can't figure it out. */ public POSITION find_pos(linenum) LINENUM linenum; { register struct linenum_info *p; POSITION cpos; LINENUM clinenum; if (linenum <= 1) /* * Line number 1 is beginning of file. */ return (ch_zero()); /* * Find the entry nearest to the line number we want. */ for (p = anchor.next; p != &anchor && p->line < linenum; p = p->next) continue; if (p->line == linenum) /* Found it exactly. */ return (p->pos); if (p == &anchor || linenum - p->prev->line < p->line - linenum) { /* * Go forward. */ p = p->prev; if (ch_seek(p->pos)) return (NULL_POSITION); for (clinenum = p->line, cpos = p->pos; clinenum < linenum; clinenum++) { /* * Allow a signal to abort this loop. */ cpos = forw_raw_line(cpos, (char **)NULL, (int *)NULL); if (ABORT_SIGS() || cpos == NULL_POSITION) return (NULL_POSITION); } } else { /* * Go backward. */ if (ch_seek(p->pos)) return (NULL_POSITION); for (clinenum = p->line, cpos = p->pos; clinenum > linenum; clinenum--) { /* * Allow a signal to abort this loop. */ cpos = back_raw_line(cpos, (char **)NULL, (int *)NULL); if (ABORT_SIGS() || cpos == NULL_POSITION) return (NULL_POSITION); } } /* * We might as well cache it. */ add_lnum(clinenum, cpos); return (cpos); } /* * Return the line number of the "current" line. * The argument "where" tells which line is to be considered * the "current" line (e.g. TOP, BOTTOM, MIDDLE, etc). */ public LINENUM currline(where) int where; { POSITION pos; POSITION len; LINENUM linenum; pos = position(where); len = ch_length(); while (pos == NULL_POSITION && where >= 0 && where < sc_height) pos = position(++where); if (pos == NULL_POSITION) pos = len; linenum = find_linenum(pos); if (pos == len) linenum--; return (linenum); }
http://opensource.apple.com//source/less/less-23/less/linenum.c
CC-MAIN-2016-44
en
refinedweb
Write the for loop or do/while loop? Ads PHP Factorial Example: <?php $n = 5; $i = 1; $f = 1; while($i<$n){ $i = $i + 1; $f = $f * $i; } echo $f; ?> Here is a java example that finds the factorial of a number. public class math{ public static long factorial(int n){ if(n <= 1) return 1; else return n * factorial(n - 1); } public static void main(String [] args){ int num=5; System.out.println(factorial(num)); } } Ads
http://www.roseindia.net/answers/viewqa/PHP/28132-For-Loop-PHP.html
CC-MAIN-2016-44
en
refinedweb
User:Duplode/Apfelmus' loop for Control structures From Wikibooks, open books for an open world Note There are legitimate uses of return () as a placeholder do-nothing action. For instance, take the following action: loop = do c <- getChar if c == 'Q' then return () else loop putStrLn "" putStrLn [c] Admittedly, this looks a lot like a while (true) / break loop in your favourite imperative language. As we can tell now, however, there is something quite different going on here. Explain, step by step, what the loop action in the final box note does, and in particular what is the role of the return (). Hints: getCharis an action analogous to getLine, except that, instead of taking a newline-terminated series of characters from the standard input and making a IO String, it takes just a single character as soon as it is input. - If you have trouble with the control flow within loop, run it in GHCi to get a better feel of what it does.
https://en.wikibooks.org/wiki/User:Duplode/Apfelmus%27_loop_for_Control_structures
CC-MAIN-2016-44
en
refinedweb
Login is not possible Bug Description * Impact: connecting to raring vsftpd servers doesn't work * Test Case: - install vsftpd on raring, configure the server, try to connect to it * Regression potential: the server was failing to accept connections before so should only be better --- I'm using Ubuntu 13.04 dev with vsftpd 3.0.2-1ubuntu1. local_enable and write_enable are set to YES but I'm not able to login: sworddragon@ Connected to localhost. 220 (vsFTPd 3.0.2) Name (localhost: 331 Please specify the password. 530 Login incorrect. Login failed. /var/log/vsftpd.log contains: Thu Mar 21 09:00:33 2013 [pid 2] CONNECT: Client "127.0.0.1" Thu Mar 21 09:00:48 2013 [pid 1] [sworddragon] FAIL LOGIN: Client "127.0.0.1" /var/log/auth.log has created a line for vsftpd too: Mar 21 12:18:29 localhost vsftpd: PAM audit_log_ Related branches - Sebastien Bacher: Approve on 2013-05-16 - Ubuntu branches: Pending requested 2013-05-08 - Diff: 64 lines (+44/-0)3 files modifieddebian/changelog (+8/-0) debian/patches/13-disable-clone-newpid.patch (+35/-0) debian/patches/series (+1/-0) P.S. I'm using 12.3 released version, 64 bit. This is no longer a development version issue. (In reply to comment #30) > A Linux server with no working FTP server is a real black eye! Until this is fixed an easy workaround for this "black-eye" is to use pure-ftpd instead which works just fine and is functional equivalent in (almost) all practical sense to vsftpd changed summary to match the current problem I am facing the same problem with OpenSuSE 12.3 64bit, network install. Pure-ftpd is reported (OpenSuSE forums) to work only if pam athentication is disabled (and local authentication enabled) in the pure-ftpd configuration. (In reply to comment #35) > Pure-ftpd is reported (OpenSuSE forums) to work only if pam athentication is > disabled (and local authentication enabled) in the pure-ftpd configuration. Strange, I'm using pure-ftpd (SuSE 12.3) with configuration PAMAuthentication yes and this works just fine (but vsftpd does not). When I tried it personally, it refused to start. I will check one more time and repost. Status changed to 'Confirmed' because the bug affects multiple users. After upgrade from quantal to current raring I have the same problem too. Ubuntu bug on this also: https:/ The issue is occurring because it seems vsftp has changed it's pid namespace. Probably from sysdeputil. "syscall( There is a specific prohibition in the kernel on this: ------- commit 34e36d8ecbd958b Author: Eric W. Biederman <email address hidden> Date: Mon Sep 10 23:20:20 2012 -0700 audit: Limit audit requests to processes in the initial pid and user namespaces. This allows the code to safely make the assumption that all of the uids gids and pids that need to be send in audit messages are in the initial namespaces. If someone cares we may lift this restriction someday but start with limiting access so at least the code is always correct. ------- Regarding audit=0. I imagine it would solve the issue, rather extreme. Also if I boot with audit=0 then client side ftp fails with "500 OOPS: priv_sock_get_cmd" (seccomp_sandbox=NO in /etc/vsftpd.conf). Can you verify if the above vsftp codepath is indeed being executed and see what happens if VSF_SYSDEP_ vsftpd calls CLONE_NEWPID on SUSE - it is visible in #comment11 (see vsftpd[1]). > Also if I boot with audit=0 then client side ftp fails with "500 OOPS: > priv_sock_get_cmd" (seccomp_sandbox=NO in /etc/vsftpd.conf). This does not makes any sense to me. 
This bug is related to enabled seccomp sanbox, but it was fixed before 12.3 release. I'll test that. > Can you verify if the above vsftp codepath is indeed being executed and see > what happens if VSF_SYSDEP_ With a traditional fork pam session can be opened, however next test - an attempt to download the file dies on a seccomp sanbox. The same apply for a clone w/o NEW_PID, where an audit error is different. I will track this in an another bug to not pollute this one with third issue. lowering a priority of this issue, patch is in home:mvyskocil: https:/ https:/ Well, I have a question now. Will the system be updated to run VSFTPD correctly or I have to apply the patch manually? (In reply to comment #41) > Well, I have a question now. > > Will the system be updated to run VSFTPD correctly or I have to apply the patch > manually? There will be a maintenance update, once all issues will be resolved. A pal spotted this bug report and suggests "[this] is caused by vsftp switching pid namespaces (audit kernel code prohibits)". Hope this helps. This is an autogenerated message for OBS integration: This bug (786024) was mentioned in https:/ This is an autogenerated message for OBS integration: This bug (786024) was mentioned in https:/ Sent an update to 12.3 via 162608 @maintenance, please open a new maintenance incident accepted Hi all, I see that the update is accepted but not yet released. Is there an ETA on the update? Perhaps a testing repo for the update to see if it works? Cheers, Angelos Thanks Markus, I installed the test-update repository and vsftp from there. I get the following error: Any ideas? Update: I flushed everything from my server, even the yast-ftp module. Then I installed vsftp from test-update and it works. Now I am having issue with Extended Passive Mode that seems to be enabled by default. I reinstalled yast-ftp module and I get the 500 error as above. I have the same problem too. Both anonymous user and local user are unable to login. Update2: I flushed again everything but did not manage to get it working again. The log message when I run "service vsftpd status" shows login success, but the client reports error 500 and closes connection. ? (In reply to comment #52) > ? Hello Angelos :) Yes I tried again, it needs to start through xinetd or it will not start on its own (standalone). I can't say I like it, but I will live until we get the official update for vsftpd through official repos, which I am waiting for very patiantly... Let's hope it doesn't take forever.. Guys the limitations of open source are showing in this case.. I know it's unfair, but the reaction I am gettinig in my enterprise is surprise and dissappointment. We are definately not winning over any business people like that. Personally, I am keeping a low profile till this is resolved. openSUSE- Category: recommended (moderate) Bug References: 786024,812406 CVE References: Sources used: openSUSE 12.3 (src): vsftpd-3.0.2-4.5.1 Unfortunately the update did not work for me. I still get the "500 OOPS: priv_sock_get_cmd" error. Disabling seccomp sandbox is not working for me either... Same problem. Anonymous works though! Reinstalled entire system twice (quantal) and upgraded (do-release-upgrade -d) to raring. Bug occured both times. (In reply to comment #55) > Unfortunately the update did not work for me. > I still get the "500 OOPS: priv_sock_get_cmd" error. > Disabling seccomp sandbox is not working for me either... Well, without a providing any more information I cannot help you much. 
Would you be so kind to open a new bug? I would need to explain what are you try to do - do you see that with (non)-anonymous download? How your vsftpd.conf look like? Does grep 'vsftpd' /var/log/messages says anything usefull? BTW: the output of strace -tt -s 512 of vsftpd daemon. Created an attachment (id=535776) configuration file that fails # grep 'vsftpd' /var/log/messages Apr 18 12:38:49 aiolos xinetd[23286]: Reading included configuration file: /etc/xinetd. Apr 18 12:39:03 aiolos xinetd[23660]: Reading included configuration file: /etc/xinetd. Thanks, Angelos And the strace: # strace -p 23677 -tt -s 512 Process 23677 attached 12:51:03.048164 accept(3, {sa_family=AF_INET, sin_port= 12:51:12.678545 clone(child_ 12:51:12.678783 close(4) = 0 12:51:12.678855 accept(3, 0x7fffba89a3a0, [28]) = ? ERESTARTSYS (To be restarted if SA_RESTART is set) 12:51:16.044845 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=23929, si_status=2, si_utime=0, si_stime=0} --- 12:51:16.044914 alarm(1) = 0 12:51:16.044968 rt_sigreturn() = -1 EINTR (Interrupted system call) 12:51:16.045047 alarm(0) = 1 12:51:16.045095 wait4(-1, NULL, WNOHANG, NULL) = 23929 12:51:16.045173 wait4(-1, NULL, WNOHANG, NULL) = -1 ECHILD (No child processes) 12:51:16.045224 accept(3, {sa_family=AF_INET, sin_port= 12:51:16.083371 clone(child_ 12:51:16.083620 close(4) = 0 12:51:16.083690 accept(3, 0x7fffba89a3a0, [28]) = ? ERESTARTSYS (To be restarted if SA_RESTART is set) 12:51:25.264770 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=23936, si_status=2, si_utime=0, si_stime=0} --- 12:51:25.264834 alarm(1) = 0 12:51:25.264882 rt_sigreturn() = -1 EINTR (Interrupted system call) 12:51:25.264936 alarm(0) = 1 12:51:25.264977 wait4(-1, NULL, WNOHANG, NULL) = 23936 12:51:25.265053 wait4(-1, NULL, WNOHANG, NULL) = -1 ECHILD (No child processes) 12:51:25.265099 accept(3, {sa_family=AF_INET, sin_port= 12:51:25.302455 clone(child_ 12:51:25.302684 close(4) = 0 12:51:25.302754 accept(3, ^CProcess 23677 detached <detached ...> (In reply to comment #58) > Add allow_ to the bottom of your /etc/vsftpd.conf file. Thanks, it is working locally now. I still cannot access from remote location (error while changing to /home/user) Looking into it. Thanks, Angelos My story: I've done several installs of 12.3. My latest, I tried when installed to start vsftpd from YaST. It would not start, as usual, with the message that for run levels 3, 5, network-remotefs had to be installed (we all know by now there is no run lever 3 or 5 with systemd ??) I tried again a couple of days ago...same thing. I keep installing all the updates so decided last night to attemp to start vsftpd again from YaST only to discover it was running! I was able to connect from another machine! I don't know which fix did it but it seems to have healed itself in some of the updates that have been released. Many thanks to the team working on this (and other) issues. If we get these basic things working 12.3 has potential to be the best since 11.4. KDE4.10.2 is VERY nice! Awesome! Same here. Please fix! I am also affected by this bug after upgrading to 13.04 :( Me too, 13.04 upgrade has caused vsftp to stop working with precisely the same symptoms: auth.log: Apr 26 10:36:29 ftpserv vsftpd: PAM audit_log_ Same here :( I have serious problem because of this bug! PAM unable to dlopen( PAM adding faulty module: pam_ecryptfs.so pam_unix( pam_unix( pam_winbind( pam_winbind( PAM audit_log_ Same here, seems to be a kernel issue. It still works with 3.5.x kernel. 
SuSE's fix is here https:/ I just rebuilt 3.0.2-1ubuntu1 with their patch, vsftpd works fine now. I began experiencing this problem after upgrading to Kubuntu 13.04 (from 12.10) yesterday. For now, I have removed vsftpd and installed pure-ftpd. That is working fine for my needs at the moment. Exact same issue after upgrading from 12.10 to 13.04, vsftpd is now unusable. Can confirm what Jürgen Kreileder (jk) said in comment #18. Building vsftpd 3.0.2-1ubuntu1 with the changes in vsftpd- I basically used this guide if anyone else want to try: http:// And I tried with a fresh install so it isn't just upgrades that are affected (ref comment #1). Same issue. Made school very difficult today when my paper was due and I could log in. hahaha. Please fix!! the open suse link refered to above: https:/ links to these: https:/ https:/ Hi all, I compiled the vsftpd package Here is the patched vsftpd version in 32bits arch. Hi all, Previous messages were sent too fast and I didn't find a way to remove them. I posted the patched version of vsftpd in both 34 and 32 bits arch : please feel free to download. Don't forget to remove the previous installed version on your system or dpkg will tell you that the package is already installed : sudo apt-get remove vsftpd sudo dpkg -i vsftpd_patched.deb That's all, and it doesn't remove config files. If you prefer to compile your own version, here is the procedure : mkdir vsftpd-patched cd vsftpd-patched sudo apt-get build-dep vsftpd sudo apt-get install fakeroot apt-get source vsftpd --> Go on https:/ patch -p0 < vsftpd- cd vsftpd-3.0.2/ dpkg-buildpackage -us -uc -nc cd ../ You'll get the compiled .deb in the directory. Remove previous installed version of vsftpd on your system and install the brand new patched one. sudo apt-get remove vsftpd sudo dpkg -i vsftpd_patched.deb You can remove the directory where you built the package after installation. Note : you need to build on a 64 bits arch to get a 64bits version of the package and a 32 bits arch for 32bits one. I used VM for this. I've tested Vincents 64bit patch. Confirmed fixed. Same here on a 64bits server install. Merci Vincent Thanks that patch worked for me too! Muchas gracias Vincent, fuciono para mi, me salvaste la vida :D --- Thank you very much Vincent, worked for me, saved my life :D Test 64 patch. Also confirming that Vincent DAVY patched vsftpd package fixes the issue. We need a newer vsftpd on the repository as soon as possible, who knows how many people is having the same problem but haven't found this bug report, I struggled until I found this. I just lost hair over non responsive vsftpd on freshly updated 1304 server till I came here too. I was having trouble with my ssh setup so I thought I'd do a quick install of ftp to transfer some keys .. LOL how wrong I was... Probably everyone who had the unfortunate idea to upgrade to ubuntu 13.04 in the recent days can't use vsftpd anymore. First I get the error "ubuntu vsftpd: PAM unable to dlopen( and then "ubuntu vsftpd: PAM audit_log_ details here: http:// Excuseme, how can i use the patch (#18)? how can i compile it? Thanks for response. Confirm that the version in #25 is working for me too, many thanks! Confirmed that version in #26 is working in Lubuntu 13.04. Thanks Vincent! 
Ok, I've sponsored the proposed fix to saucy and raring and updated the bug a bit to be SRU compliant (https:/ This bug was fixed in the package vsftpd - 3.0.2-1ubuntu2 --------------- vsftpd (3.0.2-1ubuntu2) saucy; urgency=low * debian/ - patch to remove CLONE_NEWPID syscall see: https:/ Fixes LP: #1160372 -- Daniel Llewellyn (Bang Communications) <email address hidden> Wed, 08 May 2013 14:08:53 +0100 Hi, I am using Opensue 12.3 64 Bit. Freshly installed and updated to the latest packages from the update repository. In my opinion the problems regarding the present version 3.0.2-4.5.1 of vsftp are far from resolved. As other related bugs as https:/ were marked as duplicates of this one I post my findings here. Bug 1 ****** I still need seccomp_sandbox=NO to connect, when TLS is enabled. With this option set to NO everything works as expected. However, if seccomp_sandbox=YES I get the following messages in Filezilla when trying too connect from a remote system which also runs under OS 12.3: Status: TLS/SSL-Verbindung hergestellt. Antwort: 331 Please specify the password. Befehl: PASS ******* Antwort: 230 Login successful. Befehl: SYST Antwort: 215 UNIX Type: L8 Befehl: FEAT Antwort: 211-Features: Antwort: AUTH TLS Antwort: EPRT Antwort: EPSV Antwort: MDTM Antwort: PASV Antwort: PBSZ Antwort: PROT Antwort: REST STREAM Antwort: SIZE Antwort: TVFS Antwort: UTF8 Antwort: 211 End Befehl: OPTS UTF8 ON Antwort: 200 Always in UTF8 mode. Befehl: PBSZ 0 Antwort: 200 PBSZ set to 0. Befehl: PROT P Antwort: 200 PROT now Private. Status: Verbunden Status: Empfange Verzeichnisinha Befehl: CWD / Antwort: 250 Directory successfully changed. Befehl: PWD Antwort: 257 "/" Befehl: TYPE I Antwort: 200 Switching to Binary mode. Befehl: PASV Fehler: GnuTLS error -15: Ein unerwartetes TLS-Paket wurde empfangen. Fehler: Verbindung zum Server getrennt: ECONNABORTED - Connection aborted Fehler: Verzeichnisinhalt konnte nicht empfangen werden Bug 2 (maybe related) ****** 2) Even with "seccomp_ syslog_enable=YES I get the following message in filezilla: Status: Connecting to 192.168.0.37:21... Status: Connection established, waiting for welcome message... Response: 500 OOPS: priv_sock_get_cmd Error: Critical error Error: Could not connect to server Bug 3: ****** From some OS 12.3 remote systems I cannot connect in case the following option is not set to NO: require_ So all in all vsftp still shows major deficiencies on Opensuse 12.3 which were not present in OS 12.2. Any ideas what I could do ? (In reply to comment #63) > From some OS 12.3 remote systems I cannot connect in case the following option > is not set to NO: > > require_ > I have seen that the OS 12.3-systems for which the setting "require_ is required all had the original Filezilla version 3.5.3 form the OS 12.3 OSS repository installed. After installing Filezilla version 3.7.0.1 from the network repository http:// this problem, which is obviously client related, disappears and the setting require_ works. The other problems described in comment #63, however, remain. guys, a fresh install of the vsftp will still show this problem, we had to use the workaround provided. If a configuration setting has changed, ie "require_ @abonilla, @rm: hi, please open a **new** report. It's quite hard to follow the discussion in this one. 
And please attach the vsftpd.conf and an output of strace -f -tt You might copy the vsftpd.service to /etc/systemd/ change the ExecStart line to ExecStart= and issuse systemctl daemon-reload && systemctl restart vsftpd.service Dec 1 07:26:22 watcher-U56E sudo: watcher : TTY=unknown ; PWD=/home/watcher ; USER=root ; COMMAND= Dec 1 07:26:22 watcher-U56E sudo: pam_unix( Dec 1 07:26:28 watcher-U56E sudo: pam_unix( Dec 1 07:30:01 watcher-U56E CRON[2648]: pam_unix( Dec 1 07:30:01 watcher-U56E CRON[2648]: pam_unix( Dec 1 07:41:22 watcher-U56E sudo: watcher : TTY=unknown ; PWD=/home/watcher ; USER=root ; COMMAND= Dec 1 07:41:22 watcher-U56E sudo: pam_unix( Dec 1 07:41:29 watcher-U56E sudo: pam_unix( Dec 1 07:56:22 watcher-U56E sudo: watcher : TTY=unknown ; PWD=/home/watcher ; USER=root ; COMMAND= Dec 1 07:56:22 watcher-U56E sudo: pam_unix( Dec 1 07:56:30 watcher-U56E sudo: pam_unix( Dec 1 08:11:22 watcher-U56E sudo: watcher : TTY=unknown ; PWD=/home/watcher ; USER=root ; COMMAND= Dec 1 08:11:22 watcher-U56E sudo: pam_unix( Dec 1 08:11:28 watcher-U56E sudo: pam_unix( Dec 1 08:17:01 watcher-U56E CRON[2784]: pam_unix( Dec 1 08:17:01 watcher-U56E CRON[2784]: pam_unix( Dec 1 08:26:22 watcher-U56E sudo: watcher : TTY=unknown ; PWD=/home/watcher ; USER=root ; COMMAND= Dec 1 08:26:22 watcher-U56E sudo: pam_unix( Dec 1 08:26:28 watcher-U56E sudo: pam_unix( Dec 1 08:31:25 watcher-U56E mdm[1596]: pam_unix( Dec 1 08:31:25 watcher-U56E mdm[1596]: pam_ck_ Dec 1 08:31:27 watcher-U56E dbus[1143]: [system] Rejected send message, 7 matched rules; type="method_ Dec 1 08:31:31 watcher-U56E polkitd( Dec 1 08:31:34 watcher-U56E sudo: watcher : TTY=unknown ; PWD=/home/watcher ; USER=root ; COMMAND= Dec 1 08:31:34 watcher-U56E sudo: pam_unix( Dec 1 08:31:42 watcher-U56E sudo: pam_unix... I've just stumbled into this bug on 14.04.1. Worked around by commenting out "auth required pam_shells.so" in /etc/pam.d/vsftpd and restarting vsftpd as mentioned in 869684. my server environment install vsftpd version 2.3.xx and i want config my vsftpd to jail user directory. but when insert parameter file vsftpd and uncomment 'allow_ one again when insert 'seccomp_ maybe can help for this case :) thank's before. SUSE-RU- Category: recommended (moderate) Bug References: 786024, CVE References: Sources used: SUSE Linux Enterprise Server 12-SP1 (src): vsftpd-3.0.2-31.1 SUSE Linux Enterprise Server 12 (src): vsftpd-3.0.2-31.1 openSUSE- Category: recommended (moderate) Bug References: 786024, CVE References: Sources used: openSUSE Leap 42.1 (src): vsftpd-3.0.2-17.1 Thanks for reporting this bug. I can't reproduce this on a new raring system. Could you please paste your entire /etc/vsftpd.conf and your /etc/pam.d/vsftpd file and any files it @includes?
https://bugs.launchpad.net/ubuntu/+source/vsftpd/+bug/1160372
CC-MAIN-2016-44
en
refinedweb
I have a python script that calls a system program and reads the output from a file out.txt out.txt import subprocess, os, sys filename = sys.argv[1] file = open(filename,'r') foo = open('foo','w') foo.write(file.read().rstrip()) foo = open('foo','a') crap = open(os.devnull,'wb') numSolutions = 0 while True: subprocess.call(["minisat", "foo", "out"], stdout=crap,stderr=crap) out = open('out','r') if out.readline().rstrip() == "SAT": numSolutions += 1 clause = out.readline().rstrip() clause = clause.split(" ") print clause clause = map(int,clause) clause = map(lambda x: -x,clause) output = ' '.join(map(lambda x: str(x),clause)) print output foo.write('\n'+output) out.close() else: break print "There are ", numSolutions, " solutions." You need to flush foo so that the external program can see its latest changes. When you write to a file, the data is buffered in the local process and sent to the system in larger blocks. This is done because updating the system file is relatively expensive. In your case, you need to force a flush of the data so that minisat can see it. foo.write('\n'+output) foo.flush()
https://codedump.io/share/AViaFBLIct4E/1/python-refresh-file-from-disk
CC-MAIN-2016-44
en
refinedweb
Angel Perea Martinez wrote: > Hi, I´m relatively new to PIL, so perhaps I`m > overseeing something trivial, but when I use the font > Symbol with truetype (in the truetype mode, i see only > squares. I have no problem with other types, as arial > etc. > > f = ImageFont.truetype(n,fSize) > self.draw.text ((x,y), text.encode("Latin-1"), font = f) PIL's current freetype driver only supports fonts that use the Unicode character set; Microsoft's Symbol font uses a proprietary encoding. as a quick workaround, change the "getfont" method in _imagingft.c so that the last few lines look like this: if (error) { PyObject_DEL(self); PyErr_SetString(PyExc_IOError, "cannot load font"); return NULL; } /* --- start of patch --- */ /* explicitly select Unicode or Symbol charmap */ if (FT_Select_Charmap(self->face, ft_encoding_unicode)) FT_Select_Charmap(self->face, ft_encoding_symbol); /* --- end of patch --- */ return (PyObject*) self; also, the symbol characters lie in the 0xF000-0xF0FF character range; to draw e.g. symbol 68 (whatever that is), pass in unichr(0xf000+68) to the "text" method. </F>
https://mail.python.org/pipermail/image-sig/2003-September/002433.html
CC-MAIN-2016-44
en
refinedweb
I have built a few off-the-shelf classifiers from sklearn ~/anaconda/lib/python3.5/site-packages/sklearn/metrics/classification.py:1074: UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no predicted samples. 'precision', 'predicted', average, warn_for) stdout "poor classifier performance" warnings Suppressing all warnings is easy with -Wignore (see warning flag docs) The warnings module can do some finer-tuning with filters (ignore just your warning type). Capturing just your warning (assuming there isn't some API in the module to tweak it) and doing something special could be done using the warnings.catch_warnings context manager and code adapted from "Testing Warnings": import warnings class MyWarning(Warning): pass def something(): warnings.warn("magic warning", MyWarning) with warnings.catch_warnings(record=True) as w: # Trigger a warning. something() # Verify some things if ((len(w) == 1) and issubclass(w[0].category, MyWarning) and "magic" in str(w[-1].message)): print('something magical')
https://codedump.io/share/Rg1OHXD9Dnp0/1/python---replacing-warnings-with-a-simple-message
CC-MAIN-2016-44
en
refinedweb
Formatting Types in the .NET Framework Formatting is the process of converting an instance of a class, structure, or enumeration value to its string representation, often so that the resulting string can be displayed to users or deserialized to restore the original data type. This conversion can pose a number of challenges: The way that values are stored internally does not necessarily reflect the way that users want to view them. For example, a telephone number might be stored in the form 8009999999, which is not user-friendly. It should instead be displayed as 800-999-9999. See the Custom Format Strings section for an example that formats a number in this way. Sometimes the conversion of an object to its string representation is not intuitive. For example, it is not clear how the string representation of a Temperature object or a Person object should appear. For an example that formats a Temperature object in a variety of ways, see the Standard Format Strings section. Values often require culture-sensitive formatting. For example, in an application that uses numbers to reflect monetary values, numeric strings should include the current culture’s currency symbol, group separator (which, in most cultures, is the thousands separator), and decimal symbol. For an example, see the Culture-Sensitive Formatting with Format Providers and the IFormatProvider Interface section. An application may have to display the same value in different ways. For example, an application may represent an enumeration member by displaying a string representation of its name or by displaying its underlying value. For an example that formats a member of the DayOfWeek enumeration in different ways, see the Standard Format Strings section. The .NET Framework provides rich formatting support that enables developers to address these requirements. This overview contains the following sections: Formatting in the .NET Framework Default Formatting Using the ToString Method Overriding the ToString Method The ToString Method and Format Strings Culture-Sensitive Formatting with Format Providers and the IFormatProvider Interface The IFormattable Interface Custom Formatting with ICustomFormatter The basic mechanism for formatting is the default implementation of the Object.ToString method, which is discussed in the Default Formatting Using the ToString Method section later in this topic. However, the .NET Framework provides several ways to modify and extend its default formatting support. These include the following: Overriding the Object.ToString method to define a custom string representation of an object’s value. For more information, see the Overriding the ToString Method section later in this topic. Defining format specifiers that enable the string representation of an object’s value to take multiple forms. For example, the "X" format specifier in the following statement converts an integer to the string representation of a hexadecimal value. For more information about format specifiers, see the ToString Method and Format Strings section. Using format providers to take advantage of the formatting conventions of a specific culture. For example, the following statement displays a currency value by using the formatting conventions of the en-US culture. For more information about formatting with format providers, see the Format Providers and the IFormatProvider Interface section. Implementing the IFormattable interface to support both string conversion with the Convert class and composite formatting. 
For more information, see the IFormattable Interface section. Using composite formatting to embed the string representation of a value in a larger string. For more information, see the Composite Formatting section. Implementing ICustomFormatter and IFormatProvider to provide a complete custom formatting solution. For more information, see the Custom Formatting with ICustomFormatter section. The following sections examine these methods for converting an object to its string representation. Every type that is derived from System.Object automatically inherits a parameterless ToString method, which returns the name of the type by default. The following example illustrates the default ToString method. It defines a class named Automobile that has no implementation. When the class is instantiated and its ToString method is called, it displays its type name. Note that the ToString method is not explicitly called in the example. The Console.WriteLine(Object) method implicitly calls the ToString method of the object passed to it as an argument. Because all types other than interfaces are derived from Object, this functionality is automatically provided to your custom classes or structures. However, the functionality offered by the default ToString method, is limited: Although it identifies the type, it fails to provide any information about an instance of the type. To provide a string representation of an object that provides information about that object, you must override the ToString method. Displaying the name of a type is often of limited use and does not allow consumers of your types to differentiate one instance from another. However, you can override the ToString method to provide a more useful representation of an object’s value. The following example defines a Temperature object and overrides its ToString method to display the temperature in degrees Celsius. using System; public class Temperature { private decimal temp; public Temperature(decimal temperature) { this.temp = temperature; } public override string ToString() { return this.temp.ToString("N1") + "°C"; } } public class Example { public static void Main() { Temperature currentTemperature = new Temperature(23.6m); Console.WriteLine("The current temperature is " + currentTemperature.ToString()); } } // The example displays the following output: // The current temperature is 23.6°C. In the .NET Framework, the ToString method of each primitive value type has been overridden to display the object’s value instead of its name. The following table shows the override for each primitive type. Note that most of the overridden methods call another overload of the ToString method and pass it the "G" format specifier, which defines the general format for its type, and an IFormatProvider object that represents the current. A standard format string contains a single format specifier, which is an alphabetic character that defines the string representation of the object to which it is applied, along with an optional precision specifier that affects how many digits are displayed in the result string. If the precision specifier is omitted or is not supported, a standard format specifier is equivalent to a standard format string. The .NET Framework defines a set of standard format specifiers for all numeric types, all date and time types, and all enumeration types. For example, each of these categories supports a "G" standard format specifier, which defines a general string representation of a value of that type. 
Standard format strings for enumeration types directly control the string representation of a value. The format strings passed to an enumeration value's ToString method determine whether the value is displayed using its string name (the "G" and "F" format specifiers), its underlying integral value (the "D" format specifier), or its hexadecimal value (the "X" format specifier). The following example illustrates the use of standard format strings to format a DayOfWeek enumeration value. For information about enumeration format strings, see Enumeration Format Strings. For more information about standard numeric formatting strings, see Standard Numeric Format Strings. (For more information about custom format strings, see the next section.) The following example illustrates this relationship. using System; using System.Globalization; public class Example { public static void Main() { DateTime date1 = new DateTime(2009, 6, 30); Console.WriteLine("D Format Specifier: {0:D}", date1); string longPattern = CultureInfo.CurrentCulture.DateTimeFormat.LongDatePattern; Console.WriteLine("'{0}' custom format string: {1}", longPattern, date1.ToString(longPattern)); } } // The example displays the following output when run on a system whose // current culture is en-US: // D Format Specifier: Tuesday, June 30, 2009 // 'dddd, MMMM dd, yyyy' custom format string: Tuesday, June 30, 2009. For example, a Temperature class can internally store the temperature in degrees Celsius and use format specifiers to represent the value of the Temperature object in degrees Celsius, degrees Fahrenheit, and kelvins. The following example provides an illustration. using System; public class Temperature { private decimal temp; public Temperature(decimal temperature) { this.temp = temperature; } public decimal Celsius { get { return this.temp; } } public decimal Fahrenheit { get { return this.temp * 9m / 5m + 32m; } } public decimal Kelvin { get { return this.temp + 273.15m; } } public override string ToString() { return this.ToString("C"); } public string ToString(string format) { // Handle null or empty string. if (String.IsNullOrEmpty(format)) format = "C"; // Remove spaces and convert to uppercase. format = format.Trim().ToUpperInvariant(); switch (format) { // Convert temperature to Fahrenheit and return string. case "F": return this.Fahrenheit.ToString("N2") + " °F"; // Convert temperature to Kelvin and return string. case "K": return this.Kelvin.ToString("N2") + " K"; // return temperature in Celsius. case "G": case "C": return this.Celsius.ToString("N2") + " °C"; default: throw new FormatException(String.Format("The '{0}' format string is not supported.", format)); } } } public class Example { public static void Main() { Temperature temp1 = new Temperature(0m); Console.WriteLine(temp1.ToString()); Console.WriteLine(temp1.ToString("G")); Console.WriteLine(temp1.ToString("C")); Console.WriteLine(temp1.ToString("F")); Console.WriteLine(temp1.ToString("K")); Temperature temp2 = new Temperature(-40m); Console.WriteLine(temp2.ToString()); Console.WriteLine(temp2.ToString("G")); Console.WriteLine(temp2.ToString("C")); Console.WriteLine(temp2.ToString("F")); Console.WriteLine(temp2.ToString("K")); Temperature temp3 = new Temperature(16m); Console.WriteLine(temp3.ToString()); Console.WriteLine(temp3.ToString("G")); Console.WriteLine(temp3.ToString("C")); Console.WriteLine(temp3.ToString("F")); Console.WriteLine(temp3.ToString("K")); Console.WriteLine(String.Format("The temperature is now {0:F}.", temp3)); } } // The example displays the following output: // 0.00 °C // 0.00 °C // 0.00 °C // 32.00 °F // 273.15 K // -40.00 °C // -40.00 °C // -40.00 °C // -40.00 °F // 233.15 K // 16.00 °C // 16.00 °C // 16.00 °C // 60.80 °F // 289.15 K // The temperature is now 16.00 °C.
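For the DayOfWeek example mentioned above, a minimal sketch of enumeration formatting with the standard specifiers might look like the following; the particular value and the outputs shown in comments are illustrative rather than taken from the article.

using System;

public class EnumFormatSketch
{
   public static void Main()
   {
      DayOfWeek day = DayOfWeek.Thursday;
      Console.WriteLine(day.ToString("G"));   // Thursday  (string name)
      Console.WriteLine(day.ToString("F"));   // Thursday  (string name, flags style)
      Console.WriteLine(day.ToString("D"));   // 4         (underlying integral value)
      Console.WriteLine(day.ToString("X"));   // 00000004  (hexadecimal value)
   }
}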
All numeric types (that is, the Byte, Decimal, Double, Int16, Int32, Int64, SByte, Single, UInt16, UInt32, UInt64, and BigInteger types) , as well as the DateTime, DateTimeOffset, TimeSpan, Guid, and all enumeration types, support formatting with format strings. For information on the specific format strings supported by each type, see the following topics Although format specifiers let you customize the formatting of objects, producing a meaningful string representation of objects often requires additional formatting information. For example, formatting a number as a currency value by using either the "C" standard format string or a custom format string such as "$ #,#.00" requires, at a minimum, information about the correct currency symbol, group separator, and decimal separator to be available to include in the formatted string. In the .NET Framework, this additional formatting information is made available through the IFormatProvider interface, which is provided as a parameter to one or more overloads of the ToString method of numeric types and date and time types. IFormatProvider implementations are used in the .NET Framework to support culture-specific formatting. The following example illustrates how the string representation of an object changes when it is formatted with three IFormatProvider objects that represent different cultures. using System; using System.Globalization; public class Example { public static void Main() { decimal value = 1603.42m; Console.WriteLine(value.ToString("C3", new CultureInfo("en-US"))); Console.WriteLine(value.ToString("C3", new CultureInfo("fr-FR"))); Console.WriteLine(value.ToString("C3", new CultureInfo("de-DE"))); } } // The example displays the following output: // $1,603.420 // 1 603,420 € // 1.603,420 € The IFormatProvider interface includes one method, GetFormat(Type), which has a single parameter that specifies the type of object that provides formatting information. If the method can provide an object of that type, it returns it. Otherwise, it returns a null reference (Nothing in Visual Basic). IFormatProvider.GetFormat is a callback method. When you call a ToString method overload that includes an IFormatProvider parameter, it calls the GetFormat method of that IFormatProvider object. The GetFormat method is responsible for returning an object that provides the necessary formatting information, as specified by its formatType parameter, to the ToString method. A number of formatting or string conversion methods include a parameter of type IFormatProvider, but in many cases the value of the parameter is ignored when the method is called. The following table lists some of the formatting methods that use the parameter and the type of the Type object that they pass to the IFormatProvider.GetFormat method. The .NET Framework provides three classes that implement IFormatProvider: DateTimeFormatInfo, a class that provides formatting information for date and time values for a specific culture. Its IFormatProvider.GetFormat implementation returns an instance of itself. NumberFormatInfo, a class that provides numeric formatting information for a specific culture. Its IFormatProvider.GetFormat implementation returns an instance of itself. CultureInfo. Its IFormatProvider.GetFormat implementation can return either a NumberFormatInfo object to provide numeric formatting information or a DateTimeFormatInfo object to provide formatting information for date and time values. You can also implement your own format provider to replace any one of these classes. 
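As a hedged sketch of the GetFormat callback just described (this is not code from the article, and the class name and currency symbol are arbitrary choices), a minimal custom format provider could hand back a modified NumberFormatInfo:

using System;
using System.Globalization;

// Supplies a NumberFormatInfo with a custom currency symbol; for any other
// requested type it returns null so that default formatting is used.
public class CustomCurrencyProvider : IFormatProvider
{
   public object GetFormat(Type formatType)
   {
      if (formatType == typeof(NumberFormatInfo))
      {
         NumberFormatInfo nfi = (NumberFormatInfo) CultureInfo.InvariantCulture.NumberFormat.Clone();
         nfi.CurrencySymbol = "EUR ";
         return nfi;
      }
      return null;
   }
}

public class ProviderSketch
{
   public static void Main()
   {
      decimal value = 1603.42m;
      // Decimal.ToString(String, IFormatProvider) asks the provider for a
      // NumberFormatInfo object before building the result string.
      Console.WriteLine(value.ToString("C2", new CustomCurrencyProvider()));   // EUR 1,603.42
   }
}

Passing such a provider to a ToString overload is enough for the numeric formatting code to pick up the custom settings, because the method requests a NumberFormatInfo object from the provider before it builds the result string.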
However, your implementation’s GetFormat method must return an object of the type listed in the previous table if it has to provide formatting information to the ToString method. By default, the formatting of numeric values is culture-sensitive. If you do not specify a culture when you call a formatting method, the formatting conventions of the current thread culture are used. This is illustrated in the following example, which changes the current thread culture four times and then calls the Decimal.ToString(String) method. In each case, the result string reflects the formatting conventions of the current culture. This is because the ToString and ToString(String) methods wrap calls to each numeric type's ToString(String, IFormatProvider) method. using System; using System.Globalization; using System.Threading; public class Example { public static void Main() { string[] cultureNames = { "en-US", "fr-FR", "es-MX", "de-DE" }; Decimal value = 1043.17m; foreach (var cultureName in cultureNames) { // Change the current thread culture. Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(cultureName); Console.WriteLine("The current culture is {0}", Thread.CurrentThread.CurrentCulture.Name); Console.WriteLine(value.ToString("C2")); Console.WriteLine(); } } } // The example displays the following output: // The current culture is en-US // $1,043.17 // // The current culture is fr-FR // 1 043,17 € // // The current culture is es-MX // $1,043.17 // // The current culture is de-DE // 1.043,17 € You can also format a numeric value for a specific culture by calling a ToString overload that has a provider parameter and passing it either of the following: A CultureInfo object that represents the culture whose formatting conventions are to be used. Its CultureInfo.GetFormat method returns the value of the CultureInfo.NumberFormat property, which is the NumberFormatInfo object that provides culture-specific formatting information for numeric values. A NumberFormatInfo object that defines the culture-specific formatting conventions to be used. Its GetFormat method returns an instance of itself. The following example uses NumberFormatInfo objects that represent the English (United States) and English (Great Britain) cultures and the French and Russian neutral cultures to format a floating-point number. using System; using System.Globalization; public class Example { public static void Main() { Double value = 1043.62957; string[] cultureNames = { "en-US", "en-GB", "ru", "fr" }; foreach (var name in cultureNames) { NumberFormatInfo nfi = CultureInfo.CreateSpecificCulture(name).NumberFormat; Console.WriteLine("{0,-6} {1}", name + ":", value.ToString("N3", nfi)); } } } // The example displays the following output: // en-US: 1,043.630 // en-GB: 1,043.630 // ru: 1 043,630 // fr: 1 043,630 By default, the formatting of date and time values is culture-sensitive. If you do not specify a culture when you call a formatting method, the formatting conventions of the current thread culture are used. This is illustrated in the following example, which changes the current thread culture four times and then calls the DateTime.ToString(String) method. In each case, the result string reflects the formatting conventions of the current culture. This is because the DateTime.ToString(), DateTime.ToString(String), DateTimeOffset.ToString(), and DateTimeOffset.ToString(String) methods wrap calls to the DateTime.ToString(String, IFormatProvider) and DateTimeOffset.ToString(String, IFormatProvider) methods. 
using System; using System.Globalization; using System.Threading; public class Example { public static void Main() { string[] cultureNames = { "en-US", "fr-FR", "es-MX", "de-DE" }; DateTime dateToFormat = new DateTime(2012, 5, 28, 11, 30, 0); foreach (var cultureName in cultureNames) { // Change the current thread culture. Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(cultureName); Console.WriteLine("The current culture is {0}", Thread.CurrentThread.CurrentCulture.Name); Console.WriteLine(dateToFormat.ToString("F")); Console.WriteLine(); } } } // The example displays the following output: // The current culture is en-US // Monday, May 28, 2012 11:30:00 AM // // The current culture is fr-FR // lundi 28 mai 2012 11:30:00 // // The current culture is es-MX // lunes, 28 de mayo de 2012 11:30:00 a.m. // // The current culture is de-DE // Montag, 28. Mai 2012 11:30:00 You can also format a date and time value for a specific culture by calling a DateTime.ToString or DateTimeOffset.ToString overload that has a provider parameter and passing it either of the following: A CultureInfo object that represents the culture whose formatting conventions are to be used. Its CultureInfo.GetFormat method returns the value of the CultureInfo.DateTimeFormat property, which is the DateTimeFormatInfo object that provides culture-specific formatting information for date and time values. A DateTimeFormatInfo object that defines the culture-specific formatting conventions to be used. Its GetFormat method returns an instance of itself. The following example uses DateTimeFormatInfo objects that represent the English (United States) and English (Great Britain) cultures and the French and Russian neutral cultures to format a date. using System; using System.Globalization; public class Example { public static void Main() { DateTime dat1 = new DateTime(2012, 5, 28, 11, 30, 0); string[] cultureNames = { "en-US", "en-GB", "ru", "fr" }; foreach (var name in cultureNames) { DateTimeFormatInfo dtfi = CultureInfo.CreateSpecificCulture(name).DateTimeFormat; Console.WriteLine("{0}: {1}", name, dat1.ToString(dtfi)); } } } // The example displays the following output: // en-US: 5/28/2012 11:30:00 AM // en-GB: 28/05/2012 11:30:00 // ru: 28.05.2012 11:30:00 // fr: 28/05/2012 11:30:00 Typically, types that overload the ToString method with a format string and an IFormatProvider parameter also implement the IFormattable interface. This interface has a single member, IFormattable.ToString(String, IFormatProvider), that includes both a format string and a format provider as parameters. Implementing the IFormattable interface for your application-defined class offers two advantages: Support for string conversion by the Convert class. Calls to the Convert.ToString(Object) and Convert.ToString(Object, IFormatProvider) methods call your IFormattable implementation automatically. Support for composite formatting. If a format item that includes a format string is used to format your custom type, the common language runtime automatically calls your IFormattable implementation and passes it the format string. For more information about composite formatting with methods such as String.Format or Console.WriteLine, see the Composite Formatting section. The following example defines a Temperature class that implements the IFormattable interface. 
It supports the "C" or "G" format specifiers to display the temperature in Celsius, the "F" format specifier to display the temperature in Fahrenheit, and the "K" format specifier to display the temperature in Kelvin. using System; using System.Globalization; public class Temperature : IFormattable { private decimal temp; public Temperature(decimal temperature) { this.temp = temperature; } public decimal Celsius { get { return this.temp; } } public decimal Fahrenheit { get { return this.temp * 9m / 5m + 32m; } } public decimal Kelvin { get { return this.temp + 273.15m; } } public override string ToString() { return this.ToString("G", null); } public string ToString(string format) { return this.ToString(format, null); } public string ToString(string format, IFormatProvider provider) { // Handle null or empty arguments. if (String.IsNullOrEmpty(format)) format = "G"; // Remove any white space and convert to uppercase. format = format.Trim().ToUpperInvariant(); if (provider == null) provider = NumberFormatInfo.CurrentInfo; switch (format) { // Convert temperature to Fahrenheit and return string. case "F": return this.Fahrenheit.ToString("N2", provider) + "°F"; // Convert temperature to Kelvin and return string. case "K": return this.Kelvin.ToString("N2", provider) + "K"; // Return temperature in Celsius. case "C": case "G": return this.Celsius.ToString("N2", provider) + "°C"; default: throw new FormatException(String.Format("The '{0}' format string is not supported.", format)); } } } The following example instantiates a Temperature object. It then calls the ToString method and uses several composite format strings to obtain different string representations of a Temperature object. Each of these method calls, in turn, calls the IFormattable implementation of the Temperature class. public class Example { public static void Main() { Temperature temp1 = new Temperature(22m); Console.WriteLine(Convert.ToString(temp1, new CultureInfo("ja-JP"))); Console.WriteLine("Temperature: {0:K}", temp1); Console.WriteLine("Temperature: {0:F}", temp1); Console.WriteLine(String.Format(new CultureInfo("fr-FR"), "Temperature: {0:F}", temp1)); } } // The example displays the following output: // 22.00°C // Temperature: 295.15°K // Temperature: 71.60°F // Temperature: 71,60°F Some methods, such as String.Format and StringBuilder.AppendFormat, support composite formatting. A composite format string is a kind of template that returns a single string that incorporates the string representation of zero, one, or more objects. Each object is represented in the composite format string by an indexed format item. The index of the format item corresponds to the position of the object that it represents in the method's parameter list. Indexes are zero-based. For example, in the following call to the String.Format method, the first format item, {0:d}, is replaced by the string representation of thatDate; the second format item, {1}, is replaced by the string representation of item1; and the third format item, {2:C2}, is replaced by the string representation of item1.Value. result = String.Format("On {0:d}, the inventory of {1} was worth {2:C2}.", thatDate, item1, item1.Value); Console.WriteLine(result); // The example displays output like the following if run on a system // whose current culture is en-US: // On 5/1/2009, the inventory of WidgetA was worth $107.44. In addition to replacing a format item with the string representation of its corresponding object, format items also let you control the following: The specific way in which an object is represented as a string, if the object implements the IFormattable interface and supports format strings. You do this by following the format item's index with a : (colon) followed by a valid format string.
The previous example did this by formatting a date value with the "d" (short date pattern) format string (e.g., {0:d}) and by formatting a numeric value with the "C2" format string (e.g., {2:C2} to represent the number as a currency value with two fractional decimal digits. The width of the field that contains the object's string representation, and the alignment of the string representation in that field. You do this by following the format item's index with a , (comma) followed the field width. The string is right-aligned in the field if the field width is a positive value, and it is left-aligned if the field width is a negative value. The following example left-aligns date values in a 20-character field, and it right-aligns decimal values with one fractional digit in an 11-character field. DateTime startDate = new DateTime(2015, 8, 28, 6, 0, 0); decimal[] temps = { 73.452m, 68.98m, 72.6m, 69.24563m, 74.1m, 72.156m, 72.228m }; Console.WriteLine("{0,-20} {1,11}\n", "Date", "Temperature"); for (int ctr = 0; ctr < temps.Length; ctr++) Console.WriteLine("{0,-20:g} {1,11:N1}", startDate.AddDays(ctr), temps[ctr]); // The example displays the following output: // Date Temperature // // 8/28/2015 6:00 AM 73.5 // 8/29/2015 6:00 AM 69.0 // 8/30/2015 6:00 AM 72.6 // 8/31/2015 6:00 AM 69.2 // 9/1/2015 6:00 AM 74.1 // 9/2/2015 6:00 AM 72.2 // 9/3/2015 6:00 AM 72.2 Note that, if both the alignment string component and the format string component are present, the former precedes the latter (for example, {0,-20:g}. For more information about composite formatting, see Composite Formatting. Two composite formatting methods, String.Format(IFormatProvider, String, Object[]) and StringBuilder.AppendFormat(IFormatProvider, String, Object[]), include a format provider parameter that supports custom formatting. When either of these formatting methods is called, it passes a Type object that represents an ICustomFormatter interface to the format provider’s GetFormat method. The GetFormat method is then responsible for returning the ICustomFormatter implementation that provides custom formatting. The ICustomFormatter interface has a single method, Format(String, Object, IFormatProvider), that is called automatically by a composite formatting method, once for each format item in a composite format string. The Format(String, Object, IFormatProvider) method has three parameters: a format string, which represents the formatString argument in a format item, an object to format, and an IFormatProvider object that provides formatting services. Typically, the class that implements ICustomFormatter also implements IFormatProvider, so this last parameter is a reference to the custom formatting class itself. The method returns a custom formatted string representation of the object to be formatted. If the method cannot format the object, it should return a null reference (Nothing in Visual Basic). The following example provides an ICustomFormatter implementation named ByteByByteFormatter that displays integer values as a sequence of two-digit hexadecimal values followed by a space. public class ByteByByteFormatter : IFormatProvider, ICustomFormatter { public object GetFormat(Type formatType) { if (formatType == typeof(ICustomFormatter)) return this; else return null; } public string Format(string format, object arg, IFormatProvider formatProvider) { if (! formatProvider.Equals(this)) return null; // Handle only hexadecimal format string. if (! 
format.StartsWith("X")) return null; byte[] bytes; string output = null; // Handle only integral types. if (arg is Byte) bytes = BitConverter.GetBytes((Byte) arg); else if (arg is Int16) bytes = BitConverter.GetBytes((Int16) arg); else if (arg is Int32) bytes = BitConverter.GetBytes((Int32) arg); else if (arg is Int64) bytes = BitConverter.GetBytes((Int64) arg); else if (arg is SByte) bytes = BitConverter.GetBytes((SByte) arg); else if (arg is UInt16) bytes = BitConverter.GetBytes((UInt16) arg); else if (arg is UInt32) bytes = BitConverter.GetBytes((UInt32) arg); else if (arg is UInt64) bytes = BitConverter.GetBytes((UInt64) arg); else return null; for (int ctr = bytes.Length - 1; ctr >= 0; ctr--) output += String.Format("{0:X2} ", bytes[ctr]); return output.Trim(); } } The following example uses the ByteByByteFormatter class to format integer values. Note that the ICustomFormatter.Format method is called more than once in the second String.Format(IFormatProvider, String, Object[]) method call, and that the default NumberFormatInfo provider is used in the third method call because the .ByteByByteFormatter.Format method does not recognize the "N0" format string and returns a null reference (Nothing in Visual Basic). public class Example { public static void Main() { long value = 3210662321; byte value1 = 214; byte value2 = 19; Console.WriteLine(String.Format(new ByteByByteFormatter(), "{0:X}", value)); Console.WriteLine(String.Format(new ByteByByteFormatter(), "{0:X} And {1:X} = {2:X} ({2:000})", value1, value2, value1 & value2)); Console.WriteLine(String.Format(new ByteByByteFormatter(), "{0,10:N0}", value)); } } // The example displays the following output: // 00 00 00 00 BF 5E D1 B1 // 00 D6 And 00 13 = 00 12 (018) // 3,210,662,321
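The ByteByByteFormatter above simply returns null for format strings it does not recognize, which is why the default provider takes over in the third call. An alternative pattern, shown here only as a general sketch and not as part of the original example, is to delegate unrecognized format strings to the argument's own formatting support:

using System;
using System.Globalization;

public static class FormatFallback
{
   // Delegates a format string that a custom formatter does not recognize
   // to the argument's own formatting implementation, if it has one.
   public static string HandleOtherFormats(string format, object arg)
   {
      if (arg is IFormattable)
         return ((IFormattable) arg).ToString(format, CultureInfo.CurrentCulture);
      else if (arg != null)
         return arg.ToString();
      else
         return String.Empty;
   }
}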
https://msdn.microsoft.com/en-us/library/26etazsy(v=vs.110).aspx
CC-MAIN-2016-44
en
refinedweb
I wanted to update all of you on XAF's Mobile UI (CTP) we first publicly announced in late April (XAF Goes Mobile: CTP Version Ships with v15.2.9). To refresh your memory, this new feature allows you to create iOS, Android and Windows Phone apps alongside its Windows and Web app counterparts. XAF's mobile apps reuse the database, as well as certain aspects of the data model, application logic and UI settings of existing XAF applications. In the last couple of months, we've received some great feedback from early adopters (thank you very much for all your support and assistance) and as a result, we've been hard at work improving our implementation. At this stage, the mobile platform is still in CTP and is not yet ready for production use due to known issues, unsupported scenarios and a somewhat long TODO list. Additionally, certain aspects of XAF Mobile's functionality and API are likely to change... In this blog post, I'd like to describe the primary changes made between our last set of iterations. XAF Solution Wizard integration With v16.1.4, you can create XAF Mobile apps via our Solution Wizard - invoked via the File | New | Project... menu in Visual Studio: The wizard's UI is automatically updated based on selected options (Entity Framework, AuthenticationActiveDirectory and Client Side Security UI-level items are filtered out, because they are unsupported). See more wizard screenshots here: one, two, three, four, five. Simplified Navigation Business classes marked with the DefaultClassOptions attribute and residing in the Default navigation group appear in the mobile navigation automatically - without the need to set MobileVisible = True for each corresponding View via the Model Editor. This should simplify your path to a Mobile app for existing projects. At this point, if your navigation items reside in custom navigation groups (other than "Default"), you will need to place them into the Default group. In the future, it's likely that a different DevExpress Navigation Control (like the DevExpress Accordion) will be introduced to cover complex navigation hierarchies. Simplified Testing - Local Simulator Built into your Mobile Project Simulator (Index.html and player.html) - technically this is a web page that contains a client "player" script that queries the aforementioned backend data and UI metadata services and generates the actual HTML5/JS UI inside the web browser. This player script also gets redistributed to the actual mobile device when a native package is installed. The simulator is automatically opened in the web browser when you make a YourSolutionName.Mobile project as startup in Solution Explorer and start debugging (F5). Note: At present, this simulator downloads resources from Azure and thus requires at Internet connection. Going forward, this will not be required as all resources will be obtained from an assembly locally. Secured OData Service with Basic Authentication If you are not yet familiar with Basic Authentication and our its support in XAF v16.1, refer to this Wikipedia article:. Going forward, I will also use examples from this article to better explain the implemented feature.The mobile project created by the Solution Wizard consists of a DataService.svc file, representing the backend service used to serve requests, manage security and execute actions. Technically, it's a standard OData service (WCF Data Services 5.0) based on an XPO OData V3 provider. 
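To give a concrete picture of how a client talks to this secured service, here is a hedged sketch of issuing a query with a Basic Authentication header from .NET code. The service URL and entity set name are placeholders, only standard framework APIs are used, and the encoded credentials match the Aladdin/OpenSesame pair discussed below.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

public class ODataBasicAuthSketch
{
   public static void Main()
   {
      // Placeholder address; substitute the location where DataService.svc is hosted.
      string serviceUrl = "http://localhost:12345/DataService.svc/DemoTask";

      using (HttpClient client = new HttpClient())
      {
         // Encode "user:password" as base64 for the Authorization header
         // ("Aladdin:OpenSesame" produces QWxhZGRpbjpPcGVuU2VzYW1l).
         string credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes("Aladdin:OpenSesame"));
         client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);

         // The service returns only the objects this user is permitted to see.
         string response = client.GetStringAsync(serviceUrl).Result;
         Console.WriteLine(response);
      }
   }
}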
Let me show how the security portion of this data service works: A service query produced the following results for different users, as expected: 4 for Sam with an empty password (Authorization:Basic U2FtOg==); 1 for Aladdin with the OpenSesame password (Authorization:Basic QWxhZGRpbjpPcGVuU2VzYW1l) Based upon security permissions, only a single task is visible to Alladin while Sam sees all four tasks. User credentials are passed from the client to the service in request headers via the special Authorization field (see that base64-encoded thing in parentheses above). If you are curious as to how I modified request headers for these tests, I used the ModHeader plug-in from the Chrome Web Store: The XAF mobile client queries the data service in a similar way, but does it behind the scene via AJAX requests. Below is a screenshot from the mobile simulator when a user Aladdin navigates to tasks: As you can see, the XAF Mobile UI works much like our Windows and Web interfaces with the same security settings: The most important thing is that this secured XAF Data Service is not limited to XAF Mobile apps. You can leverage it from other non-XAF clients supporting Basic Authentication. For instance, you can use our Data Explorer product for iOS with XAF's data service to generate a secured app: Here I am just showing a couple of configuration steps along with the resulting app (which was generated from the XAF Data Service deployed in Azure --). You can find full configuration instructions for Data Explorer client apps in its getting started guide or in the How to use Data Explorer with the XAF secured OData Service supporting Basic Authentication KB article. To obtain additional information on XAF Data Service and its usage from various clients, be certain to check out the following article: FAQ: New XAF HTML5/JavaScript mobile UI (CTP). Windows Phone 8.1 Support We have tuned native package settings so that they can be used on the Adobe PhoneGap Build service to generate an XAP file ready for Windows Phone 8.1 deployment. You can modify the config.xml file in the *.ZIP file we generate in order to specify the architecture of your device (anycpu, arm, x86 either x64): <preference name="windows-arch" value="anycpu" />As for the future, we hope to simplify the current mechanism towards built-in Visual Studio integration so that you do not need to learn and use a separate service. Updated Learning Materials and Demo for the Mobile platform We have updated our Getting Started Tutorial and Frequently Asked Questions articles to reflect the latest changes in v16.1. If you don't have time to build your own XAF Mobile app but want to experience some of what's possible, feel free to try our Online Mobile Demo today. Things we are still working on We have mostly met the goals we set for ourselves in v16.1, but there are still many features we need to implement in order to consider XAF Mobile ready for its first beta. Specifically, we're focused on : It is likely that there will be more updates on this in the coming months. Follow our team blog for more information as it becomes available... We would love to hear your feedback on XAF Mobile, so please contact us via the Support Center (preferred) or here in the Comments section. Design-time enhancements With our upcoming release, all XAF templates will be available from the DX Template Gallery (look for the DevExpress v16.1 Template Gallery item in the standard Add New Item... 
dialog or see the "Add DevExpress Item" context menu item for your projects in Solution Explorer): Additionally, we've provided shortcuts for the most recently used (MRU) item templates to the Add DevExpress Item... menu - invoked for XAF projects under the Solution Explorer: Notice that there is a new Non-Persistent Object item template that allows you to create non-persistent classes with ease (it contains all the required boilerplate code and example implementations of the INotifyPropertyChanged, IXafEntityObject, IObjectSpaceLink interfaces). Please review my earlier blog post to learn on more improvements with regard to managing non-persistent objects in standard XAF forms. See the Changes to Visual Studio Item Templates in XAF v16.1 KB article for more details. WinForms SDI: Outlook-Style Navigation Integration For those of you targeting Windows, XAF's integration of DevExpress WinForms Outlook-Style navigation controls and OfficeNavigationBar is now better than ever. The OfficeNavigationBar can be displayed in non-compact mode, as in the screenshot above, or in the compact mode (enabled by default) demonstrated below: You can always switch between compact and non-compact modes via the Navigation Options dialog (•••). Clicking on these group items can be include animations managed by our TransitionManager component where the SlideFadeTransition type is used by default (view a full-size GIF without losing quality HERE): This new feature is enabled only in SDI mode (UIType = SingleWindowSDI) with the ribbon menu (FormStyle=Ribbon) when the new RootGroupStyle property is set to OutlookSimple or OutlookAnimated. You can initialize these configurations in code or via the Model Editor: For a cleaner UI and better end-user experience, the DockPanel previously hosting the NavBarControl was also removed. The NavBarControl is now positioned directly in the form template, which also helped us remove unnecessary borders. The expand/collapse functions of the removed dock panel are now natively managed by the NavBarControl and the two new buttons added into the status bar. The "Normal View" button expands the NavBarControl while pressing the "Reading View" button collapses the NavBarControl. SVG icons support in ASP.NET In XAF ASP.NET applications, SVG images are now supported, which improves your website's appearance on displays with high pixel density (resolution). If you're adding custom images as per this documentation article, note that image display size is determined by the svg element's viewBox attribute. Also, SVG icons are not grayed out automatically for disabled Actions. You should manually add a disabled variant of an SVG icon with the _Disabled suffix (e.g., MyIcon_Disabled.svg). Our UX designers also started to redraw standard XAF images, but this is still in works. You can easily view already updated images in the Model Editor's image picker: Our future plans include completing this image collection and to introduce this same capability for XAF's WinForms UI. Faster rendering and other performance optimizations for popular Web UI scenarios In short, the core idea for all these performance improvements in XAF ASP.NET WebForms apps is that under certain circumstances, we intentionally suppress creation and rendering of current web page controls, disable unnecessary data-binding operations, reduce the number of requests to the server and perform updates on the client side where possible. 
This allows us to produce a web app that behaves faster and is more responsive, which is essential for hand-held devices (e.g., tablets, smart phones). Desktop web browser users will also benefit from these changes, especially in scenarios involving popup windows. Since several thousand of our unit and complex functional tests have passed, these optimizations are turned on by default in XAF v16.1, so that everyone can benefit from them. For backward compatibility or any unhandled issues in your custom code that might occur due to these optimizations, we also provided various static options in the DevExpress.ExpressApp.Web.WebApplication.OptimizationSettings class allowing you to turn this feature off completely (or partially). I've described these options and scenarios in the following KB Article: Your feedback is needed! As always, my team and I look forward to hearing your thoughts on each of these improvements in comments to this blog or via the service. This. In my. Q: What about this feature for ASP.NET?A: By default, the EnableModelCache property has no effect on an ASP.NET application since a shared application model is usually generated once for all web clients. If you wish, you can manually activate the creation of this cache file by overriding the GetModelCacheFileLocationPath method of your WebApplication descendant.. UPDATE: Refer to the eXpressApp Framework > Getting Started > XAF Mobile (CTP) Tutorial article to learn more on the new Mobile UI, which is technically is a single-page HTML5/JavaScript application (SPA) based on DevExtreme components that is ready to be compiled by Apache Cordova (PhoneGap) into a native package that can then be submitted to app stores. ======================================================== The eXpressApp Framework (XAF) team has been working hard to add a mobile UI option to your existing or new projects and we are ready for the first public preview. This new feature will let you easily create iOS and Android apps in addition to WinForms and ASP.NET UI options already available to you. The mobile apps will reuse the database, as well as certain aspects of the data model, application logic and UI settings of your existing XAF applications. This will help you avoid all the routine work that would take days or weeks of development efforts if building those mobile apps from scratch. We’ve already shown this functionality to a small group of XAF developers at the end of last year and got lots of useful feedback. The team has fixed issues and incorporated a number of improvements and we now feel that the framework is ready to go public. Certain aspects of the new functionality will change and we'd like to think that your feedback will play an important role in that. Please use the resources in this email to evaluate the new features and share your opinion with us. Start your evaluation by reviewing a sample mobile application hosted on Azure. Either use the browser-based simulator or try it on your smartphone by simply scanning the QR code. Run the Demo We encourage you to follow the tutorial below to create a mobile app based on your own XAF solution. The article uses the Project Manager demo as an example, but you can apply the same steps to any XAF project. Follow the Tutorial To learn more about the capabilities and limitations of the XAF Mobile UI, review the knowledge base article that answers the most frequently asked questions. Read FAQ Complete the following survey so we can learn more about the types of application you’re looking to build. 
We know your time is valuable so we’ve limited it to only 5 questions and the entire survey shouldn’t take longer than 10 minutes. Complete the Survey We are also looking forward to your reports via the Support Center. Please submit separate tickets for each problem or question for better tracking. Thank you for your help! In earlier. Detail Form Layout Customization in Code With this release, you can customize the Detail View's default layout in your Data Model code using the DetailViewLayout attribute. Please refer to the following example code/screenshot below: public class Contact { [Browsable(false)] public int ID { get; private set; } [DetailViewLayoutAttribute(LayoutColumnPosition.Left)] public string FirstName { get; set; } [DetailViewLayoutAttribute(LayoutColumnPosition.Right)] public string LastName { get; set; } [DetailViewLayoutAttribute("FullName", 0)] public string FullName { get { return FirstName + " " + LastName; } } [DetailViewLayoutAttribute(LayoutColumnPosition.Left)] public string Email { get; set; } [DetailViewLayoutAttribute(LayoutColumnPosition.Right)] public virtual Contact Manager { get; set; } [DetailViewLayoutAttribute(LayoutColumnPosition.Left)] public DateTime? Birthday { get; set; } [FieldSize(FieldSizeAttribute.Unlimited)] [DetailViewLayoutAttribute("NotesAndRemarks", LayoutGroupType.TabbedGroup, 100)] public string Notes { get; set; } [FieldSize(FieldSizeAttribute.Unlimited)] [DetailViewLayoutAttribute("NotesAndRemarks", LayoutGroupType.TabbedGroup, 100)] public string Remarks { get; set; } } In this release we have extended support for usage scenarios when using non-persistent objects first introduced in v15.1: I will post additional content on some of these items (like S172038) separatly, once we release v15.2. Please stay tuned and let us know what you think of these new features. In this release cycle, we've evolved our web page templates and themes optimized for touch devices (released in v15.1). While we still continue to refine things, we believe that this feature is ready to be used in production and we do not expect major breaking changes going forward. UPDATE: See also the XAF Goes Mobile: CTP Version Ships with v15.2.9 post to learn more on the related XAF 15.2 feature. In addition to numerous cosmetic enhancements and capabilities that were already test-driven in previous minor updates, let me highlight a few important features: 1. Adaptive and Responsive Layout Our ASPxGridListEditor now supports adaptive layouts in the new web style. Columns are collapsed automatically when the browser window is resized. To enable this responsive mode, set the ASPxGridListEditor.IsAdaptive property in a ViewController. You can customize this behavior via the IModelColumnWeb.AdaptivePriority property in the Model Editor invoked for the ASP.NET project. This option specifies the column's priority. Columns with a lower AdaptivePriority value remain visible when the browser window shrinks, while columns with a higher value are hidden. Hidden column data can be accessed via the ellipse "..." button. In the image above, both the SUBJECT and PROJECT columns have a lower AdaptivePriority value than others. 2. Device-Specific Settings in ASP.NET Applications Web applications can now have separate settings for desktop, tablet and mobile devices. Device-specific model differences are stored in the Model.Desktop.xafml, Model.Tablet.xafml and Model.Mobile.xafml files. Database settings storage is also supported. 3. 
Customizable ASP.NET Templates for Touch Devices Page templates designed for touch devices can be easily customized. Corresponding project item templates are now available in Visual Studio: Reporting With v15.2, the DevExpress HTML5 Report Viewer is used by default. Our Web Report Designer now supports parameters with complex types (including multi-value parameters for complex types). Finally, XAF's Reports Module can now store a layout in XML format, making complex report rendering much faster (learn more)! Confirm unsaved changes ASP.NET applications can now prevent loss of unsaved data by displaying a warning dialog if a user attempts to close the browser tab, or clicks an Action whose execution may lead to loss of unsaved data. This behavior is enabled by default. You can disable it using the ConfirmUnsavedChanges property of the Options node in the Model Editor invoked for an ASP.NET project: The option above is global. To enable/disable the confirmation dialog for a specific Action, use the ConfirmUnsavedChanges property of the ActionDesign | Action node. By default, the IModelActionWeb.ConfirmUnsavedChanges option is set to true for the following Actions: NextObject, PreviousObject, New, DialogCancel, DialogClose, ChooseTheme, Refresh, Cancel, Edit, Logoff, ChangeVariant Batch Edit support ASPxGridListEditor now supports Batch Edit Mode. Unlike other modes, Batch Edit allows you to edit multiple rows and then save all modified objects at the same time. To enable Batch Edit Mode, set the InlineEditMode property of the ListView node to Batch in the Model Editor and ensure that the AllowEdit property of the same node is set to true. Please note the following: In Batch Edit mode, the Detail View is not invoked when a user clicks a row. A few data types cannot be edited: images, references, criteria, file attachments. Initial property values for new objects are passed to the client when the grid control is created and are not updated each time you create objects using the New Command Item.Batch Edit Mode supports our new ConfirmUnsavedChanges and Validation module features. Master-Detail support ASPxGridListEditor now provides built-in support for the master-detail data presentation. To enable it, run the Model Editor for an ASP.NET project, and set the DetailRowMode property of the Views | ListView node to DetailView or DetailViewWithActions. In DetailViewWithActions mode, a Detail View specified using the DetailRowView property is shown in a detail row. In DetailView mode, the same Detail View is displayed, but the toolbar of its nested List View is hidden. By default, DetailRowMode is None and the master-detail mode is disabled. ===================================== Please tell us what you think of these new features. With this release, we're shipping a new Map module for XAF Web apps - allowing you display business objects on different kinds of maps. The module integrates the client-side dxMap and dxVectorMap widgets from DevExtreme into ASP.NET XAF applications via specialized XAF server-side wrappers like List and Property Editors: WebMapsListEditor, WebVectorMapsListEditor and WebMapsPropertyEditor. Primary Capabilities While designing this module, we considered customer feedback received during the research we conducted earlier this year and also previous user requests from the Support Center and other sources. Let's take a quick look at the functionality implemented in the initial release: 1. 
Interactive map displays objects implementing the IMapsMarker interface using the Google Maps API or Bing Maps API: 2. Vector map displays objects implementing the IAreaInfo interface as areas with different colors: 3. Vector map displays objects implementing the IVectorMapsPieMarker interface as pie-chart markers: You can experience a live demo of our Map module in the ListEditors | Maps section of the offline Feature Center demo that is installed with XAF or check its online version at demos.devexpress.com/xaf/featurecenter once v15.2 is officially released. You can configure map types via the Model Editor invoked from Visual Studio or in code: 1. Configuring WebMapsPropertyEditor for a DetailView: 2. Configuring WebVectorMapsListEditor for a ListView: 3. Customizing the underlying dxMap widget in code of a ViewController for a ListView: using DevExpress.Persistent.Base; namespace DevExpress.ExpressApp.Maps.Web.Controllers { public class MapCenterController : ObjectViewController<ListView, Store> { protected override void OnViewControlsCreated() { base.OnViewControlsCreated(); ((WebMapsListEditor)View.Editor).MapViewer.ClientSideEvents.Customize = GetCustomizeScript(); } private string GetCustomizeScript() { return @"function(sender, map) { map.option('center', 'Brooklyn Bridge,New York,NY'); map.option('autoAdjust', false); }"; } } =========================== We'd love to get your feedback on this new Map module and whether you are planning to use it in upcoming XAF Web apps. In our upcoming release of the eXpressApp Framework, data validation occurs immediately after input focus changes when validation rule evaluation does not require querying additional data from the server. These rules are RuleRequiredField,RuleRegularExpression, RuleStringComparison, RuleValueComparison and RuleRange. In-place validation is enabled for the Save context by default. To enable it for other contexts, use the AllowInplaceValidation property of the Validation | Contexts | Context node in the Model Editor. Since rule evaluation occurs on the client side, in-place validation does not occur when: In-place validation engine relies on Controllers provided in our platform-specific ValidationWindowsFormsModule and ValidationAspNetModule components. Note: The ASP.NET module was introduced in v15.2. In order to use in-place validation after upgrading your existing ASP.NET applications to v15.2, be certain to add this module from the Toolbox into the Application Designer. The XAF Team would love your feedback. Please tell us what you think of these new.
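As a rough illustration of how such rules are attached to business-class properties, a declaration might look like the following sketch. This is not code from the post, persistence plumbing is omitted, and the exact attribute overloads may vary between XAF versions.

using DevExpress.Persistent.Validation;

public class DemoTask
{
   // Required-field rules like this one can be evaluated in place,
   // immediately after input focus changes.
   [RuleRequiredField]
   public string Subject { get; set; }

   // Range rules can also be checked without querying the server.
   [RuleRange(0, 100)]
   public int PercentCompleted { get; set; }
}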
https://community.devexpress.com/blogs/eaf/
CC-MAIN-2016-44
en
refinedweb
in reply to hash ref package Can you define 'does not work anymore'? One possible cause: if you move it to a different package and don't export the sub into the namespace of the caller, sort won't know where to find it. If this particular cause of 'does not work anymore' is the reason, then specifying the full class of the grabdataentry sub or importing it into your caller's namespace may help. --MidLifeXis
http://www.perlmonks.org/index.pl/jacques?node_id=993764
CC-MAIN-2016-44
en
refinedweb
WL#4739: Physical Structure of Server Affects: Server-9.x — Status: Assigned — Priority: Medium In order to simplify working with the server code, both as a developer and when packaging distributions, we want to have a physical structure of the code that makes the code easy to work with. We are also aiming at a long-term solution that will allow the server code base to grow significantly, without making the code unmaintainable. In order to support this, we need a structure that supports: - Clear and simple conventions for creating structure for the code - Easy adding and removing of features to allow: - Features to be added late with a minimal risk of ripple effects into unrelated parts of the code. This can be introduced due to merges causing unintended code changes, as well as logical dependencies that are not clear. - It shall be easy to remove features, should it be necessary for some reason (that might not strictly be technical). - Having a structure that allows the creation of various distribution packages from the same source, such as: - Client development distributions for application programmers - Storage engine development distributions for storage engine writers - Plug-in development distributions for plug-in writers (whom may or may not be storage engine developers) - Having a structure that support working with the code using scripts to perform common tasks, like building special distributions, release testing, and packaging. Working practice ================ There are some working practice that we need to support in this structure. These practices are central to how we work with the code and not supporting them will introduce severe problems for developers. - Bug fixes and features is introduced as a sequence of patches, where each patch is a change to one or more files. - A single patch should not cause a build failure and the server should still pass all tests. If a bug fix or feature requires several patches, each patch should still leave the server in a stable state in the sense that it should still build and still pass all tests. - A patch should not require unwarranted changes in other package. We should discourage practice that may require a developer to make changes in other packages than the one that he/she is working on. Forcing a developer to make changes in code that he/she is not familiar with, however small the changes are, increases the risk of introducing bugs and may go against design principles originally intended for a component or package. - A patch is normally targeted for a single package only: features affecting several packages should be split into separate patches, committed in the right order, and preferably pushed together (bu this is not a requirement). Notes ===== - This worklog needs to be split up into several worklogs, at least: - one for the actual design (this one), - one for implementing the build frame (WL#4875) - one for fixing the current include file header mess (WL#4877) Continuing work =============== - In order to not stall the change of the structure for too long, it is necessary to set a bar for when the code should be changed. If that is not done, we will have to maintain two structures in parallel, which not offer any improvements to the development practice and instead solidify the current situation. Open Issues =========== - What names shall we use for the packages? We already have storage/ and server/ and client/ (which already exists) have been suggested. 
Resolved issues =============== - Shall each package have a unique prefix for the files? Also consider the exported header files. The reasons for having different prefixes for header files is to be able to separate header files with same names in different packages when including them. However, by using the package directory name as prefix, a header file prefix is not needed. It would be either: #include "pkg_table.h" or #include "pkg/table.h" The reason for using prefixes for source files would be that linkers have problems distinguishing between files with the same name, but some tests indicate that is not the case on some common platforms (Linux and Solaris). In short, there seems to be no good reason to use file prefixes together with a package structure. - Shall a dynamically loadable module be a separate package or not? There might be reasons to why a loadable component may consist of several packages, so we should not require that each loadable component is a package. Decisions ========= 2009-02-26: We agreed on going for approach 2 when handling header files. The basis was later questioned and clarification of the document was asked for. 2009-05-27: It was agreed that we should not impose a structure on the packages from the build system and represent meta-data for a package separately(typically as a manifest or configuration file). Structure might still be mandated by coding styles and/or practical issues. High-level structure ==================== We envision that the system consists of a number of *packages* that together make up the code of the system. In order to build the server, and associated components, we have a *build frame* (or just *frame*) =========== ================================================= Packages ======== Packages are collections of components that server a common purpose. This formulation is deliberately not exact since what actually makes sense to turn into a package wary *subsystem directory* alongside the ``sql/`` directory. Apart from that, all package directories are placed at the same level. We are placing the packages in a new subsystem directory instead of re-using ``sql/`` to be able to easily distinguish between "unorganized" and "organized" code. The following subsystem directories are proposed (some directories already exists and almost have the basic structure proposed): ========== ============================================== Package Purpose ========== ============================================== storage/ Storage engines server/ Server modules common/ Common utilities ========== ==============================================. The package owner shall be able to decide what header files are available for users of the package. Initially we will not be able to do this for practical reasons since it requires the build frame to support that. Instead, we will assume that every header file in a package is available as a package interface. worklog. This is done to restrict the scope of the worklog and be able to close it. (recall that dependencies between components are represented as an #include directive). Should some include file be added to the convenience include file because *one* component needs it, *all* components that include this convenience include will be affected. To avoid introducing unnecessary dependencies in this way we could: 1. Have a rule stating that convenience include may only hold includes that are used by *all* components including this convenience include. 
This adds an additional burden on developers wanting to add an include to the convenience include to locate each user of the convenience include and decide if they need it. Since the includers of the convenience include is not easily visible in the file, it means searching all packages. Furthermore: with this approach it can be expected that over time, the set of includes in the file will shrink and the purpose of having a convenience include will diminish. 2. Have a rule stating that convenience includes shall not be used, which requires all necessary include files to be mentioned. This is a minor problem from a development perspective and make dependencies between components explicit, hence clear. We chose the latter. However, convenience include files serve a purpose for maintaining interfaces *into* the server is accepted (for example, to make it easier to work with the client interface). For these files it is, however, critical that they are convenience includes and not contain separate definitions. No ``using`` directives (``using namespace``) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Placing using directives at namespace level in header files will force any file that includes the header file to resolve symbols in a namespace they have no control over. This can lead to subtle and hard to find bugs, and should therefore not be used. Placing ``using`` directives at namespace level in source files will inject all symbols of that namespace (as ``pkga``) into another namespace (say ``pkgb``). If changes are made to ``pkga``, they may conflict with definitions in ``pkgb`` and since a developer have to ensure the system builds for each patch, he would be forced to make changes in ``pkgb`` despite the fact that the change itself is localized to ``pkga``. to be detected and tracked automatically. ####################################### Appendix A. Definitions and discussions ####################################### Packages ======== Packages are collections of components organized as a cohesive unit (that is, serve a common purpose). Each package has one or more (exported) interfaces, which are represented by one or more header files. Defining files of a package --------------------------- In order to define what files are part of a package, there are basically two options: either supply a file for each package that lists the files of that package, or put all the files of a package into a subdirectory. The advantages of using the file system to define packages by putting all the files of a package in a separate directory suggests that this approach should be used. Interfaces into a package ------------------------- Each package have one or more interfaces represented physically as one or more header files. The header files contains objects and definitions necessary to interface with the package, so we have no restrictions on the structure (but do have some recommendations for how to structure the interfaces for maintainability). Each interface is normally defined with some strategic objective, i.e., it is created for an intended set of users. We use *export target* to denote such a set of users of an interface. Library functions usually have only one export target, but many of our packages have several export targets such as "client developers" who write application clients to the server, "storage engine developers" who are creating a storage engine for the server, "plugin developers" that are writing a plug-in for the server. 
For each export target, we should ensure that the header files holding the interfaces is defined in such a manner that only the parts needed by that export target is included when that header file is included. Gratuitous definitions is a problem since they might clash with the names defined by the user, and also introduces an unnecessary dependency on parts of the server that the user does not in reality depend on. In short, interfaces into packages are represented as one or more header files, and we have two basic methods to identify the interface files: by naming convention (for example, placing the interface file in a separate directory) or by using one or more configuration files that explicitly the interface interface files. Using naming conventions ~~~~~~~~~~~~~~~~~~~~~~~~ For this discussion, we assume that the exported interface header files are put into the export/ directory. However, the same arguments apply to other schemes for using naming conventions. Note that each header file might correspond to a source file placed in the main package directory, like this: goobar/ export/ goo_interface.h goo_impl.cc . . . The advantage of this approach is: - Simplicity: normal file commands can be used to work with files. For example, to copy all files needed by a plugin-sdk could be as simple as: cp package/export/*.h /distro/include The disadvantages are: - Changing the status of a file from, e.g., internal to public requires moving the file and not all VCS systems support that well. - Having multiple "export targets" (users of the interface) require separate directories. For example, a package could export an interface for third-party users and one towards the rest of the server packages. Configuration file ~~~~~~~~~~~~~~~~~~ We somehow add extra configuration file(s) in the package to denote if the header file is exported. For this approach, we have two alternatives: a) Add a file parallel to the header file, e.g., the fact that "foo.h.export" exists could mean that the header file "foo.h" is an exported file. b) We introduce a "manifest" file for each package, containing information about the files in the package. The advantages of this approach is [incomplete list]: - Changing the properties of a file (e.g., from "internal" to "exported") does not require any changes to the file itself. - It allows header files to be marked with other properties, such as header files that are supposed to be exported to third-party developers. The disadvantages are: - Working with files is not trivial, e.g., copying all header files that goes into the plugin SDK could be: cp `grep plugin-sdk package/manifest | cut -f1` /distro/include Include file and path management -------------------------------- In order to manage the include path and the include files, it is necessary to ensure that all the header files that are exported are available for every package in the system, and *only* those files. To handle this, we basically have two approaches: 1. We have an include path containing the directory where the exported header files for each package is stored. This require the header files to be placed in a dedicated "export/" directory inside the package: otherwise, all header files of a package will be exportable, which is not the intention. So, for example, the include path could be set to pkb_a/export;pkg_b/export;pkg_c/export Whenever a package is added or removed, this would mean that the include path would have to be updated to match the actual packages available. 
The advantages of this approach are: - Simple model - No need to generate or copy files The disadvantages of this approach are: - If a package is added or removed, the include path have to be updated. Since every package depends on the include path, it might trigger a re-build. - If a header file with the same name is in multiple places, it will not be detected. - Is most cases, the source control systems will generate a conflict for the addition and removal of a directory to the include path in the build file (e.g., configure.ac or Makefile.am). 2. We have a dedicated include directory for, e.g., the server where exported header are available, and let the manifest file contain information on what files are to be made available in the central include directory. This would mean that the path stays the same regardless of what packages are available. For this approach, we have two "sub-approaches" on how to make the header files available from the include directory: a) Copy the files to the dedicated include directory b) Generate a header file holding only an #include directive referencing to the correct header file. The advantages of this approach is: - That there is no need to maintain an extensive include path to be able to compile a package (which might have dependencies on other packages). - Package maintenance is very easy. For example, adding a package does not require changing any include paths or anything at all in the build frame. - Conflicting header files will be detected during the build process (e.g., when copying header files to the include directory). The disadvantages are: - Requires more work in the build frame. - It requires a "staging" phase, where header files are made available in the dedicated include directory, either by copying or generating files. - In the copy approach (2a), it is necessary to build a dependencies Makefile for the include directory, to trigger a copy whenever the original header file changes. - In the copy approach (2a), it is possible that a developer starts editing the wrong file, which will then be overwritten at some later point, which will be hard to discover. Implementation ============== In order to implement the structure described in the high-level specification, we should approach it in well-contained steps that lead us to the goal. For example, since we need to develop a build frame for supporting this, we need an intermediate solution that does not cause problems for the final deployment of the build structure and allow developers to work on creating packages without introducing problems for the build frame. Stage 1: Create the directory structure --------------------------------------- Introduce the package directories and move the existing packages we have into that directory. At this stage we will keep the existing autotools-based build system and just do the minimal changes necessary to have a fully functional system. We assume that the original "unstructured" sql/ code is dependent on the packages, but that we have control over the dependencies between packages in the "structured" directories. In order to add a package, it will be necessary to: - Create a Makefile.am for the package - Add a reference under "SUBDIRS" in the parent directory Note that in this stage, all header files in a package will be available as package interface files, so care should be used when including header files from other packages. After this stage, developers will be able to create packages properly without affecting the following stages. 
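To make the package-addition steps described above concrete, here is a minimal sketch of what adding a hypothetical package under server/ could look like in the autotools-based frame (the package name, file names and flags are invented for illustration only):

    # server/mypackage/Makefile.am -- build the package as a convenience library
    noinst_LIBRARIES = libmypackage.a
    libmypackage_a_SOURCES = table.cc table.h
    libmypackage_a_CPPFLAGS = -I$(top_srcdir)

    # server/Makefile.am -- register the package in the parent directory
    SUBDIRS = mypackage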
Stage 2: Evaluate and optionally change to use CMake
----------------------------------------------------

It has been discussed whether we should use CMake to build the server on all platforms and not just Windows, since it seems to be a portable alternative. However, concerns have been raised about the portability of CMake to the platforms that we need to support, so this alternative needs to be evaluated before implementation starts. The goal is to have a build system that is as simple as the existing one, which also includes being able to handle the system for defining pluggable storage engines.

If the evaluation does not show problems with doing the switch, the replacement should be done in two steps: first, just switching the build system while otherwise maintaining the structure and build order of the old system. We do this step separately since it will require merging the build process on Windows with the existing autotools-based build frame while still maintaining the same functionality.

After this stage, we will have a single build frame for all platforms, but there will still be problems, such as package interfaces not being distinguished from other header files.

Stage 3: Streamline and consolidate build frame
-----------------------------------------------

At this stage, the build frame will be consolidated by ensuring that there is support for easily working with the code.

Copyright (c) 2000, 2016, Oracle Corporation and/or its affiliates. All rights reserved.
https://dev.mysql.com/worklog/task/?id=4739
CC-MAIN-2016-44
en
refinedweb
What’s new in Groovy 2.0? - | - - - - - - Read later My Reading List: import groovy.transform.TypeChecked void someMethod() {} @TypeChecked void test() { // compilation error: // cannot find matching method sommeeMethod() sommeeMethod() def name = "Marion" // compilation error: // the variable naaammme is undeclared println naaammme }:" }: import groovy.transform.TypeChecked @TypeChecked int method() { if (true) { // compilation error: // cannot return value of type String // on method returning type int 'String' } else { 42 } }: } }: import groovy.transform.TypeChecked import groovy.xml.MarkupBuilder // this method and its code are type checked @TypeChecked String greeting(String name) { generateMarkup(name.toUpperCase()) } // this method isn't type checked // and you can use dynamic features like the markup builder String generateMarkup(String name) { def sw =new StringWriter() new MarkupBuilder(sw).html { body { div name } } sw.toString() } assert greeting("Cédric").contains("<div>CÉDRIC</div>"). import groovy.transform.TypeChecked import groovy.xml.MarkupBuilder @TypeChecked String test(Object val) { if (val instanceof String) { // unlike Java: // return ((String)val).toUpperCase() val.toUpperCase() } else if (val instanceof Number) { // unlike Java: // return ((Number)val).intValue().multiply(2) val.intValue() * 2 } } assert test('abc') == 'ABC' assert test(123) == '246': import groovy.transform.TypeChecked // inferred return type: // a list of numbers which are comparable and serializable @TypeChecked test() { // an integer and a BigDecimal return [1234, 3.14] }: import groovy.transform.TypeChecked @TypeChecked test() { def var = 123 // inferred type is int var = "123" // assign var with a String println var.toInteger() // no problem, no need to cast var = 123 println var.toUpperCase() // error, var is int! }: import groovy.transform.TypeChecked @TypeChecked test() { def var = "abc" def cl = { if (new Random().nextBoolean()) var = new Date() } cl() var.toUpperCase() // compilation error! }: import groovy.transform.TypeChecked class A { void foo() {} } class B extends A { void bar() {} } @TypeChecked test() { def var = new A() def cl = { var = new B() } cl() // var is at least an instance of A // so we are allowed to call method foo() var.foo() }: import groovy.transform.CompileStatic @CompileStatic int squarePlusOne(int num) { num * num + 1 } assert squarePlusOne(3) == 10: int x = 0b10101111 assert x == 175 byte aByte = 0b00100001 assert aByte == 33 int anInt = 0b1010000101000101 assert anInt == 41285: try { /* ... */ } catch(IOException | NullPointerException e) { /* one block to handle 2 exceptions */ }: ... <taskdef name="groovyc" classname="org.codehaus.groovy.ant.Groovyc" classpathref="cp"/> ... <groovyc srcdir="${srcDir}" destdir="${destDir}" indy="true"> <classpath> ... 
</classpath> </groovyc> ...

The same optimization options can also be set programmatically, for instance when embedding Groovy, through CompilerConfiguration:

CompilerConfiguration config = new CompilerConfiguration();
config.getOptimizationOptions().put("indy", true);
config.getOptimizationOptions().put("int", false);
GroovyShell shell = new GroovyShell(config);

Groovy 2.0 also introduces extension modules, which let you contribute new instance and static methods to existing classes. For example, to add a new static method to Random to get a random integer between two values, you could proceed as in this class:

package com.acme

class MyStaticExtension {
    static int between(Random selfType, int start, int end) {
        new Random().nextInt(end - start + 1) + start
    }
}

That way, you are able to use that extension method as follows:

Random.between(3, 4)

The extension module is declared through a descriptor file (META-INF/services/org.codehaus.groovy.runtime.ExtensionModule) along these lines:

moduleName = MyExtension
moduleVersion = 1.0
extensionClasses = com.acme.MyExtension
staticExtensionClasses = com.acme.MyStaticExtension

About the Author

As Head of Groovy Development for SpringSource, a division of VMware, Guillaume Laforge is the official Groovy Project Manager, leading the Groovy dynamic language project at Codehaus. He is also one of the founding members of the French Java/OSS/IT podcast LesCastCodeurs.
https://www.infoq.com/articles/new-groovy-20/
CC-MAIN-2016-44
en
refinedweb
At some point of building your project you may wish to allow users to store some extra data. If you use Django‘s built-in authentication system you are in luck - much of the code have been already written and you only have to: If you are not familiar with Django‘s authentication framework, please refer to Django‘s documentation on the topic. For convenience, richtemplates comes with basic UserProfile class. You may use it directly by adding following line in your settings: This model provides only most basic fields: However, more probably you would like to extend this class - simply follow guidelines described at Subclassing profile class. If needed (well, most probably it is needed in one’s project) UserProfile may be easy subclassed. Let’s say we have main application where we define our user profile model and app label is core. We need to add address field at profile model. Models code could look as follows: from django.db import models from richtemplates.models import UserProfile as RichUserProfile class UserProfile(RichUserProfile): address = models.CharField(max_length=128, null=True, blank=True) Then, at settings file of our project we need to point at this class: AUTH_PROFILE_MODULE = 'core.UserProfile' If we create pluggable application and want to make our user profile class abstract until AUTH_PROFILE_MODULE is pointed at our model, we can add simple check within Meta class of our model: from django.conf import settings from django.db import models from richtemplates.models import UserProfile as RichUserProfile class UserProfile(RichUserProfile): address = models.CharField(max_length=128, null=True, blank=True) class Meta: abstract = getattr(settings, 'AUTH_PROFILE_MODULE', '') != \ 'core.UserProfile'
https://pythonhosted.org/django-richtemplates/userprofiles.html
CC-MAIN-2016-44
en
refinedweb
Like street numbers for a house, the Internet was originally designed so that all network devices could be directly addressed. Every connected device was given at least one unique identifier, or IP address, which could be used to route network packets to and from the device. For a while this worked well and devices had end-to-end connectivity. IPv4 addresses can be used to route to about 4 billion (232) devices, but the rapid growth of the Internet quickly exhausted that available real estate. In the late 1980’s, several methods were developed to conserve this rapidly dwindling address space. Network isolation became a key strategy in this effort, and it had a beneficial side-effect – increased security. If an attacker was unable to address a device to directly establish connectivity, then attacking it was more difficult. A common means for achieving this network isolation in the home is through an IPv4 router which supports Network Address Translation (NAT). These devices do some useful things for you: The following diagram illustrates this kind of deployment for two private networks both connected to the Internet through a NAT. With these features also come limitations. Note how both networks share the same private address space, both routers have the same 192.168.1.1 private IP address, and two different computers each have the same 192.168.1.20 private IP address. Devices on each private network can communicate within their own network, but how can they communicate with each other over the Internet? Internet packets are routed using public addresses, and in the scenario above, each network only has one public IPv4 address. That means all packets go to the home router and it figures out whether to send them on to a device on the home network. Here are 3 common strategies for achieving this routing between public and private networks: The term NAT traversal refers to the ability for client devices to address and communicate with listening devices behind a NAT. This turns out to be an incredibly useful thing to do for games, peer-to-peer, and a variety of other applications. Because of the security boundary offered by NAT configurations, a key tenant for any traversal technology is to continue to maintain that security boundary for existing applications and services. The technologies discussed in this article adhere to that tenant. For a detailed discussion of this topic, you may refer to this informational on Security Concerns With IP Tunneling. Generally these strategies result in a decent connectivity story, though complicated, and this carries over into application development. In NAT situations, devices don’t really have end-to-end addressability by default, and the port begins to play an overdeveloped role in routing to your home devices to compensate for private addresses not being publicly reachable. If your application only makes outgoing connections, the NAT solution will generally handle this transparently for you. The complications occur in applications that not only make but also receive connections – again, very common with peer-to-peer networks and video games. If your application listens on a particular port for incoming connections and relies strictly on IPv4, you might ordinarily need to resort to one of the following techniques: A key point and drawback to keep in mind is that there is a 1:1 mapping between the Internet facing port on the router and the private IP and port pair of your local device. 
If you want to run a web server from behind your NAT, you might allocate port 80 on the NAT and have it direct traffic to port 80 on Computer 1. If you wanted Computer 2 on the same network to also host a website available on the Internet, a different port on the NAT would need to be allocated, such as port 81 since port 80 is already in use and mapped to Computer 1. Visitors would then need to use the router’s public IP address combined with a distinct port to reach the appropriate web server. This is not ideal since web visitors may not be accustomed to having to specify a specific, non-default port. Contention for well-known ports demonstrates just one of the problems that may be encountered when relying on port mapping instead of an IP address with enough specificity to more fully handle addressing. With that in mind, we’ve made many of these complexities transparent for you in the .NET Framework 4.0. If you’re using TcpListener or UdpClient, just pass into the constructor IPAddress.IPv6Any, then call AllowNatTraversal with a value of true. var listener = new TcpListener(IPAddress.IPv6Any, 8000); listener.AllowNatTraversal(true); listener.Start(); You’ll notice we have been discussing IPv4, but the example above mentions IPv6. Have no fear, this will work over intermediate IPv4 networks, it just relies on the origin or destination endpoints supporting IPv6 and a couple key technologies which I’ll cover in more detail below. If your application must run on a PC which only has IPv4 installed, a condition that is becoming more rare, then unfortunately this new solution won’t help you right now. Another point worth mentioning is that applications which wish to listen on all IP addresses today have to set up two listeners, one for IPv4 using IPAddress.Any and one using IPAddress.IPv6Any. We’re investigating ways to take advantage of “dual mode” sockets so you won’t need to worry about tying your application to a specific IP version in the future. What if all your network devices had a public IP address which could be used by other devices to directly communicate with them? This would eliminate the need to use NAT for addressing and once again achieve end-to-end connectivity. With IPv6, this is possible. IPv6 has a significantly larger address space (2128 or about 3.4×1038), and with an address space this size, it is once again possible for every device connected to the Internet to be given a unique address. But, there’s a problem. Much of the world is still using IPv4, so exactly how can your application take advantage of IPv6 addresses today? Fortunately, a number of transparent solutions have been devised and implemented on platforms and devices so that your application doesn’t need to worry with the specifics so long as it listens on all available IP addresses. Although applications generally don’t need to worry about the underlying mechanism used to allocate an IPv6 address, for context, one such solution is 6to4 tunneling. This solution works great for devices that already have a public IPv4 address. A special range of IPv6 maps to the IPv4 address space, and so with a 6to4 tunnel gateway deployed at the edge, IPv6 connectivity can be automatically enabled. Windows supports 6to4, so if your computer already has a public IPv4 address, or you take your laptop to a hotspot that assigns it one, it probably also has a public IPv6 address through the 6to4 pseudo adapter. Of course, in the NAT scenario we’ve been discussing, only the home router has a public IPv4 address. 
So, to take advantage of 6to4, your router would need to be IPv6 compatible, it would need to be able to assign IPv6 addresses to the local network, and and it would need to have 6to4 tunneling built into it. Presently, most home routers don’t support this, though it is something a Windows Server can be configured to do as can standard Windows versions supporting Internet Connection Sharing (ICS). So if a home router holds the one and only public IPv4 address, and 6to4 isn’t an option, how can computers behind the NAT use IPv6? Like 6to4, Teredo is another IPv6 transition technology which brings IPv6 connectivity to IPv4 networks. It is described by RFC 4380, is further extended by MS-TERE, and has even been implemented by the open source community as Miredo. There are a number of technical articles on exactly how Teredo encapsulates IPv6 over IPv4/UDP, which most NATs can forward, so if you’re interested in those details you can find out more from the links at the end of the article. What’s important to know here is that Windows OS versions starting with Windows XP SP2 and Windows Server 2003 provide a virtual Teredo adapter which can give you a public IPv6 address in the range 2001:0::/32. This address can be used to listen for incoming connections from the Internet and can be provided to IPv6 enabled clients that wish to connect to your listening service. Teredo and related transition technologies free your application from worrying about how to address a computer behind a home router or NAT since typically all you need to do is connect to it using its IPv6 addresses. We’re making some additions in the .NET Framework 4.0 starting with Beta 2 to make these great technologies easier for you to use. The .NET Framework client APIs already support address-based NAT traversal, so the main updates are for listeners. The TcpListener and UdpClient changes were previously mentioned. Use these new methods to toggle whether your application explicitly wants to allow or restrict NAT traversal support. public class TcpListener { public void AllowNatTraversal(bool); } public class UdpClient { public void AllowNatTraversal(bool); } If you’re using sockets directly, the interface is a little closer to what Winsock exposes via the IPV6_PROTECTION_LEVEL socket option. We have exposed this on the Socket class as a new method called SetIPProtectionLevel. public class Socket { public void SetIPProtectionLevel (IPProtectionLevel); } public enum IPProtectionLevel { Unspecified = –1, // platform default Unrestricted = 10, // global with NAT traversal EdgeRestricted = 20, // global without NAT traversal Restricted = 30, // site local } Set the IPProtectionLevel to Unrestricted prior to Bind to allow clients to connect to your listener deployed behind a NAT. This is what the System.Net listener implementations do when you invoke the previously mentioned AllowNatTraversal method. EdgeRestricted allows clients to only connect to your listener on IP addresses that aren’t used for NAT traversal (like Teredo). Restricted only allows intranet connectivity (site and local link). The actual default setting is Unspecified and left to the underlying platform to determine. Starting with Windows Vista, this is equivalent to Unrestricted when the Windows Firewall is enabled and an appropriate rule is configured per the instructions below and EdgeRestricted when it is disabled. This honors that security point mentioned at the beginning of the article. Many IPv4 networks rely on NAT as a limited form of protection. 
The default setting protects applications from unintentionally exposing themselves to the Internet in NAT scenarios. Instead, applications must explicitly opt-in to NAT traversal using this socket option or by configuring a Windows Firewall rule. To enable you to easily turn these features on for your existing applications, you can also control this through app.config. <system.net> <settings> <!-- default is platform defined (Unspecified) –> <socket ipProtectionLevel="Unrestricted | EdgeRestricted | Restricted | Unspecified"/> </settings> </system.net> This setting will affect all listening sockets in an AppDomain. It’s not recommended to implement behavior that relies on direct knowledge of IP addresses since this would typically be handled through name resolution, but in some cases, like building peer to peer applications or your own discovery service, it can be useful. To get the list of addresses on a host, you could do it the traditional way using System.Net.NetworkInformation to enumerate NetworkInterfaces and their addresses, but we’re adding some new methods which make this simpler and also “wake up” Teredo if it hasn’t been used recently. public class IPGlobalProperties { public UnicastIPAddressInformationCollection GetUnicastAddresses(); public IAsyncResult BeginGetUnicastAddresses (AsyncCallback, object); public UnicastIPAddressInformationCollection EndGetUnicastAddresses(IAsyncResult); } This allows you to enumerate addresses as follows. var addressInfoCollection = IPGlobalProperties.GetIPGlobalProperties() .GetUnicastAddresses(); foreach(var addressInfo in addressInfoCollection) { Console.WriteLine("Address: {0}", addressInfo.Address); } Once you have acquired this list of addresses, you can give the address list to your clients out of band and they can use Socket.Connect, TcpClient.Connect, or even WebRequest.Create to establish a connection to your service. For a savvy way to publish your addresses so others can discover them, check out our Peer Name Resolution Protocol (PNRP). Another great discovery mechanism is the Collaboration API. The traditional mechanism is of course to use DNS and the System.Net APIs which accept a host name. Finally, we’re also adding a convenient property to the IPAddress class so you can tell if an address you’re dealing with is an IPv6 Teredo address. public class IPAddress { public bool IsIPv6Teredo { get; } } This property is primarily intended to be used for debugging and test scenarios since you will typically want to listen on all available IP addresses so your application can automatically take advantage of new platform enhancements. Right now, client support works transparently the whole way up the transport stack, and for listeners we support these new features with Sockets, TcpListener, and UdpClient. We hope to extend this to HttpListener in the future. WCF supports Teredo today for TCP channels when using the NetTcpSection.TeredoEnabled and TcpTransportElement.TeredoEnabled properties. By default, the Windows Firewall blocks incoming connections. For your listener to be accessible, you will want to create a firewall rule. This is true of any listening application, not just ones that wish to take advantage of NAT traversal. You can do this programmatically, and since adding firewall rules requires UAC elevation, application installation is the best time to do this. Even though the rule can be added at install time, it can be configured to activate only while the application is running. The firewall can be controlled using COM interop. 
Add a project reference to FirewallApi.dll. You can then add a rule with the following code. Guid netFwRuleUuid = new Guid("{2C5BC43E-3369-4C33-AB0C-BE9469677AF4}"); INetFwRule rule = (INetFwRule)Activator.CreateInstance(Type.GetTypeFromCLSID(netFwRuleUuid)); rule.Action = NET_FW_ACTION_.NET_FW_ACTION_ALLOW; rule.ApplicationName = @"C:\Program Files\My Application\MyApplication.exe"; rule.Description = "My Rule Description"; rule.Direction = NET_FW_RULE_DIRECTION_.NET_FW_RULE_DIR_IN; rule.EdgeTraversal = true; rule.Enabled = true; rule.Grouping = "My Rule Group"; rule.Name = "My Rule Name"; rule.Protocol = (int)ProtocolType.Tcp; Guid netFwPolicy2Uuid = new Guid("{E2B3C97F-6AE1-41AC-817A-F6F92166D7DD}"); INetFwPolicy2 policy = (INetFwPolicy2)Activator.CreateInstance(Type.GetTypeFromCLSID(netFwPolicy2Uuid)); policy.Rules.Add(rule); Note that to enable NAT traversal, the EdgeTraversal flag must be set to true since a firewall rule with a setting of false (the default) is used to prevent NAT traversal even when an application sets the IPProtectionLevel for its sockets to Unrestricted. Starting with Windows 7, although you can still use the approach above, there is more control over the default NAT traversal behavior using the InetFwRule2 interface and NET_FW_EDGE_TRAVERSAL_TYPE. Third party firewalls may require custom configuration or may prompt the user for permission on first application launch. With IPv6 and its related transition technologies becoming ubiquitous, now is a great time to take advantage of these capabilities in your applications, and the .NET Framework 4.0 makes it easy to get started. Microsoft IPv6 Website IPv6 Transition Technologies TechNet Teredo Overview TechNet Using IPv6 and Teredo MSDN Teredo Site Firewall requirements for coexisting with Teredo Security Concerns With IP Tunneling Special thanks to Dave Thaler for his insights and expert feedback. ~ Aaron Oneal | NCL Program Manager
http://blogs.msdn.com/b/ncl/archive/2009/07/27/end-to-end-connectivity-with-nat-traversal-.aspx
CC-MAIN-2014-23
en
refinedweb
DBIx::Class::Manual::Cookbook - Miscellaneous recipes%', '%Fear of Fours%'. Other queries might require slightly more complex logic: my @albums = $schema->resultset('Album')->search({ -or => [ -and => [ artist => { 'like', '%Smashing Pumpkins%' }, title => 'Siamese Dream', ], artist => 'Starchildren', ], }); This results in the following WHERE clause: WHERE ( artist LIKE '%Smashing Pumpkins%' AND title = 'Siamese Dream' ) OR artist = 'Starchildren' For more information on generating complex queries, see "WHERE CLAUSES" in SQL::Abstract.: my $top_cd = $cd_rs->search({}, { order_by => 'rating' })->single; my $top_cd = $cd_rs->search ({}, { order_by => 'rating', rows => 1 })->single; Sometimes you have to run arbitrary SQL because your query is too complex (e.g. it contains Unions, Sub-Selects, Stored Procedures, etc.) or has to be optimized for your database in a special way, but you still want to get the results as a DBIx::Class::ResultSet. This is accomplished by defining a ResultSource::View for your query, almost like you would define a regular ResultSource. package My::Schema::Result::UserFriendsComplex; use strict; use warnings; use base qw/DBIx::Class::Core/; __PACKAGE__->table_class('DBIx::Class::ResultSource::View'); # For the time being this is necessary even for virtual views __PACKAGE__->table($view_name); # # - will exclude this "table": sub sqlt_deploy_hook { $_[1]->schema->drop_table ($_[1]) } When you only want specific columns from a table, you can use columns to specify which ones you need. This is useful to avoid loading columns with large amounts of data that you aren't about to use anyway: my $rs = $schema->resultset('Artist')->search( undef, { columns => [qw/ name /] } ); # Equivalent SQL: # SELECT artist.name FROM artist This is a shortcut for select and as, see below. columns cannot be used together with select and as. The combination of select and as can be used to return the result of a database function or stored procedure as a column value. You use select to specify the source for your column value (e.g. a column name, function, or stored procedure name). You then use as to set the column name you will use to access the returned value: my $rs = $schema->resultset('Artist')->search( {}, { select => [ 'name', { LENGTH => 'name' } ], as => [qw/ name name_length /], } ); # Equivalent SQL: # SELECT name name, LENGTH( name ) # FROM artist Note that the as attribute has absolutely nothing to If your alias exists as a column in your base class (i.e. it was added with add_columns), you just access it as normal. Our Artist class has a name column, so we just use the name accessor: my $artist = $rs->first(); my $name = $artist->name(); If on the other hand the alias does not correspond to an existing column, you have to fetch the value using the get_column accessor: my $name_length = $artist->get_column('name_length'); If you don't like using get_column, you can always create an accessor for any of your aliases using either of these: # Define accessor manually: sub name_length { shift->get_column('name_length'); } # Or use DBIx::Class::AccessorGroup: __PACKAGE__->mk_group_accessors('column' => 'name_length'); See also "Using SQL functions on the left hand side of a comparison". 
my $rs = $schema->resultset('Artist')->search( {}, { columns => [ qw/name/ ], distinct => 1 } ); my $rs = $schema->resultset('Artist')->search( {}, { columns => [ qw/name/ ], group_by => [ qw/name/ ], } ); my $count = $rs->count; # Equivalent SQL: # SELECT COUNT( * ) FROM (SELECT me.name FROM artist me GROUP BY me.name) me: }); Subqueries are supported in the where clause (first hashref), and in the from, select, and +select attributes. my $cdrs = $schema->resultset('CD'); my $rs = $cdrs->search({ year => { '=' => $cdrs->search( { artist_id => { -ident => 'me.artist_id' } }, { alias => 'sub_query' } )->get_column('year')->max_rs->as_query, }, }); That creates the following SQL: SELECT me.cdid, me.artist, me.title, me.year, me.genreid, me.single_track FROM cd me WHERE year = ( SELECT MAX(sub_query.year) FROM cd sub_query WHERE artist_id = me.artist_id ) by resorting to literal SQL: $rs->search( \[ 'YEAR(date_of_birth) = ?', 1979 ] ); # Equivalent SQL: # SELECT * FROM employee WHERE YEAR(date_of_birth) = ? To include the function as part of a larger search, use the '-and' keyword to collect the search conditions: $rs->search({ -and => [ name => 'Bob', \[ 'YEAR(date_of_birth) = ?', 1979 ] ]}); # Equivalent SQL: # SELECT * FROM employee WHERE name = ? AND YEAR(date_of_birth) = ? Note: the syntax for specifying the bind value's datatype and value is explained in "DBIC BIND VALUES" in DBIx::Class::ResultSet. See also "Literal SQL with placeholders and bind values (subqueries)" in SQL::Abstract. When your RDBMS does not have a working SQL limit mechanism (e.g. Sybase ASE) and GenericSubQ is either too slow or does not work at all, you can try the software_limit DBIx::Class::ResultSet attribute, which skips over records to simulate limits in the Perl layer. For example: my $paged_rs = $rs->search({}, { rows => 25, page => 3, order_by => [ 'me.last_name' ], software_limit => 1, }); You can set it as a default for your schema by placing the following in your Schema.pm: __PACKAGE__->default_resultset_attributes({ software_limit => 1 }); WARNING: If you are dealing with large resultsets and your DBI or ODBC/ADO driver does not have proper cursor support (i.e. it loads the whole resultset into memory) then this feature will be extremely slow and use huge amounts of memory at best, and may cause your process to run out of memory and cause instability on your server at worst, beware!.name /] } ); # Equivalent SQL: # SELECT cd.* FROM cd # JOIN artist ON cd.artist = artist.id # WHERE artist.name = 'Bob Marley' # ORDER BY artist.name Note that the join attribute should only be used when you need to search or sort using columns in a related table. Joining related tables when you only need columns from the main table will make performance worse! Now let's say you want to display a list of CDs, each with the name of the artist. The following will work fine: while (my $cd = $rs->next) { print "CD: " . $cd->title . ", Artist: " . $cd->artist->name; } There is a problem however. We have searched both the cd and artist tables in our main query, but we have only returned data from the cd table. To get the artist name for any of the CD objects returned, DBIx::Class will go back to the database: SELECT artist.* FROM artist WHERE artist.id = ? A statement like the one above will run for each and every CD returned by our main query. Five CDs, five extra queries. A hundred CDs, one hundred extra queries! Thankfully, DBIx::Class has a prefetch attribute to solve this problem. 
This allows you to fetch results from related tables in advance: my $rs = $schema->resultset('CD')->search( { 'artists.name' => 'Bob Marley' }, { join => 'artists', order_by => [qw/ artists.name /], prefetch => 'artists' # return artist data too! } ); # Equivalent SQL (note SELECT from both "cd" and "artist"): # SELECT cd.*, artist.* FROM cd # JOIN artist ON cd.artist = artist.id # WHERE artist.name = 'Bob Marley' # ORDER BY artist.name The code to print the CD list remains the same: while (my $cd = $rs->next) { print "CD: " . $cd->title . ", Artist: " . $cd->artist->name; } DBIx::Class has now prefetched all matching data from the artist table, so no additional SQL statements are executed. You now have a much more efficient query. Also note that prefetch should only be used when you know you will definitely use data from a related table. Pre-fetching related tables when you only need columns from the main table will make performance worse!' AND liner_notes.notes LIKE '%some text%' # ORDER BY artist.name Sometimes you want to join more than one relationship deep. In this example, we want to find all Artist objects who have CDs whose LinerNotes contain a specific string: # Relationships defined elsewhere: # Artist->has_many('cds' => 'CD', 'artist'); # CD->has_one('liner_notes' => 'LinerNotes', 'cd'); my $rs = $schema->resultset('Artist')->search( { 'liner_notes.notes' => { 'like', '%some text%' }, }, { join => { 'cds' => 'liner_notes' } } ); # Equivalent SQL: # SELECT artist.* FROM artist # LEFT JOIN cd ON artist.id = cd.artist # LEFT JOIN liner_notes ON cd.id = liner_notes.cd # WHERE liner_notes.notes LIKE '%some text%' Joins can be nested to an arbitrary level. So if we decide later that we want to reduce the number of Artists returned based on who wrote the liner notes: # Relationship defined elsewhere: # LinerNotes->belongs_to('author' => 'Person'); my $rs = $schema->resultset('Artist')->search( { 'liner_notes.notes' => { 'like', '%some text%' }, 'author.name' => 'A. Writer' }, { join => { 'cds' => { 'liner_notes' => 'author' } } } ); # Equivalent SQL: # SELECT artist.* FROM artist # LEFT JOIN cd ON artist.id = cd.artist # LEFT JOIN liner_notes ON cd.id = liner_notes.cd # LEFT JOIN author ON author.id = liner_notes.author # WHERE liner_notes.notes LIKE '%some text%' # AND author.name = 'A. Writer'; It is possible to get a Schema object from a result object like so: my $schema = $cd->result_source->schema; # use the schema as normal: my $artist_rs = $schema->resultset('Artist'); This can be useful when you don't want to pass around a Schema object to every method. AKA getting last_insert_id Thanks to the core component PK::Auto, this is straightforward: my $foo = $rs->create(\%blah); # do more stuff my $id = $foo->id; # foo->my_primary_key_field will also work. If you are not using autoincrementing primary keys, this will probably not work, but then you already know the value of the last primary key anyway.; Suppose we have two tables: Product and Category. The table specifications are: Product(id, Description, category) Category(id, Description) category is a foreign key into the Category table. If you have a Product object $obj and write something like print $obj->category things will not work as expected. 
To obtain, for example, the category description, you should add this method to the class defining the Category table: use overload "" => sub { my $self = shift; return $self->Description; }, fallback => 1; Just use find_or_new instead, then check in_storage: my $obj = $rs->find_or_new({ blah => 'blarg' }); unless ($obj->in_storage) { $obj->insert; # do whatever else you wanted if it was a new row }; AKA multi-class object inflation from one table DBIx::Class classes are proxy classes, therefore some different techniques need to be employed for more than basic subclassing. In this example we have a single user table that carries a boolean bit for admin. We would like like to give the admin users objects (DBIx::Class::Row) the same methods as a regular user but also special admin only methods. It doesn't make sense to create two separate proxy-class files for this. We would be copying all the user methods into the Admin class. There is a cleaner way to accomplish this. Overriding the inflate_result method within the User proxy-class gives us the effect we want. This method is called by DBIx::Class::ResultSet when inflating a result from storage. So we grab the object being returned, inspect the values we are looking for, bless it if it's an admin object, and then return it. See the example below: Schema Definition package'; __PACKAGE__->table('users'); __PACKAGE__->add_columns(qw/user_id email password firstname lastname active admin/); __PACKAGE__->set_primary_key('user_id'); sub inflate_result { my $self = shift; my $ret = $self->next::method(@_); if( $ret->admin ) {### If this is an admin, rebless for extra functions $self->ensure_class_loaded( $admin_class ); bless $ret, $admin_class; } return $ret; } sub hello { print "I am a regular user.\n"; return ; }::Schema->connection('dbi:Pg:dbname=test'); $schema->resultset('User')->create( $user_data ); $schema->resultset('User')->create( $admin_data ); ### Now we search for them my $user = $schema->resultset('User')->single( $user_data ); my $admin = $schema->resultset('User')->single( $admin_data ); print ref $user, "\n"; print ref $admin, "\n"; print $user->password , "\n"; # pass1 print $admin->password , "\n";# pass2; inherited from User print $user->hello , "\n";# I am a regular user. print $admin->hello, "\n";# I am an admin. ### The statement below will NOT print print "I can do admin stuff\n" if $user->can('do_admin_stuff'); ### The statement below will print print "I can do admin stuff\n" if $admin->can('do_admin_stuff'); Alternatively you can use DBIx::Class::DynamicSubclass that implements exactly the above functionality. DBIx::Class is not built for speed, it's built for convenience and ease of use, but sometimes you just need to get the data, and skip the fancy objects. To do this simply use DBIx::Class::ResultClass::HashRefInflator. my $rs = $schema->resultset('CD'); $rs->result_class('DBIx::Class::ResultClass::HashRefInflator'); my $hash_ref = $rs->find(1); Wasn't that easy? Beware, changing the Result class using "result_class" in DBIx::Class::ResultSet will replace any existing class completely including any special components loaded using load_components, eg DBIx::Class::InflateColumn::DateTime.). 
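As a small illustration of what the switch buys you, rows fetched through HashRefInflator come back as plain hash references rather than row objects (a sketch, reusing the CD resultset from above):

    $rs->result_class('DBIx::Class::ResultClass::HashRefInflator');
    while ( my $row = $rs->next ) {
        print $row->{title};   # plain hashref access; no accessor methods, no inflation
    }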
To get the DBIx::Class::Schema object from a ResultSet, do the following: $rs->result_source->schema AKA Aggregating Data If you want to find the sum of a particular column there are several ways, the obvious one is to use search: my $rs = $schema->resultset('Items')->search( {}, { select => [ { sum => 'Cost' } ], as => [ 'total_cost' ], # remember this 'as' is for DBIx::Class::ResultSet not SQL } ); my $tc = $rs->first->get_column('total_cost'); Or, you can use the DBIx::Class::ResultSetColumn, which gets returned when you ask the ResultSet for a column using get_column: my $cost = $schema->resultset('Items')->get_column('Cost'); my $tc = $cost->sum; With this you can also do: my $minvalue = $cost->min; my $maxvalue = $cost->max; Or just iterate through the values of this column only: while ( my $c = $cost->next ) { print $c; } foreach my $c ($cost->all) { print $c; } ResultSetColumn only has a limited number of built-in functions. If you need one that it doesn't have, then you can use the func method instead: my $avg = $cost->func('AVERAGE'); This will cause the following SQL statement to be run: SELECT AVERAGE(Cost) FROM Items me Which will of course only work if your database supports this function. See DBIx::Class::ResultSetColumn for more documentation. Sometimes you have a (set of) result; my $author = $book->create_related('author', { name => 'Fred'}); Only searches for books named 'Titanic' by the author in $author. my $books_rs = $author->search_related('books', { name => 'Titanic' }); Deletes only the book named Titanic by the author in $author. $author->delete_related('books', { name => 'Titanic' }); If you always want a relation to be ordered, you can specify this when you create the relationship. To order $book->pages by descending page_number, create the relation as follows: __PACKAGE__->has_many('pages' => 'Page', 'book', { order_by => { -desc => 'page_number'} } ); If you want to get a filtered result set, you can just add add to $attr as follows: __PACKAGE__->has_many('pages' => 'Page', 'book', { where => { scrap => 0 } } );'; __PACKAGE__->table('address'); __PACKAGE__->add_columns(qw/id street town area_code country/); __PACKAGE__->set_primary_key('id'); __PACKAGE__->has_many('user_address' => 'My::UserAddress', 'address'); __PACKAGE__->many_to_many('users' => 'user_address', 'user'); $rs = $user->addresses(); # get all addresses for a user $rs = $address->users(); # get all users for an address my $address = $user->add_to_addresses( # returns a My::Address instance, # NOT a My::UserAddress instance! { country => 'United Kingdom', area_code => 'XYZ', town => 'London', street => 'Sesame', } );. As of version 0.04001, there is improved transaction support in DBIx::Class::Storage and DBIx::Class::Schema. Here is an example of the recommended way to use it: my $genus = $schema->resultset('Genus')->find(12); my $coderef2 = sub { $genus->extinct(1); $genus->update; }; my $coderef1 = sub { $genus->add_to_species({ name => 'troglodyte' }); $genus->wings(2); $genus->update; $schema->txn_do($coderef2); # Can have a nested transaction. Only the outer will actualy commit return $genus->species; }; use Try::Tiny; my $rs; try { $rs = $schema->txn_do($coderef1); } catch { # try block succeeds. use Try::Tiny; my $exception; try { $schema->txn_do(sub { # SQL: BEGIN WORK; my $job = $schema->resultset('Job')->create({ name=> 'big job' }); # SQL: INSERT INTO job ( name) VALUES ( 'big job' ); for (1..10) { # Start a nested transaction, which in fact sets a savepoint. 
try { ); } }); } catch { #; } }); } catch { $exception = $_; }; if ($exception) { # There was an error while handling the $job. Rollback all changes # since the transaction started, including the already committed # ('released') savepoints. There will be neither a new $job nor any # $thing entry in the database. # SQL: ROLLBACK; print "ERROR: $exception try-block around txn_do fails, a rollback is issued. If the try succeeds, the transaction is committed (or the savepoint released). While you can get more fine-grained control using svp_begin, svp_release and svp_rollback, it is strongly recommended to use txn_do with coderefs.. DBIx::Class::Schema::Loader will connect to a database and create a DBIx::Class::Schema and associated sources by examining the database. The recommend way of achieving this is to use the dbicdump utility or the Catalyst helper, as described in Manual::Intro. Alternatively, use the make_schema_at method: perl -MDBIx::Class::Schema::Loader=make_schema_at,dump_to_dir:./lib \ -e 'make_schema_at("My::Schema", \ { db_schema => 'myschema', components => ["InflateColumn::DateTime"] }, \ [ "dbi:Pg:dbname=foo", "username", "password" ])' This will create a tree of files rooted at ./lib/My/Schema/ containing source definitions for all the tables found in the myschema schema in the foo database.. The following example shows simplistically how you might use DBIx::Class to deploy versioned schemas to your customers. The basic process is as follows:, and defaults to . if not supplied.]. To ensure WHERE conditions containing DateTime arguments are properly formatted to be understood by your RDBMS, you must use the DateTime formatter returned by "datetime_parser" in DBIx::Class::Storage::DBI to format any DateTime objects you pass to search conditions. Any Storage object attached to your Schema provides a correct DateTime formatter, so all you have to do is: my $dtf = $schema->storage->datetime_parser; my $rs = $schema->resultset('users')->search( { signup_date => { -between => [ $dtf->format_datetime($dt_start), $dtf->format_datetime($dt_end), ], } }, ); Without doing this the query will contain the simple stringification of the DateTime object, which almost never matches the RDBMS expectations. This kludge is necessary only for conditions passed to "search" in DBIx::Class::ResultSet, whereas create, find, "update" in DBIx::Class::Row (but not "update" in DBIx::Class::ResultSet) are all DBIx::Class::InflateColumn-aware and will do the right thing when supplied an inflated DateTime object.. Information about Oracle support for unicode can be found in "Unicode" in DBD::Oracle.} ); You want to start using the schema-based approach to DBIx::Class (see "Setting it up manually" in DBIx::Class::Manual::Intro), but have an established class-based setup with lots of existing classes that you don't want to move by hand. Try this nifty script instead: use MyDB; use SQL::Translator; my $schema = MyDB->schema_instance; my $translator = SQL::Translator->new( debug => $debug || 0, trace => $trace || 0, no_comments => $no_comments || 0, show_warnings => $show_warnings || 0, add_drop_table => $add_drop_table || 0, validate => $validate || 0, parser_args => { 'DBIx::Schema' => $schema, }, producer_args => { 'prefix' => 'My::Schema', }, ); $translator->parser('SQL::Translator::Parser::DBIx::Class'); $translator->producer('SQL::Translator::Producer::DBIx::Class::File'); my $output = $translator->translate(@args) or die "Error: " . 
$translator->error; print $output; You could use Module::Find to search for all subclasses in the MyDB::* namespace, which is currently left as an exercise for the reader.. It's as simple as overriding the new method. Note the use of next::method. sub new { my ( $class, $attrs ) = @_; $attrs->{foo} = 'bar' unless defined $attrs->{foo}; my $new = $class->next::method($attrs); return $new; } For more information about next::method, look in the Class::C3 documentation. See also DBIx::Class::Manual::Component for more ways to write your own base classes to do this. People looking for ways to do "triggers" with DBIx::Class are probably just looking for this. For example, say that you have three columns, id, number, and squared. You would like to make changes to number and have squared be automagically set to the value of number squared. You can accomplish this by wrapping the number accessor with the around method modifier, available through either Class::Method::Modifiers, Moose or Moose-like modules): around number => sub { my ($orig, $self) = (shift, shift); if (@_) { my $value = $_[0]; $self->squared( $value * $value ); } $self->$orig(@_); }; Note that the hard work is done by the call to $self->$orig, which redispatches your call to store_column in the superclass(es). Generally, if this is a calculation your database can easily do, try and avoid storing the calculated value, it is safer to calculate when needed, than rely on the data being in sync. } Problem: Say you have a table "Camera" and want to associate a description with each camera. For most cameras, you'll be able to generate the description from the other columns. However, in a few special cases you may want to associate a custom description with a camera. Solution: In your database schema, define a description field in the "Camera" table that can contain text and null values. In DBIC, we'll overload the column accessor to provide a sane default if no custom description is defined. The accessor will either return or generate the description, depending on whether the field is null or not. First, in your "Camera" schema class, define the description field as follows: __PACKAGE__->add_columns(description => { accessor => '_description' }); Next, we'll define the accessor-wrapper subroutine: sub description { my $self = shift; # If there is an update to the column, we'll let the original accessor # deal with it. return $self->_description(@_) if @_; # Fetch the column value. my $description = $self->_description; # If there's something in the description field, then just return that. return $description if defined $description && length $descripton; # Otherwise, generate a description. return $self->generate_description; } Data::Dumper can be a very useful tool for debugging, but sometimes it can be hard to find the pertinent data in all the data it can generate. Specifically, if one naively tries to use it like so, use Data::Dumper; my $cd = $schema->resultset('CD')->find(1); print Dumper($cd); several pages worth of data from the CD object's schema and result source will be dumped to the screen. Since usually one is only interested in a few column values of the object, this is not very helpful. Luckily, it is possible to modify the data before Data::Dumper outputs it. Simply define a hook that Data::Dumper will call on the object before dumping it. For example, package My::DB::CD; sub _dumper_hook { $_[0] = bless { %{ $_[0] }, result_source => undef, }, ref($_[0]); } [...] 
use Data::Dumper; local $Data::Dumper::Freezer = '_dumper_hook'; my $cd = $schema->resultset('CD')->find(1); print Dumper($cd); # dumps $cd without its ResultSource If the structure of your schema is such that there is a common base class for all your table classes, simply put a method similar to _dumper_hook in the base class and set $Data::Dumper::Freezer to its name and Data::Dumper will automagically clean up your data before printing it. See "EXAMPLES" in Data::Dumper for more information. When you enable DBIx::Class::Storage's debugging it prints the SQL executed as well as notifications of query completion and transaction begin/commit. If you'd like to profile the SQL you can subclass the DBIx::Class::Storage::Statistics class and write your own profiling mechanism: package My::Profiler; use strict; use base 'DBIx::Class::Storage::Statistics'; use Time::HiRes qw(time); my $start; sub query_start { my $self = shift(); my $sql = shift(); my @params = @_; $self->print("Executing $sql: ".join(', ', @params)."\n"); $start = time(); } sub query_end { my $self = shift(); my $sql = shift(); my @params = @_; my $elapsed = sprintf("%0.4f", time() - $start); $self->print("Execution took $elapsed seconds.\n"); $start = undef; } 1; You can then install that class as the debugging object: __PACKAGE__->storage->debugobj(new My::Profiler()); __PACKAGE__->storage->debug(1); A more complicated example might involve storing each execution of SQL in an array: sub query_end { my $self = shift(); my $sql = shift(); my @params = @_; my $elapsed = time() - $start; push(@{ $calls{$sql} }, { params => \@params, elapsed => $elapsed }); } You could then create average, high and low execution times for an SQL statement and dig down to see if certain parameters cause aberrant behavior. You might want to check out DBIx::Class::QueryLog as well. When inserting many rows, for best results, populate a large number of rows at a time, but not so large that the table is locked for an unacceptably long time. If using create instead, use a transaction and commit every X rows; where X gives you the best performance without locking the table for too long. }, });
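For illustration, a minimal sketch of the batched populate() approach described above (the result source and column names are invented):

    my @rows = map { { name => "Artist $_" } } 1 .. 10_000;

    while ( my @chunk = splice(@rows, 0, 500) ) {
        # populate() called in void context takes the fast bulk-insert path
        $schema->resultset('Artist')->populate(\@chunk);
    }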
http://search.cpan.org/~ribasushi/DBIx-Class-0.08250/lib/DBIx/Class/Manual/Cookbook.pod
CC-MAIN-2014-23
en
refinedweb
Code. Collaborate. Organize. No Limits. Try it Today. Malicious code writers have many attack vectors. Here, I will introduce a JS class which dissects an encoded JavaScript. I will show you a real life example on a script that tries to hide its actions by using some very common techniques and how to bypass them to uncover the true intent of the code. JavaScript is a very flexible OO scripting language which is mostly known for its capability to run inside browsers and manipulate the web pages on the client side. For more information see the Wikipedia entries for JavaScript [1] and prototype based OO programming languages [2]. Some authors even think that JavaScript will be the scripting language of the future [3]. This article assumes that the reader knows the basic constructs of JavaScript. Most of the code hiding techniques are composed of two parts: an encrypted string and a decryptor, which un-mangles and finally evaluates the resulting piece of code. JavaScript (and most of the scripting languages) offers functions that take a string and evaluate it as a piece of code. This process is repeated several times (so the "decrypted" string may actually contain another string to be decrypted). The main goal of this article is to show you how to place hooks on these commonly used functions and to redirect them to a log window instead of execution, where the data can be conveniently interpreted. The frequently used functions in these routines are: document.write, document.writeln and eval (or the old deprecated counterpart of it – Object.prototype.eval). Below you can see a fragment of such a code: document.write document.writeln eval Object.prototype.eval <script language="javascript"> document.write(unescape('%3C%73%63%72%69%70%74%2... dF('%286FVFULSW%2853odqjxdjh%28... </script> It is clear that the first line must somehow define the function dF which is most probably the decryptor. Our goal is to hook document.write and instead of execution the output should be redirected to some log window so that we can analyze the result. (A quick alternative would be to replace document.write with alert and observe the output. However this has two drawbacks: if one wants to recreate the code she/he must type it back – as you can't copy-paste from the alert box – and the alert box limits the maximum number of characters that can be displayed, which proved be insufficient in this case). Fortunately, hooking is very easy to do. One can simply write: dF alert function someFunction() { //... } document.write = someFunction; and all the calls to document.write are now redirected to someFunction. Next we need a separate the window where the output will be dumped. This can be opened with window.open, however most probably it will be blocked by popup blockers (since the window must appear at startup time, without user intervention to record every call - even those which are made during the loading phase of the page, as are most decryption calls, since their intent is to present to the browser / user a decrypted version). So we should provide an alternative method for opening up the window, and memorize the things we would like to display until the window is open and we can dump the text there. Also, we would like to provide as little namespace pollution as possible (namespace pollution means that we define global functions / variables which may conflict with the existing ones). 
To avoid polluting the global namespace, we always declare local variables in functions and wrap the entire code in a class, the name of which can be changed easily with any editor providing search and replace functionality.

Remark: One could use Venkman [5], the very powerful JavaScript debugger for Mozilla / Firefox. However, this wonderful system doesn't perform really well with self-modifying code (after all, which normal programmer would write such code?!)

The code is contained entirely in the file "jsdebug.js". It has three big parts: the declaration of the JsInterceptor class (which can be renamed if needed), the initial call which initializes the system, and the function which substitutes the default eval function (this was necessary, since eval behaves like a standalone function).

General notes: The implementation was done on Firefox 1.0.6 (the latest stable release while writing this document) and while I've tried to be cross-browser compatible, I've never tested on other browsers. Also, if you are going to analyze hostile code, I recommend Firefox since it is a very secure browser and the flaws are patched up very rapidly (and through the automated notify system you get to know about it very fast).

During initialization of the system the following things are accomplished:
- JsInterceptor.SetupWindowOpener
- toString()

Now, I will provide a short description of every method and any important implementation quirk it might contain:
- CopyToClipboard
- InterceptorWriteLog
- NewDocumentWrite
- NewDocumentWriteLn
- NewEval
- AddEvent
- SetupDebugWindow
- WindowOpener
- SetupWindowOpener

One useful trick in the code is the usage of escape / unescape when constructing functions from strings. Since these functions themselves need to contain strings (delimited by single or normal quotation marks), those signs had to be eliminated. Also, one would have to eliminate the newline characters. Instead of writing a function with several replace methods one after the other, I found it easier to wrap / unwrap the strings with the above mentioned functions.
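The eval-substitution part mentioned above can be sketched roughly as follows. This is a hedged, simplified stand-in for NewEval, not the actual JsInterceptor implementation: pendingLog, flushLog and the choice to return undefined are my own assumptions.

    var pendingLog = [];                 // decrypted stages collected before the log window exists
    var originalEval = window.eval;      // kept around in case a stage ever needs to run for real

    window.eval = function (code) {
        pendingLog.push(String(code));   // record the next decryption stage for analysis
        return undefined;                // nothing is executed, so nothing hostile can fire
    };

    // Once the analyst has opened a log window, the buffered stages can be flushed:
    function flushLog(logWindow) {
        for (var i = 0; i < pendingLog.length; i++) {
            // Escaping is omitted here for brevity; a real tool should escape the markup.
            logWindow.document.write('<pre>' + pendingLog[i] + '</pre>');
        }
    }

Deferring the output like this side-steps the popup-blocker problem: nothing is shown until the analyst opens the window by hand.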
We decide that now it's safe to put it in a browser, so we edit the HTML page to include the following line in the head:

    <script type="text/javascript" language="javascript" src="jsdebug.js"></script>

And fire it up in the browser. We get back the result of the first (and only) document.write call and an error regarding the dF function (since we redirected the aforementioned function):

    function dF(s) {
        var s1=unescape(s.substr(0,s.length-1));
        var t='';
        for(i=0;i<s1.length;i++)
            t+=String.fromCharCode(s1.charCodeAt(i)-s.substr(s.length-1,1));
        document.write(unescape(t));
    }

(The original code was again in a single line; indentation was added by me.) This is a simple routine and its only interaction with the "environment" is through the document.write call, which we have hooked. We copy it back to the original source code (before a call to dF is made) and refresh the page. Now we get the result of the call to dF (since it uses document.write to display it):

    <SCRIPT language="JScript.Encode">#@~^fAAAAA==@#@&NG1Es+xDRS...

Wow, that looks strange. A little background info: in the year 2003 Microsoft created a little tool called Script Encoder [11]. It provides a very weak encryption, which can be broken very quickly and only provides protection against the casual look (it is also not compatible with any standard or other browsers!). I used Google with the query "JScript.Encode decode" and then the following website: "Decode web pages containing "jscript.encode" sections" (I'm not affiliated with this site). Through it the final result appeared as:

    document.write( '<OBJECT classid=XXXX-XXXX-XXX codebase=XXXXXX.cab></OBJECT>');

Searching Google for XXXXXX.cab resulted in the URL of the file. Downloading and unpacking it gave an install script (.inf) and a DLL. After submitting the DLL to VirusTotal [12] we concluded that this threat is detected by many antivirus engines, and now (at least in this case) we can rest.

If this technique becomes widely used, future malware authors will try to detect its presence, much the same way as (some of) the current compiled ones try to detect the presence of debuggers. The best defense against that is of course very careful inspection of the code before you run it in the browser, and its modification so that the detection code gets skipped. I will present three methods that are used to detect the presence of this system and how they can be defeated. I welcome any other suggestions on how this system can be detected and how a particular detection method can be defeated, but I would also like to stress again that the best defense is a deep inspection of the code before running it:

- (JsInterceptor != null)
- toString: document.write.toString() should return function write() { [native code] }; a spoofed toString can itself be probed with document.write.toString.toString()
- document.open() / document.write(...) / document.close()

Today's scripting malware is going through the same evolutionary path as binary viruses: from proof of concept through polymorphism and ending up with metamorphism (no example yet, but they will appear). To counter this it is necessary to study the techniques which provide the fastest and the most convenient method for analysis of the code, to be able to react quickly to new threats.

Full disclosure: I'm a junior virus analyst for (SOFTWIN), the makers of the BitDefender antivirus product. However, anything that I've written in this article must not be interpreted as an official statement of the company; it is merely a personal opinion.
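As an illustration of how the second detection method listed above might be countered, here is a hedged sketch (again my own example, not code from jsdebug.js) that makes a hooked document.write answer a toString() probe with a native-looking signature:

    // The exact text a genuine native function reports differs per browser; this
    // string is an assumption modelled on Firefox of that era, not a verified value.
    var nativeSignature = 'function write() {\n    [native code]\n}';

    function hookedWrite(text) {
        // ... log the text as in the earlier sketches ...
    }

    hookedWrite.toString = function () {
        return nativeSignature;          // document.write.toString() now looks native
    };

    document.write = hookedWrite;

    // A determined script can still call document.write.toString.toString(),
    // so the replacement toString above would need the same treatment in turn.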
http://www.codeproject.com/Articles/11721/Hostile-code-analysis-with-JavaScript?PageFlow=FixedWidth
CC-MAIN-2014-23
en
refinedweb
Example scenario

Besides ITCAM for SOA, there are other ITCAM products, such as ITCAM for Response Time, ITCAM for WebResource, and ITCAM for Transaction. However, these products monitor systems at the application level of granularity. Unlike the other products, ITCAM for SOA can monitor systems at a finer granularity than the application level. That is, ITCAM for SOA can recognize Service Component Architecture (SCA) components and messages in applications in order to monitor systems.

Let's begin with the example scenario illustrated in Figure 1 to explore ITCAM for SOA. In this example scenario there are three modules: Module 1 contains a Business Process Execution Language (BPEL) SCA component which provides a service for clients; Module 2 and Module 3 contain Java SCA components which provide computing services to the BPEL component. When clients invoke the BPEL component, BPEL invokes the Java components for computing. Assume the three modules are deployed in a heterogeneous environment, so BPEL has to invoke the computing services using various protocols. In this example, as you can see, the SCA/JAX-RPC/JMS/HTTP bindings are being utilized. In Figure 1 below, the arrows indicate where messages are being requested and responded to: a right arrow means a request message and a left arrow means a response message. The different arrow styles mean different message protocols. In the next section, we will explain "ServiceRequester" and "ServiceProvider".

Figure 1. Example scenario

How ITCAM for SOA records traffic data

While the application illustrated in Figure 1 executes, the Data Collector of ITCAM for SOA monitors the SCA or WebService messages and sends them as traffic data to the ITCAM for SOA server side. ITCAM for SOA analyzes the system based exactly on this traffic data. When traffic data arrives, ITCAM for SOA analyzes it to get information such as Service-Provider, WSDL-ServicePort and their namespaces to identify each unique transaction, then creates a record in the Services_Inventory table for each identified transaction. Although one piece of traffic data may contain both the service requester and the service provider, ITCAM for SOA only records the service provider; it will not record the service requester. That is, even though there may be many clients requesting one service, ITCAM for SOA only records the information of request messages to the service provider and response messages from the service provider; the information about the service requester is ignored. ITCAM for SOA will create two records for the service provider information: one to record request messages and the other to record response messages. Meanwhile, ITCAM for SOA will gather certain data from the traffic as metrics and update them in the corresponding transaction records in the Services_Inventory table. The important metrics for ITCAM for SOA are "Message Count", "Avg Message Length", "Avg Elapsed Time" and "Fault Count". (See Table 1 below for more detail on the Services_Inventory table fields.)

Based on the above understanding, for example, when the "AccountVerificationToBe" component invokes the "CustomerRetrieval" component once, there should be two records in the Services_Inventory table.

Table 1. Example records in Services_Inventory

From this example, we can see that:
- ITCAM for SOA will only record the request messages to the service provider and the response messages from the service provider, regardless of the service requester. In this example, the "AccountVerificationToBe" component is the service requester to the service provider, the "CustomerRetrieval" component.
Thus ITCAM for SOA creates one "requester" row to record the request messages to the service provider "CustomerRetrieval", and another row to record the response messages from the service provider "CustomerRetrieval".
- If you run the application many times, the number of records won't increase; ITCAM for SOA will only update the metric values of the records, such as "Message Count", "Avg Message Length", "Avg Elapsed Time" and "Fault Count".

Figure 2 displays the message count that ITCAM for SOA records when the example application runs once. The digit on each arrow shows how many messages ITCAM for SOA caught in one invocation.

Figure 2. Message count ITCAM for SOA records in a single run

Figure 2 raises six questions, Q1 to Q6, which need some more explanation to understand how ITCAM works.

- Q1: Why is there no message count caught for the "AccountVerificationToBe" component? Because the Data Collector of ITCAM for SOA only detects SCA and WebService messages. The "AccountVerificationToBe" component is the first interface for clients; if the transactions from outside to the "AccountVerificationToBe" component are not SCA or WebService messages, the Data Collector cannot detect them. In other words, if clients invoked the "AccountVerificationToBe" component through the SCA or WebService protocol, there would be a message count caught.

- Q2: Why does one invocation cause 2 request messages and 2 response messages? Shouldn't it be 1? At first glance, the message count of request and response should be 1, since only one invocation happened. However, the correct answer is 2. This is because of the implementation of the SCA invocation mechanism. The invocation from caller to callee is indirect: the caller sends one SCA request message to the SCA container, and then the SCA container forwards a new SCA message to the callee. In this case, the Data Collector will get two SCA messages, both of which have the same invocation source and target. That means ITCAM for SOA counts all SCA messages here. Figure 3 below shows the interactions in the SCA container. Now it's clear why the Data Collector got 2 request messages and 2 response messages. In fact, different SCA invocation patterns cause different message counts. Please refer to "SCA invocation pattern in WebSphere Process Server V6.1" to learn more about SCA invocation patterns.

Figure 3. Synchronous invocation (request-response operation)

- Q3: Why is there no message between Import and Export? The WDPE SCA container considers Export a special component; it is not treated as a real service provider. The messages to Export won't be detected by the SCA DataCollector, so the message count between Import and Export is 0. But if the message protocol is WebService, it's another story; see Q4.

- Q4: Why is there only one request and one response message? From Q3, the messages to Export cannot be detected by the SCA DataCollector. But besides the SCA DataCollector, ITCAM for SOA has another collector, the WebService DataCollector. Although the message cannot be detected by the SCA DataCollector, the WebService DataCollector detects the JAX-RPC message. From Q2, we know that because of the SCA invocation mechanism one invocation would normally produce 2 messages. Here we only have 1, which indicates that the message is not caught by the SCA DataCollector, but by the WebService DataCollector. The message does not pass through the SCA container, so only 1 message is caught.

- Q5: Why is the response message count three? This is because of the implementation of the JMS invocation mechanism.
On the JMS caller side there might be two queues, the "send" queue and the "receive" queue; but on the JMS callee side there might be three queues: the "receive", "send" and "callback" queues. So, if the application sets the JMS callee to use the "callback" queue, 3 messages will be generated on the callee side.

- Q6: Why are both components "ServiceProvider"? From Q1, we know that since the "AccountVerificationToBe" component is the first interface for the clients and the DataCollector cannot detect the non-WebService messages from clients, it acts as a "ServiceRequester". From Q3, we know that Export is not treated as a real service provider by the SCA container, so it acts as a "ServiceRequester". Neither "DetermineApplicantEligibility" nor "DetermineApplicantEligibilityHTTPImport" is in the situation of Q1 or Q3; they act as both "ServiceProvider" and "ServiceRequester". For example, "DetermineApplicantEligibility" is the "ServiceProvider" for "DetermineApplicantEligibility SCA export", and it is also the "ServiceRequester" for "DetermineApplicantEligibilityHTTPImport".

Services_Inventory table fields overview

The traffic data is stored in the Services_Inventory table. ITCAM for SOA uses the data in the Services_Inventory table to present the status of the system under monitoring. Table 2 below lists all the fields within the Services_Inventory table.

Note: The Services_Inventory table is commonly used within TEP, both for common WebService and SCA components. In the case of an SCA component, some of the field names are not compatible with the content. For example, an SCA component does not have a property named "Operation Namespace", so the field "Operation Namespace (Unicode)" actually stores the name of the method on the interface of the SCA component being invoked. All the incompatible fields are highlighted in bold font.

Note: The message length is not currently supported in the WPS SCA container. As shown in Figure 4 and Figure 5, all the SCA message sizes are 0 but the WebService message sizes are not.

Table 2. ITCAM for SOA metrics

View traffic data within TEP

Next we show how ITCAM for SOA operates and how it records the traffic data into the Services_Inventory table. ITCAM for SOA displays the traffic data within TEP graphically. The following figures show how ITCAM for SOA displays the traffic data.

Figure 4. Message summary workspace view in TEP
Figure 5. Services management workspace view in TEP
Figure 6. Performance summary workspace view in TEP

Resources
- Learn more about SCA invocation patterns in WebSphere Process Server V6.1 to see how ITCAM for SOA gets the message count.
- ITCAM for SOA 7.1.1 information center
- IBM Tivoli Monitoring installation guide
- Redbook: IBM Tivoli Composite Application Manager Family Installation, Configuration, and Basic Usage (SG24-7151-02), January 2008.
- Introduction to WebSphere Dynamic Process Edition
- WebSphere Dynamic Process Edition Information Center
- WebSphere Message Modeler Information Center
- WebSphere Process Server Information Center
- WebSphere Business Services Fabric Information Center
- WebSphere Business Monitor Information Center
http://www.ibm.com/developerworks/webservices/library/ws-ITCAMpart2/index.html?ca=drs-&ca=dgf-ip
CC-MAIN-2014-23
en
refinedweb
Hello

I found that a MIDlet will silently die on Nokia N90 and 6680 when trying to read the content of an HTTP "302 Found" server response. Here is a small code showing it :

Code:
    import javax.microedition.midlet.*;
    import javax.microedition.io.*;
    import java.io.*;
    import javax.microedition.lcdui.*;

    public class Test extends MIDlet {
        private static final String URL = "";

        protected void startApp() throws MIDletStateChangeException {
            Form f = new Form( "Test" );
            String response="";
            try {
                HttpConnection con = (HttpConnection)Connector.open( URL, Connector.READ );
                InputStream in = con.openInputStream();
                InputStreamReader reader = new InputStreamReader(in);
                char content[] = new char[1024];
                int len = reader.read( content, 0, 1024 );
                reader.close();
                in.close();
                response = String.valueOf(content,0,len);
            } catch( IOException e ) {}
            f.append( new StringItem("Reponse : ", response, StringItem.PLAIN) );
            Display.getDisplay(this).setCurrent(f);
        }

        protected void pauseApp() {}

        protected void destroyApp(boolean arg0) throws MIDletStateChangeException{}
    }

And here is the server-side PHP code to generate a HTTP 302 reply with content :

Code:
    <?php
    header('Location:' );
    echo 'plop';
    ?>

It was REALLY hard to find out why my much more complicated app kept on crashing...

Isn't that part of the HTTP standard to send a content together with a 302 reply ?
http://developer.nokia.com/community/discussion/showthread.php/77062-KVM-crashes-when-reading-content-with-HTTP-quot-302-Found-quot-response-on-N90-and-6680?p=224125&viewfull=1
CC-MAIN-2014-23
en
refinedweb
> * a built-in mechanism to include build-file fragments - something
>   that doesn't use SYSTEM entities at all and therefore is XSchema
>   friendly, allows for property expansions ...

+1

> * Let Ant ignore - but warn - if unknown XML elements or attributes
>   occur in a build file.

+0

I don't know enough about XML I think, but couldn't a namespace be used for this? (Maybe I should have been more active in the discussion (was I at all??))

Nico
http://mail-archives.apache.org/mod_mbox/ant-dev/200104.mbox/%3C01d201c0c9de$194be1e0$012a2a0a@seessle.de%3E
CC-MAIN-2014-23
en
refinedweb
CryptoStream Class

Defines a stream that links data streams to cryptographic transformations.

For a list of all members of this type, see CryptoStream Members.

System.Object
   System.MarshalByRefObject
      System.IO.Stream
         System.Security.Cryptography.CryptoStream

[Visual Basic]
Public Class CryptoStream
   Inherits Stream

[C#]
public class CryptoStream : Stream

[C++]
public __gc class CryptoStream : public Stream

[JScript]
public class CryptoStream extends Stream

Thread Safety

Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.

Remarks

Example

[Visual Basic, C#, C++] The following example demonstrates how to use a CryptoStream to encrypt a byte array. This method uses RijndaelManaged with the specified Key and initialization vector (IV) to encrypt a file specified by inName, and outputs the encrypted result to the file specified by outName. The rijnKey and rijnIV parameters to the method are 16-byte arrays. You must have the high encryption pack installed to run this sample.

[C++]
    Console::WriteLine(S"Encrypting...");

    //Read from the input file, then encrypt and write to the output file.
    while(rdlen < totlen) {
        len = fin->Read(bin, 0, 100);
        encStream->Write(bin, 0, len);
        rdlen = rdlen + len;
        Console::WriteLine(S"{0} bytes processed", __box(rdlen));
    }

    encStream->Close();
    fout->Close();
    fin->Close();
    }

CryptoStream Members | System.Security.Cryptography Namespace | Cryptographic Services
http://msdn.microsoft.com/en-US/library/system.security.cryptography.cryptostream(v=vs.71)
CC-MAIN-2014-23
en
refinedweb
Forum:Should VFS be abolished? From Uncyclopedia, the content-free encyclopedia I noticed on the VFS page this month that there's a sentiment that the system is more trouble than it's worth. It's led to numerous drama fests in the past (including the banning of PuppyOnTheRadio), and becoming an admin becomes an obsession for many. It leads to lobbying and votewhoring by people looking to secure an adminship. I'm normally for democratization, but in the case, perhaps we should have a meritocratic system similar to a typical discussion board. So, let's vote (irony): should VFS be abolished and replaced by something else entirely? If so, what? Saberwolf116 (talk) 19:59, July 3, 2012 (UTC) Vote! For. It seems to cause a lot more trouble than it's worth. Saberwolf116 (talk) 19:59, July 3, 2012 (UTC) Against. →A (Ruins) 20:23, 3 July 2012 Against. although reform would be nice. I've never thought it fair that admins had a greater say in VFS. And my campaigning to be admin a few years ago was a joke. --Hotadmin4u69 [TALK] 21:12 Jul 3 2012 Against. And I get two votes because I'm an admin (that was a joke, I actually think that aspect should be abolished). -- Sir Xam Ralco the Mediocre 21:19, July 3, 2012 (UTC) Against. Mattsnow 21:21, July 3, 2012 (UTC) - Against. There's no real debate about a consensus based system or the idea of having a VFS... just one that keeps the nepotism and lobbying to a minimum.--Sycamore (Talk) 22:05, July 3, 2012 (UTC) Against. While I agree VFS is a um... bad system to see the least, the idea of a RfA system would never work on Uncyclopedia because unlike wikipedia which receives close to 50 edits per minute you could have too many administrators. So unless it could be modified into some way that could make it work here I guess we'll have to make do with VFS. (Although totally impractical if we could somehow make a secret ballet, say by using some third party where all votes are submitted to that could gain absolutely nothing from altering the votes would be the ultimate solution to all vote whoring forever and ever amen) ~Sir Frosty (Talk to me!) 23:21, July 3, 2012 (UTC) - I've never really noticed a problem with "lobbying" unless you count, like, stating your reasons for voting for someone as lobbying. Which is a pretty big stretch. Anyway, no need to make things more complicated than they're worth. A little improvement at maybe reducing drama somehow would be nice, but that's definitely more our fault, not the system's. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 23:47, July 3, 2012 (UTC) - I don't think anybody wants to "abolish" VFS. The general sentiment is that the VFS system needs improvement. Perhaps we should be proposing improvements instead of voting about whether or not to scrap the whole system. -- Brigadier General Sir Zombiebaron 00:15, July 4, 2012 (UTC) Nah. —Sir Socky (talk) (stalk) GUN SotM UotM PMotM UotY PotM WotM 18:16, 4 July 2012 - Comment I don't think there is anything wrong with the process. I think that there has been issues with people, myself included. However we change process, we still have people involved, so I don't see how this will impact on that. Having said that, I'm happy to go with a democratic vote as to what system we have, but would suggest simplification is key. -- • Puppy's talk page • 02:47 07 Jul 02:47, July 7, 2012 (UTC) Against. I personally think there is nothing wrong with the current system and you have all convinced yourselves that drama will happen, when it only happens when you will it to happen. 
--MasterWangs CUNT and proud of it! 08:24, July 15, 2012 (UTC) Suggestions for replacement of VFS WP:RFA - it's simpler and if we chuck all their silly rules, it might even work here. ~ 01:34, 4 July 2012 Replace VFS with RFA (sans the silly rules)? For. Why not. Saberwolf116 (talk) 02:04, July 4, 2012 (UTC) - No. Not overhauling anything this drastically until a new standard is actually fully drafted out. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 02:38, July 4, 2012 (UTC) For.--fcukman LOOS3R! 02:50, July 4, 2012 (UTC) For. |Si Plebius Dato' (Sir) Joe ang Pinoy CUN|IC Kill | 10:24, July 4, 2012 (UTC) Per TKF ~Sir Frosty (Talk to me!) 02:54, July 4, 2012 (UTC) Against. Mattsnow 03:15, July 4, 2012 (UTC) Against. →A (Ruins) 15:29, 4 July 2012 Against. -- • Puppy's talk page • 12:40 11 Jul 00:40, July 11, 2012 (UTC) Could you all stop arbitrarily voting on everything? And, like, figure out what, specifically, needs fixing and then try to come up with things that might fix that instead? I suggest carefully discussing the matter in detail. ~ 07:29, 4 July 2012 Vote to stop voting on everything? - For. ~Sir Frosty (Talk to me!) 07:38, July 4, 2012 (UTC) - For. →A (Ruins) 15:29, 4 July 2012 - Forever and ever amen -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 16:29, July 4, 2012 (UTC) For. —Sir Socky (talk) (stalk) GUN SotM UotM PMotM UotY PotM WotM 18:17, 4 July 2012 - UltaMegaFor+20 -OptyC Sucks! CUN00:19, 11 Jul Against. -- • Puppy's talk page • 12:42 11 Jul 00:42, July 11, 2012 (UTC) Improvements (add any ideas) - Admins get 1 vote like everyone else. -- Sir Xam Ralco the Mediocre 00:17, July 4, 2012 (UTC) - I agree, if someone could explain to me why they are given x2 I'd stop being confused. ~Sir Frosty (Talk to me!) 00:26, July 4, 2012 (UTC) - Limit the amount a user can comment as they make their vote. Like limit them to two sentences worth of comment, if its any longer its ether really negative and probably not helpful or someone seriously has a boner for the candidate. ~Sir Frosty (Talk to me!) 00:26, July 4, 2012 (UTC) - How is that an improvement? Limiting the length of comments limits the ability of folks to explain themselves, which is particularly problematic with a system that actively discourages voters from examining candidates' contributions and the like - on the rare occasion someone does take the time to look, anything they find can be particularly pertinent to all involved in the discussion. ~ 01:32, 4 July 2012 - The long comments tend to lead to arguments and such which decreases efficiency and has also (although it may not be the intention of whoever wrote it) give the user it refers to the impression they are being bullied and picked on, which was the case with POTR and the comment you left you left last time. A 14 line critique of what they are doing wrong can instead of offering simple advice on how to improve themselves and make them a suitable candidate instead can leave the user feeling unwanted and unappreciated which leads to retaliation and poor decision making, admittedly your intensions may have been all well and good but a long list of reasons why it was a bad idea didn't help matters. But limited comment length is just my personal opinion as long winded comments don't help matters at all in my experience. ~Sir Frosty (Talk to me!) 
02:06, July 4, 2012 (UTC) - I also think there's nothing wrong with explaining a vote, but when a person uses a vote as a platform for pettiness and bullying beyond the necessary boundaries, then again, that's on them and not the system. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 02:35, July 4, 2012 (UTC) - I don't think that was the intention but I'm fairly sure thats the way POTR took it as. As quoted from the forum he stated "I have now gotten to the point where I feel that I cannot contribute effectively here while she still remains as an admin, as her abuse of position and harassment and bullying of members based upon whim and favouritism is destroying the wiki." The VFS drama certainly was part of that. ~Sir Frosty (Talk to me!) 02:57, July 4, 2012 (UTC) - UR MOM IS SOOOO GHEY!!!!!! --fcukman LOOS3R! 03:13, July 4, 2012 (UTC) - The intention was to point out why he shouldn't be opped. I think it succeeded, although things did got a little out of hand. ~ 07:31, 4 July 2012 - The getting out of hand part is why I don't like long comments, because not everyone can just ignore it/take it on board. ~Sir Frosty (Talk to me!) 07:42, July 4, 2012 (UTC) - Yet the reason why POTR is no longer with us and why you are still here, despite Romartus's comment, is because you were able to take the criticisms (no matter how irrelevant) in stride, while he took them way too personally. Perhaps Lyrithya was being too harsh too, but again this is not the system's fault. If people are going to soapbox about other people, they will certainly find outlets other than VFS if that avenue has been deprived, too. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 15:10, July 4, 2012 (UTC) I had some ideas I had the idea of a simpler method here. While most seem to like having a lot of user involvement, it's not really led to outcomes that are agreeable. This isn't like a final draft - but it might offer some direction.--Sycamore (Talk) 10:03, July 5, 2012 (UTC) - I don't like the idea of user nominations being so discreet. Couldn't we instead allow users to "apply for adminship" like in RfA, and then allow the subsequent screening to take place? --Scofield & 1337 11:48, July 5, 2012 (UTC) - I like the idea of a secret nomination, but I would prefer a system where more than two admins do the voting. Maybe an instant runoff ballot on a private Google doc accessible, but anonymous to, all admins decides the top 3 candidates. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 14:35, July 9, 2012 (UTC) - This idea is something I knocked together just to maybe change direction a bit - ZB's idea that I've voted for below seem to strike a balance of everyones take, while I have an idea for VFS based on personal preference. I appreciate the concerns of Scofield and I like TKF's idea. My main reason to keep things so 'discreet' is that despite the bulk of good contributers, we simply cannot depend on some people here to be sensible and as one admin stated to me a while back, most of the powers that be and community at large are often 'too scared' of standing up to certain drama creating types (who have very surprisingly resurfaced with "Their Opinions").--Sycamore (Talk) 18:11, July 11, 2012 (UTC) - That's an odd opinion. The only thing that I'm aware of where users have chosen not to voice their perspective is when they have been told that doing so will earn them a ban. Having a look at almost every forum we have ever had, users who have opinions or perspectives have felt free to express them. 
-- • Puppy's talk page • 10:26 11 Jul 22:26, July 11, 2012 (UTC) - Well there are opinions and there's some very neurotic characters who'll mansplain to the entire community about 'their needs'. I think the community should be involved and have the opportunity to express their views, though the impression that individuals can aggressively lobby, whore or cajole the community onto their demands is unacceptable. 'Have your say, but like fuck off a bit' should be the new motto.--Sycamore (Talk) 08:52, July 12, 2012 (UTC) Drama We've had a nice three or four month period relatively drama free. It's been nice...hasn't it? Quiet. A nice PLS. A gentle roll-over of featured articles. Time floating by. Nice jokes bouncing around talk pages. Pleasant love filling up the whole wiki. It's lovely isn't it? That's why I've decided to name myself the new anti-wiki-terrorism agent. My new powers are to kidnap anyone who engages in wiki-terrorism, bring them to an abandoned parking lot, waterboard them until they admit to what they did, then dump them on the moon, naked, with a chocolate bar and a litre of petrol. I don't have any clue that wiki-terrorism is. It is so ill defined I could make anything seem like wiki-terrorism. For instance...Mattsnow forgot to capitalise a word. That is blatant wiki-terrorism...but I'll let him off this time because his fragile mind couldn't cope with my interrogation techniques. I love you all...and so I don't want to have to use the patriot act against any of you. But I will if it means preserving the freedom of freedom. --ShabiDOO 03:01, July 4, 2012 (UTC) - I just named myself as Osama Bin Laden and Hitler. My single goal in life is to spread hatred and destruction :3 →A (Ruins) 15:33, 4 July 2012 New system, Mr-ex777 selects new ops based on how "ghey" their mom is - You know I'm right ~Sir Frosty (Talk to me!) 03:41, July 4, 2012 (UTC) - For. I have never been in more support of anything in my entire life. -RAHB 03:43, July 4, 2012 (UTC) HELL FUCKING YES →A (Ruins) 15:34, 4 July 2012 - Antifor I'd lose. My mom isn't "ghey" : ( Cat the Colourful (Feed me!) Zzz 18:00, 4 July, 2012 (UTC) For. —Sir Socky (talk) (stalk) GUN SotM UotM PMotM UotY PotM WotM 18:18, 4 July 2012 - My mother is more gheyer than all of your mothers time a nillion! --ShabiDOO 22:38, July 4, 2012 (UTC) Just do what I suggested last year For. --Hotadmin4u69 [TALK] 02:52 Jul 5 2012 Switch who can vote for round 3 & 4 - My proposal is simple, currently users get the vote in round 3 and then the sysops take that result to vote in round 4 and eventually picks the new sysops. I propose we make round 3 (still with nominees selected by anybody) the sysop only round, anyone who is able to get x number of votes may proceed to round 4 where everyone gets 1 or 2 votes (depending on how many we want for that month). Now some of you ask, whats the point? We still have an admin only round and a user round. Will think of it this way. Currently its probably the most universally liked people that get past round 3 and into round 4 where its the best candidates that make it through. Essentially we get the most competent of the most popular elected, which isn't necessarily the best result. Reversing it will only allow the users that have the confidence of our administrators past round 3 and into round 4, where with a much shorter list of candidates users may vote more rationally and not just for their favorite user. ~Sir Frosty (Talk to me!) 
10:49, July 5, 2012 (UTC) - To be frank the problem is not favoritism, it's characters such as yourself constantly bringing up the issue or whining about how the community has turned against because you were not the 'Sysop candidate you felt you were'. The issue it not so much the voting of sysops, it's devising a VFS where we don't have this behavior or detrimental activities. These are just horrible to be around, see reputations such as Puppy's destroyed and take us collectively away from comedy. --Sycamore (Talk) 11:16, July 5, 2012 (UTC) - BAHAHAHAHAHAHAHA I was pissed because being an ED admin seems a valid reason to vote someone done and the community has in part turned against me because of ED and proved by all the bullshit I received on irc and over email. Also I wanted new sysops simply because doing nothing but reverting vandals that are allowed to get to 50+ edits because nobody is awake to ban them and it fucking sucked and I'd had enough. But ok, I just won't except noms in VFS for a very long time now, tired of all the bullshit related to ED, also tired of being labeled as "a whiny little bitch who just wanted things to run more smoothly." ~Sir Frosty (Talk to me!) 23:02, July 5, 2012 (UTC) - Actually I suspect your attitude in general might raise issues and misgivings about your involvements elsewhere (and their reputations/issues) . More often than not, keeping calm and building some kind of consensus is better than trying to bludgeon your points across... This is a maturity thing, and as I suspect, a life experience thing. That's not something that comes with extra vigilance towards vandals. Again the impetus is on you, not on the community to resolve this need for admin status:)--Sycamore (Talk) 23:56, July 5, 2012 (UTC) For. →A (Ruins) 15:07, 5 July 2012 New hierarchical structure I saw the so-called fanboys at Bulbapedia (one of my all-time favorite wikis), and they seem to be organizing their admins into three tiers of adminship. Here's basically how it works. Right above the administrators, but lower in rank than the bureaucrats, are the Senior Administrators, who "can add and remove users from the abuse usergroup, view the checkuser log, delete pages with large histories, and more." These guys are just a bit more experienced than your average admin. (Further information: Bulbapedia:Senior Administrators.) Just below adminship is the Junior Administrator rank. (I've documented a proposal for this rank here.) Junior administrators can rollback, edit protected pages, view deleted pages, and show and hide individual edits to a page. I've added the extra privilege that they can archive old VFD and QVFD nominations. (Further information: Bulbapedia:Junior Administrators.) Let's discuss. -- Sir CuteLatiasOnTheRadio [CUN • PBJ'12 • PLS(0)] 16:22, July 5, 2012 (UTC) - Sounds interesting. I'm in! Cat the Colourful (Feed me!) Zzz 16:25, 5 July, 2012 (UTC) - Hm... what's it like to be j-- oomph! :3 -- Sir CuteLatiasOnTheRadio [CUN • PBJ'12 • PLS(0)] 16:26, July 5, 2012 (UTC) - I mean, ideally, about 5 of our sysops would become senior admins and about 7 of our regular users who contribute a lot would become junior admins. -- Sir CuteLatiasOnTheRadio [CUN • PBJ'12 • PLS(0)] 16:30, July 5, 2012 (UTC) - I'm not kissing anyone's ass but if Zombiebaron, Lyrithya, ChiefjusticeDS, Romartus and someone else would be senior admins. And Shabidoo would be a great junior admin. Cat the Colourful (Feed me!) Zzz 16:37, 5 July, 2012 (UTC) - This proposition is technically impossible. 
We do not have local CheckUser, separate admin usergroups, an abuse usergroup, or the ability to hide individual edits. And I doubt that Wikia is going to turn any of these features of for use (especially local CheckUser). -- Brigadier General Sir Zombiebaron 17:02, July 5, 2012 (UTC) - Or we could do what RationalWiki does and make anyone an op if they contribute regularly. --Hotadmin4u69 [TALK] 18:40 Jul 5 2012 For. →A (Ruins) 19:28, 5 July 2012 Nah. They might not have good judgement. -- Sir CuteLatiasOnTheRadio [CUN • PBJ'12 • PLS(0)] 18:34, July 6, 2012 (UTC) - Qzekrom Well, they wouldn't be automatically op'd....we could see their contribs and decide if they deserve it. →A (Ruins) 18:40, 6 July 2012 - Not feasible, plain and simple, for the reasons Zombiebaron stated. The difference in the toolbox is extremely marginal, plus it's just another way to make bureaucracy and rank seem more important on this comedy site where no one's supposed to give a sliver of a fuck about such retarded, menial things. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 01:30, July 7, 2012 (UTC) Shabidoo should never be nominated as any kind of admin While I appreciate the idea (though I'm not entirely sure if you guys are joking or not), I don't want to be an admin. There's no point as I wouldn't use the tools. However, you can vote for and feature my articles more often. And love me. I wish you guys would love me more. --ShabiDOO 23:11, July 5, 2012 (UTC) Zombiebaron's idea Ok, so here's my idea. This is the VFS model that was used at the time that I was made an admin. - The admins privately decide that more admins are required to keep the site running smoothly. This is because we the admins are the ones who are on the forefront of adminning the site. We collectively know how many pages get deleted and how many bans are given every day. We know when the site is running smoothly and when it is not. - The admins hold a two week nomination process, on a sysop-protected forum page, in MiniLuv. Anyone can nominate anyone. Both for and against votes are encouraged. - All registered users get to vote on the top scoring nominees from the previous round for the following 2 weeks, on a semi-protected Village Dump page. No against votes. No comments. - Top scoring users become admins. The number of admins is determined based on the voting totals and general community consensus. I think two votes for everyone in both of the voting rounds makes sense. -- Brigadier General Sir Zombiebaron 02:10, July 7, 2012 (UTC) - So, in this case, instead of everyone nominating anyone, only admins can nominate? Then, instead of Admins-only getting to vote in the final round, everyone does? It just switches when Users can vote from the first two rounds to the final round? Why was this switched away from in the first place? The Woodburninator Minimal Effort ™02:15, July 7, 2012 (UTC) - I have no idea why it was switched in the first place. The other important change is the first step. -- Brigadier General Sir Zombiebaron 02:18, July 7, 2012 (UTC) - A compromise for this I think. ~Sir Frosty (Talk to me!) 02:34, July 7, 2012 (UTC) - The switch occurred far, far earlier. I think MadMax and Strange but Untrue were the first admins to be elected by the system we currently have, and me and Mordillo rode the wave in shortly after. 
-- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 07:51, July 7, 2012 (UTC) - While Zombie may have some kind of sort of point about the admins being on the "forefront" of how many pages are whatever and how many users are whatever, there isn't any explanation as to why it should be "private". Since when were things done out of sight? Without transparency and a clear dialogue and communication available? Its disturbing that things have come to this, where the admins will get some kind of "carte blanche" to do as they please and not even have to account for it. --ShabiDOO 17:32, July 7, 2012 (UTC) - The admins already do many important things in private. Mostly arguing. But we do also have civil policy discussion and what-not. There are some things that we simply cannot discuss on the wiki. One recent example of this that I can think of is when I banned PotR. Several admins approached me privately on IRC to discuss that event. The fact that the admins will be deciding when we need new admins doesn't mean that we will no longer be listening to the views of the community. I mean, if somebody were to hold a vote where a vast majority of the registered users were calling for new admins, and the admins just ignored it, that would be stupid. --Brigadier General Sir Zombiebaron 17:46, July 7, 2012 (UTC) - Thats even more disturbing that there are important conversations going on that we don't even know about. I hope they don't involve policy or major decisions.--ShabiDOO 20:24, July 7, 2012 (UTC) - Privacy also works pragmatically against interloping absentee admins returning to the community with little idea of what it's been like for the past few months having an immediate say in policy/important votes. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 21:07, July 7, 2012 (UTC) - What kind of decisions are you guys making behind the scenes?--ShabiDOO 22:19, July 7, 2012 (UTC) - Beats me. I didn't even know we had a "scene." -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 22:20, July 7, 2012 (UTC) - And why precisely should "deciding if the wiki needs new admins" be held behind closed doors? Why can you not have that discussion with each other, at the VERY LEAST on the wiki, blocked so only admins can edit it? We should be able to see the positions of each admin, their reasons for and against and what the debate entails. Users should not be kept out of the dark about this. Theres no reason to. --ShabiDOO 12:51, July 8, 2012 (UTC) - It does not mean the admins won't entertain the whims of the community, it just means we can discuss it without fear of inciting drama, personal insult or implausibly high expectations of what will happen next. --Black Flamingo 13:13, July 8, 2012 (UTC) - But thats just putting bad faith on the community. Just because there was one particularily exagerated VFS drama, doesn't mean users should loose the right to know why a decision has been made. Just because there is the possibility of a problem doesnt mean you should hold your sessions behind closed doors. VFD has the potential to induce drama...does that mean the admins should now deal with VFD...and even more so behind closed doors? Besides, the big drama fest came from the "selection" of the admins and not wether to have admins. This solution just disenfranchises the community even more. Instead of having one step of the vote involving admins only making the decision, now, one of the steps is admins making the decision AND doing so with absolute secrecy. 
--ShabiDOO 18:35, July 8, 2012 (UTC) - For -RAHB 02:32, July 7, 2012 (UTC) - Spiritual for If change is required, then this is a good proposal. -- • Puppy's talk page • 02:51 07 Jul 02:51, July 7, 2012 (UTC) - I like it, aside from an issue already discussed with Zombiebaron. This makes me kind of want to come back and do stuff on uncyc again. Kind of.--OliOmniOmbudsman 03:48, July 7, 2012 (UTC) - The "issue" was whether or not against votes would be allowed during the second round of voting. I have changed the proposal to clearly state that against votes will not be allowed during the second round. -- Brigadier General Sir Zombiebaron 03:53, July 7, 2012 (UTC) - Does that principle also extend to the first round? (Just for clarification.) -- • Puppy's talk page • 06:48 07 Jul 06:48, July 7, 2012 (UTC) - Apparently not. I would also add in something protecting against "non-against comments" or comments in general, too, since some people love doing those so much, too. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 06:53, July 7, 2012 (UTC) - Seconded. ~ BB ~ (T) ~ Sat, Jul 7 '12 7:14 (UTC) - I'd allow for the provision of comments on the talk page. Votes for where votes go, comments elsewhere. (And I realise the irony of me saying this when I'm not actually voting here.) -- • Puppy's talk page • 08:51 07 Jul 08:51, July 7, 2012 (UTC) - Ok I haved added "No comments" to the proposal. -- Brigadier General Sir Zombiebaron 16:20, July 7, 2012 (UTC) For. Saberwolf116 (talk) 04:14, July 7, 2012 (UTC) - Also let's do this. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 06:54, July 7, 2012 (UTC) For. And if it doesn't work... We can argue some more. ~Sir Frosty (Talk to me!) 07:02, July 7, 2012 (UTC) - No we can't! --Black Flamingo 23:35, July 8, 2012 (UTC) - Shut up! The Woodburninator Minimal Effort ™ 23:40, July 8, 2012 (UTC) For. Especially the "No comments" part. ~ BB ~ (T) ~ Sat, Jul 7 '12 7:14 (UTC) - NO Admins should not be quietly tiptoeing around making decisions for users in a non open and non-trasparent way. Im surprised this is even being suggested, and that users would consider letting admins make decisions on our behalf behind closed doors. --ShabiDOO 14:05, July 7, 2012 (UTC) WTF? I'm not sure about this. Per Shabidoo, official Uncyclopedia processes need to be transparent. Here's my proposal: anyone should be able to request that an admin nominate them during the admin-only nominations (or else open up nominations to all autoconfirmed users). On an ideal wiki, it would take an expert to distinguish a regular user from an admin or a junior admin and so forth, but it's different here because we're kind of an odd community that actually has vanity pages about admins being dirty turds. --QZEKЯOM Proud sponsor of Team Zombiebaron Tw$*ty Tw%#ve G*me$ FTW! Let's go for the g^@d! 22:40, July 7, 2012 (UTC) - For. --Black Flamingo 11:09, July 8, 2012 (UTC) Upvote. – Sir Skullthumper, MD (criticize • writings • SU&W) 18:11 Jul 08, 2012 Against. A major step backwards. My biggest issue is that it still leaves VFSes up to admins. Why should the decision for new admins be left to the group of individuals whose absence necessitates VFSes in the first place? Shouldn't those who need admins and are irked when they're not around to help decide when it's time for some more of them? --Hotadmin4u69 [TALK] 18:24 Jul 8 2012 - Yeah. This seems to strike the balance.--Sycamore (Talk) 20:05, July 8, 2012 (UTC) - Against, especially that commenting bit. 
Regular users can have as useful of things to say as admins, even if they usually don't bother; there is always potential for misuse with any system, but that most votes here have a comment attached is also evidence of the potential for good, and well-worded supportive comments can still make a world of difference for a user faced with a general lack of other support. I also just don't like the only op if it's needed model; just because more admins aren't specifically needed doesn't mean that, provided capable candidates, it would hurt to lessen the workload for the current admins to help prevent burnout and give then more time to do other things, like write articles. Because people still do that here, from what I understand. Write, I mean. Don't they? -Lyrithya 12:01, July 12, 2012 (UTC) The Burninator's Idea (Joe The Burninator/Kyurem+Woodburninator) Wait, I think we can have a very democratic application here. How about we get the general public and the admins to vote if we need to have any more admins every two months? Then if the voting is passed upon agreeing that we need more admins, you can nominate someone or yourself to become an admin. The top five candidates who received most votes will afterwards proceed to the third part: discussion with the sysops on why they want to become an admin. After the interview, the sysops proceed to vote, with bureaucrats' votes worth three (Zombiebaron et al.), and only sysops can vote. So the process will be like this: - First week: General public decides if we need more admins. Needs for and against votes. This will take place every two months instead of monthly. - Second week: If we have enough votes for wanting more sysops, then we proceed to an election. You can nominate yourself or another person. Expect two-week voting process. For votes are only allowed, as the public will use a numbered system. The process will be explained: - To prevent rigging, the proportional voting system must be used. - The general public and sysops number their preferences for admin candidates, 1 being in most favor of being an admin, and the last number (7, 8, 9, 10, etc.) being in least favor. For example, if there are seven candidates, people number them from 1 to 7. Qzekrom will create the voting process. - The ones with the most votes are selected. The top five candidates with the most votes proceed to the next round. The rest are eliminated. - Fourth week: The top five candidates (the ones with the most for votes) are interviewed by sysops and bureaucrats. Expect a two-week process. - Sixth week: Sysops proceed to vote in a Hunger Games-style elimination round. Bureaucrats' vote is worth three votes. Again, two-week process. - Eighth week: The new sysops are announced Any questions? |Si Plebius Dato' (Sir) Joe ang Pinoy CUN|IC Kill | 08:54, July 7, 2012 (UTC) - Too long, too complicated and Zombiebaron's idea is simpler and works better. ~Sir Frosty (Talk to me!) 09:00, July 7, 2012 (UTC) - Agree with Frosty. Fails on the K.I.S.S. principle. (Keep it simple, fuckwad) -- • Puppy's talk page • 10:59 07 Jul 10:59, July 7, 2012 (UTC) - Addendum: The point of that that I agree with and could be used if we keep a system similar to the current is the 2 month rule. It takes users around 3 months (from what I've seen) to get to grips with the admin role. Two months is enough time though to see if they are coming to terms with it. It also reduces the "Yes/No" arguments to only 6 a year. 
3 months would probably be a better term, as I don't recall seeing a VFS go to voting within 3 months of the previous. (I could be wrong on that though.) -- • Puppy's talk page • 11:04 07 Jul 11:04, July 7, 2012 (UTC) - Typically since we changed the rules to include users in round 1 its been around every 6 months. ~Sir Frosty (Talk to me!) 11:18, July 7, 2012 (UTC) - Is there a reason my name is associated with this? Or am I just being vain again? I do that sometimes! ME! ME! ME! The Woodburninator Minimal Effort ™ 21:27, July 7, 2012 (UTC) - It's because Joe likes to call himself "Joe the Burninator". —Sir Socky (talk) (stalk) GUN SotM UotM PMotM UotY PotM WotM 21:53, 7 July 2012 - JoeNumbers has a better ring to it. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 22:09, July 7, 2012 (UTC) - I like Joey Numbers. Sounds like a racketeer in the Mafia. Is that cool with you, Joey Numbers? The Woodburninator Minimal Effort ™ 22:14, July 7, 2012 (UTC) - No, call me Kyurem. And how the hell did you get unbanned, POTR? |Si Plebius Dato' (Sir) Joe ang Pinoy CUN|IC Kill | 05:48, July 8, 2012 (UTC) - I've been unbanned? Wow! -- • Puppy's talk page • 12:49 11 Jul 00:49, July 11, 2012 (UTC) - Sure thing, Joey Numbers. The Woodburninator Minimal Effort ™ 06:12, July 8, 2012 (UTC) - Joey Numbers: Don't Lose My Number or I'll Do a Number on Your Face! |Si Plebius Dato' (Sir) Joe ang Pinoy CUN|IC Kill | 08:27, July 9, 2012 (UTC) Voting process You asked me for it, here you go. Feel free to revise. I think rather than ranking all admin candidates, voters pick their top three choices to prevent ties, and so that no one gets offended finding that Joey Numbers ranked them sixth rather than fifth. (And it saves brainpower for writing articles.) (Though admins can pick more than three.) During the voting process, candidates are free to do campaigning as long as they aren't misleading. Then, the votes get tallied. Each vote gets put in as follows: - 1nd place: 50 points - 2st place: 38 points - 3th place: 25 points - 4rd place: 13 points The top five candidates are then nominated for the second round, and the next two become poopsmiths. --QZEKЯOM Proud sponsor of Team Zombiebaron Tw$*ty Tw%#ve G*me$ FTW! Let's go for the g^@d! 22:56, July 7, 2012 (UTC) - The process is still overlong (two whole weeks for interview when a single day will suffice) and unnecessarily complicated. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 23:01, July 7, 2012 (UTC) - Then we should fill up the rest of the time with conventions and debates between candidates (made on public Dump forums where a bureaucrat is assigned to moderate the forum). --QZEKЯOM Proud sponsor of Team Zombiebaron Tw$*ty Tw%#ve G*me$ FTW! Let's go for the g^@d! 23:12, July 7, 2012 (UTC) - And hey, do we really need two weeks for the elimination round? --QZEKЯOM Proud sponsor of Team Zombiebaron Tw$*ty Tw%#ve G*me$ FTW! Let's go for the g^@d! 23:17, July 7, 2012 (UTC) - There are a lot of words on this forum. Should I be paying attention? -- RomArtus*Imperator ® (Orate) 23:21, July 7, 2012 (UTC) - Not really. If anything noteworthy comes from this collection of words, it'll be reflected on {{VFSrules}}. So as far as paying attention is concerned, just putting that template on your watchlist will suffice. —Sir Socky (talk) (stalk) GUN SotM UotM PMotM UotY PotM WotM 00:07, 8 July 2012 - I think we need two new admins every two months. This should keep the numbers small. Also, I think we need a maximum of five candidates, and top three choices to prevent ties. 
|Si Plebius Dato' (Sir) Joe ang Pinoy CUN|IC Kill | 05:33, July 8, 2012 (UTC) - This is similar to the instant-runoff voting system that Spang proposed last year. I am all for it and the revised rules that The Burninator has proposed. --Hotadmin4u69 [TALK] 18:27 Jul 8 2012 - Instant runoff voting is better than that voting system we currently have. An if we have a two-week long voting process, we could give time for those unable to vote on one week, to vote the second week. And I'll change the numbering system (Qzekrom, I modified it because four is better than three) |Si Plebius Dato' (Sir) Joe ang Pinoy CUN|IC Kill | 08:01, July 9, 2012 (UTC) Quick note on the instant runoff suggestion and the two month rule The two month rule is a rule in which two new admins are picked every two months, while the rest will choose becoming a poopsmith or a rollback as a consolation prize. In the fourth and fifth weeks, there will be debates and conversations between candidates. All of the current goings-on regarding the debate and conversations for the VFS will be promoted on the UnSignpost and the VFS section of UnNews. The final three weeks will be a trial run on how the new admins are going. As a final note regarding campaigning for VFS, advertising campaigns will be placed on the board (where the PLS announcements and so on are placed), and must be approved by the Advertising Ombudsman (a bureaucrat). |Si Plebius Dato' (Sir) Joe ang Pinoy CUN|IC Kill | 08:27, July 9, 2012 (UTC) - Your proposal there has several flaws in it: - We do not need 2 new ops every 2 months, currently we have been getting 1 - 2 every 6 months and some see this as too many. You do the math 2 every 2 months = 12 a year and with our current user-base size that is far too much. - Poopsmith is a ridiculous "consolation prize" we have 3 active ones and 3 is plenty we archive 3 voting pages on the whole site and administrators also do this task as well, granting an otherwise meaningless title is a) a poor prize b) Pointless, we have enough and don't need anymore (I say this as a poopsmith as I can typically archive the 3 pages in a the space of a few minutes, without any help) - If you want rollback, be like a normal person and just ask Zombiebaron or Thekillerfroggy giving it as a second prize is a dumb idea because typically (under your proposal) anyone who comes second is going to already be a rollbacker (I base this off the fact we haven't had a non-rollbacker promoted to admin since 2008.) - Five weeks? As TKF pointed out everything you proposed can easily be done in half that time - Potential opping candidates before last VFS already did banter, argue and carry on. It's called IRC we argue and debate a lot there and guess where it typically will take us? No where - You are proposing a system that involves debates and voting almost like a presidential election, and yet we can't even manage a simple vote what on earth makes you think this will work? - UnNews is a namespace for humor and making fun of IRL events and being funny, it is not for carrying on about VFS which nobody in the real world cares about - ~Sir Frosty (Talk to me!) 08:40, July 9, 2012 (UTC) NEW PROPOSAL I'm so fucking tired of forum debates. VFS sucks. It will always suck. Nothing you do will ever improve it to the point where less than half of the user base can find some problem with it. We have mob rule here, and mob rule produces three things: tyrants, angry people, and angry dead people. If this were a country, I might give a shit. 
But it isn't a country, it's a comedy website. So I have a NEW FUCKING PROPOSAL (read closely, chimps, because it's about to get complicated): NO OPS EVER AGAIN, WITH THREE EXCEPTIONS: - If you give Zombiebaron sexual favors, instant oppage. Boom. - If you give Thekillerfroggy sexual favors, instant oppage. Boom. - If you post again in this forum, you die. Oh, wait, did I say there were three exceptions? Consider this one "oppage in Hell". Who's with me!?!? Boom. I hereby apologize for starting the last three forums I have been responsible for, and I request that y'all just fuck off to your respective corners, and start spewing comedy like comedy was shit and you fuckers just drank ten gallons of Log-Out™. ~ BB ~ (T) ~ Mon, Jul 9 '12 14:12 (UTC) - If this had been the rules from the beginning, I would have been opped years ago! The Woodburninator Minimal Effort ™ 14:41, July 9, 2012 (UTC) - Strong For. More sexual favors from Woody please. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 14:44, July 9, 2012 (UTC) - What if I give Zombie and TKF sexual favours at the same time. What then? --ShabiDOO 17:19, July 9, 2012 (UTC) Hell, no! What happens when all the current sysops have retired? --QZEKЯOM Proud sponsor of Team Zombiebaron Tw$*ty Tw%#ve G*me$ FTW! Let's go for the g^@d! 01:58, July 10, 2012 (UTC) Strong boner ~Sir Frosty (Talk to me!) 02:32, July 10, 2012 (UTC) Euroiphones New proposal: Whatever the voting rules are, let's create a new rank in the (Most Excellent) Order of Uncyclopedia and award that to VFS runner-ups, poopsmiths and rollbackers. We still don't have any use for Her Majesty's Flying Rat's Ass Medal, do we? --QZEKЯOM Proud sponsor of Team Zombiebaron Tw$*ty Tw%#ve G*me$ FTW! Let's go for the g^@d! 02:16, July 10, 2012 (UTC) - Thats hardly a proposal, this forum is for fixing the problem not adding pointless consolations to it. ~Sir Frosty (Talk to me!) 02:30, July 10, 2012 (UTC) Alternate suggestion Week one. The entire community equally and openly decides if a new admin is needed. Week two. The entire community nominates nominees. Week three and four. The entire community equally and openly selects the admin. Its simple, clear, equitable, transparant, participatory and uncomplicated. I personally like it. --ShabiDOO 02:59, July 10, 2012 (UTC) Ooh... that feels good... Forgive me, I was just thinking about the new system... Support. --QZEKЯOM Proud sponsor of Team Zombiebaron Tw$*ty Tw%#ve G*me$ FTW! Let's go for the g^@d! 03:13, July 10, 2012 (UTC) Kinda. ~ BB ~ (T) ~ Tue, Jul 10 '12 4:19 (UTC) Strong for And no word limits. Scofield & 1337 10:04, July 11, 2012 (UTC) For. Ticks the right boxes in my mind. -- • Puppy's talk page • 12:30 11 Jul 12:30, July 11, 2012 (UTC) For. --Hotadmin4u69 [TALK] 13:13 Jul 11 2012 - This Shabidoo guy is a communist fag. --ShabiDOO 19:08, July 11, 2012 (UTC) - If you included "all first- and second-week comments are limited to 20 words or less" and "votes are limited to 'For' or 'Against'", I'd vote "yes". ~ BB ~ (T) ~ Tue, Jul 10 '12 3:10 (UTC) - Less complicated would be: "don't be a dick during this process". --ShabiDOO 03:18, July 10, 2012 (UTC) - Yes, but certain pompous blowhards who shall remain nameless here would just say "I am not being a dick. I am simply offering up a 10,000-character comment (which I have, incidentally, chosen to label a 'non-vote') because that's how we do things on Wikipedia." So I think "no vote comments" and "keep all other comments short" needs to be carved in stone. 
~ BB ~ (T) ~ Tue, Jul 10 '12 3:30 (UTC) - In an open, equal and transparent process, you cant tell users what to say, how to say it and when to say it. No matter what you do, nasty users will find a way to be dirty. Brush away unconstructive criticism to the side and move on. Just dont be dicks during the process and keep things equitable, fair and uncomplicated and a good admin will be selected. --ShabiDOO 03:51, July 10, 2012 (UTC) - You can't tell them what to say, but limiting how much they can say will certainly make them choose their words more carefully, and make it less headache-inducing to read. ~ BB ~ (T) ~ Tue, Jul 10 '12 4:19 (UTC) - We should have a televised debate for all of the candidates and we should get the cookie monster to be the moderator. His first question would be: 'peanut butter cookies vs. oatmeal bars'.--ShabiDOO 04:24, July 10, 2012 (UTC) - Which is better: "No. I do not think he would make a good admin. Perhaps in the future, though," or a paragraph which wastes 500 words to say EXACTLY the same thing? ~ BB ~ (T) ~ Tue, Jul 10 '12 4:44 (UTC) - I myself am very guilty of writing very long blocks of text. Its actually embarassing to read them. My three largest blocks of text were all written to the same user...who reacted in different ways to the different texts, 1. told me they didnt even bother to read it. 2. banned me. 3. tried to work things out. I still laugh my ass off when I read the longest one...and I laugh even harder when I read Aleisters commentary on it. It was very funny. --ShabiDOO 05:19, July 10, 2012 (UTC) - To avoid controversy, let's not allow explicit non-votes. If you're not going to vote for someone, just don't fucking vote! --QZEKЯOM Proud sponsor of Team Zombiebaron Tw$*ty Tw%#ve G*me$ FTW! Let's go for the g^@d! 21:29, July 10, 2012 (UTC) - I'm sorry that Lyrithya was nasty and dirty to Frosty and Puppy, but thats no reason to shut up users who have constructive comments. Zombiebarons proposal will make the process secretive and limiting to avoid "controversey" ... which is a very exagerated response to one bad VFS experience. Simple is better. --ShabiDOO 00:36, July 11, 2012 (UTC) - Let's keep it simple and see if Puppy has any ideas for how to change it. --QZEKЯOM Proud sponsor of Team Zombiebaron Tw$*ty Tw%#ve G*me$ FTW! Let's go for the g^@d! 00:40, July 11, 2012 (UTC) - In a Wikipedian manner, let's reserve the voting area for votes, and allow the comments in a separate section underneath, or preferably, on a talk page. That means that when someone wants to non-vote, they can do that, and write war and frigging peace if they like, but at least a vote is counted as a vote, and commentary is counted as commentary. In much the ame way this particular vote here has been framed. Clear, simple, concise, and equitable. -- • Puppy's talk page • 12:26 11 Jul 12:26, July 11, 2012 (UTC) The collegiate system At the present we have members of the community broken into various time zones. I propose we tabulate how many members we have in a particular time zone, and give that timezone a number of "collegiate" votes accordingly. Then the voting is done broken down into timezones. The majority of votes in a particular timezone means that all of those collegiate votes are put toward that candidate. That seems the only logical method to me. Oh, and terms cannot last longer than 2 years, and an admin cannot have more than 2 consecutive terms. 
And the admin candidates can be chosen by parties representing the retention of existing articles, or "conservatives", and the other party can represent the growth of new articles, or "liberals". And an individual cannot become an admin via merit alone, but by the country of their birth. That way we become the most powerful wiki on the web. GOD BLESS UNCYCLOPEDIA! -- • Puppy's talk page • 12:59 11 Jul 00:59, July 11, 2012 (UTC) - This is a misrepresentation of conservatives and liberals! Conservatives want to conserve the supreme position of featured articles while leaving every other article to starve to deletion for all they care. Liberals seek to divert the focus away from the featured articles, towards the often ignored majority of non-featured articles. —Sir Socky (talk) (stalk) GUN SotM UotM PMotM UotY PotM WotM 16:52, 11 July 2012 - It's impractical to de-op admins just because their terms expired. And what if someone doesn't want to reveal their time zone? --QZEKЯOM Proud sponsor of Team Zombiebaron Tw$*ty Tw%#ve G*me$ FTW! Let's go for the g^@d! 22:17, July 11, 2012 (UTC) - We can get time zone information from the Order of Uncyclopedia. --QZEKЯOM Proud sponsor of Team Zombiebaron Tw$*ty Tw%#ve G*me$ FTW! Let's go for the g^@d! 22:23, July 11, 2012 (UTC) - I think all of the current and active admins are good ones (though we all have our moments) and I cant see any reason to de-op or rotate or give terms for any of them as they'll probably always be useful and good admins (though we all have our moments). --ShabiDOO 23:14, July 11, 2012 (UTC) This forum ...is too long and confusing. In fact, it's longer than my penis even though it's erected now. Can someone please tell me what the hell is currently being discussed here? --QZEKЯOM Proud sponsor of Team Zombiebaron Tw$*ty Tw%#ve G*me$ FTW! Let's go for the g^@d! 22:52, July 11, 2012 (UTC) - When a man and a woman love each other very much... -- • Puppy's talk page • 01:19 12 Jul 01:19, July 12, 2012 (UTC) I say that we just go the Bulbapedian way. Where's my boner tag? I want an extensive erection like Cilan had! |Si Plebius Dato' (Sir) Joe ang Pinoy CUN|IC Kill | 06:33, July 12, 2012 (UTC) - Also, Boner. |Si Plebius Dato' (Sir) Joe ang Pinoy CUN|IC Kill | 06:42, July 12, 2012 (UTC) on a side note Sandwiches are fucking awesome. ~Sir Frosty (Talk to me!) 09:18, July 12, 2012 (UTC) - You're too young to be engaging in group sex. -- • Puppy's talk page • 09:24 12 Jul 09:24, July 12, 2012 (UTC) MasterWangs' great idea with a 100% chance of success!!! All of you stop bitching and write comedy -- Do you know why this system doesn't work? Because you all have it set it out in your head and have convinced yourselves that something will go wrong. In my humble opinion it's more an attitude problem more than an actual problem with the system. If you all could try and be more positive about it you might find it works. So my advice is as follows: - Leave the current system - Adopt a more positive mind set about it - Watch the results when it starts running more smoothly - Go back to writing comedy I personally think it's a mind thing, does anyone agree? --MasterWangs CUNT and proud of it! 08:20, July 15, 2012 (UTC) - From the mouth of madness. I MEAN, "babes". ~ BB ~ (T) ~ Sun, Jul 15 '12 8:33 (UTC) Fuck the system If we can't get admin rights, then we'll be doing it the hard way. We just need a new system. |Si Plebius Dato' (Sir) Joe ang Pinoy CUN|IC Kill | 11:18, July 16, 2012 (UTC) - And uh, that means what exactly? 
~Sir Frosty (Talk to me!) 11:19, July 16, 2012 (UTC) --fcukman LOOS3R! 11:26, July 16, 2012 (UTC)
http://uncyclopedia.wikia.com/wiki/Forum:Should_VFS_be_abolished%3F?t=20120716112643
CC-MAIN-2014-23
en
refinedweb
Difference between revisions of "PHP Security Cheat Sheet" Revision as of 11:16, 25 August 2012 DRAFT CHEAT SHEET - WORK IN PROGRESS. It is of utmost important that you upgrade your PHP to 5.3.x or 5.4.x right now. Also keep in mind that you should regularly upgrade your PHP distribution on an operational server. Every day new flaws are discovered and announced in PHP and attackers use these new flaws on random servers frequently. $x='1'; SELECT * FROM users WHERE ID >'$x'; Use UTF-8 unless necessary Many new attack vectors rely on encoding bypassing. Use UTF-8 as your database and application charset unless you have a mandatory requirement to use another encoding. $DB = new mysqli($Host, $Username, $Password, $DatabaseName); if (mysqli_connect_errno()) trigger_error("Unable to connect to MySQLi database."); $DB->set_charset('UTF-8'); Escaping is not safe mysql_real_escape_string is not safe. Don't rely on it for your SQL injection prevention. Why: When you use mysql_real_escape_string on every variable and then concat it to your query, you are bound to forget that at least once, and one is all it takes. You can't force yourself in any way to never forget. Number fields might also be vulnerable if not used as strings. Instead use prepared statements or equivalent. Use Prepared Statements Prepared statements are very secure. In a prepared statement, data is separated from the SQL command, so that everything user inputs is considered data and put into the table the way it was. MySQLi Prepared Statements Wrapper The following function, performs a SQL query, returns its results as a 2D array (if query was SELECT) and does all that with prepared statements using MySQLi fast MySQL interface: $DB = new mysqli($Host, $Username, $Password, $DatabaseName); if (mysqli_connect_errno()) trigger_error("Unable to connect to MySQLi database."); $DB->set_charset('UTF-8'); function SQL($Query) { global $DB; $args = func_get_args(); if (count($args) == 1) { $result = $DB->query($Query); if ($result->num_rows) { $out = array(); while (null != ($r = $result->fetch_array(MYSQLI_ASSOC))) $out [] = $r; return $out; } return null; } else { if (!$stmt = $DB->prepare($Query)) trigger_error("Unable to prepare statement: {$Query}, reason: " . $DB->error . ""); array_shift($args); //remove $Query from args //the following three lines are the only way to copy an array values in PHP $a = array(); foreach ($args as $k => &$v) $a[$k] = &$v; $types = str_repeat("s", count($args)); //all params are strings, works well on MySQL and SQLite array_unshift($a, $types); call_user_func_array(array($stmt, 'bind_param'), $a); $stmt->execute(); //fetching all results in a 2D array $metadata = $stmt->result_metadata(); $out = array(); $fields = array(); if (!$metadata) return null; $length = 0; while (null != ($field = mysqli_fetch_field($metadata))) { $fields [] = &$out [$field->name]; $length+=$field->length; } call_user_func_array(array( $stmt, "bind_result" ), $fields); $output = array(); $count = 0; while ($stmt->fetch()) { foreach ($out as $k => $v) $output [$count] [$k] = $v; $count++; } $stmt->free_result(); return ($count == 0) ? null : $output; } } Now you could do your every query like the example below: $res=SQL("SELECT * FROM users WHERE ID>? ORDER BY ? ASC LIMIT ?" , 5 , "Username" , 2); Every instance of ? is bound with an argument of the list, not replaced with it. MySQL 5.5+ supports ? as ORDER BY and LIMIT clause specifiers. If you're using a database that doesn't support them, see next section. 
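Before moving on, here is a minimal sketch of the same idea without the wrapper, calling the mysqli API directly on the $DB connection created above. The table and column names (users, ID, Username) are only illustrative.
$stmt = $DB->prepare("SELECT ID, Username FROM users WHERE ID > ? LIMIT ?");
if (!$stmt)
    trigger_error("Unable to prepare statement: " . $DB->error);
$minId = 5;
$limit = 10;
$stmt->bind_param("ii", $minId, $limit); // data is bound, never concatenated into the SQL text
$stmt->execute();
$stmt->bind_result($id, $username);
while ($stmt->fetch())
    echo htmlspecialchars($username, ENT_QUOTES, 'UTF-8') . "\n"; // escape on output, see the XSS section below
$stmt->close();
The wrapper above does the same binding for you; the point is simply that the SQL text never contains user data.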
REMEMBER: When you use this approach, you should NEVER concat strings for a SQL query. PDO Prepared Statement Wrapper The following function, does the same thing as the above function but using PDO. You can use it with every PDO supported driver. try { $DB = new PDO("{$Driver}:dbname={$DatabaseName};host={$Host};", $Username, $Password); } catch (Exception $e) { trigger_error("PDO connection error: " . $e->getMessage()); } function SQL($Query) { global $DB; $args = func_get_args(); if (count($args) == 1) { $result = $DB->query($Query); if ($result->rowCount()) { return $result->fetchAll(PDO::FETCH_ASSOC); } return null; } else { if (!$stmt = $DB->prepare($Query)) { $Error = $DB->errorInfo(); trigger_error("Unable to prepare statement: {$Query}, reason: {$Error[2]}"); } array_shift($args); //remove $Query from args $i = 0; foreach ($args as &$v) $stmt->bindValue(++$i, $v); $stmt->execute(); return $stmt->fetchAll(PDO::FETCH_ASSOC); } } $res=SQL("SELECT * FROM users WHERE ID>? ORDER BY ? ASC LIMIT 5" , 5 , "Username" ); Where prepared statements do not work The problem is, when you need to build dynamic queries, or need to set variables not supported as a prepared variable, or your database engine does not support prepared statements. For example, PDO MySQL does not support ? as LIMIT specifier. In these cases, you need to do two things: Not Supported Fields When some field does not support binding (like LIMIT clause in PDO), you need to whitelist the data you're about to use. LIMIT always requires an integer, so cast the variable to an integer. ORDER BY needs a field name, so whitelist it with field names: function whitelist($Needle,$Haystack) { if (!in_array($Needle,$Haystack)) return reset($Haystack); //first element return $Needle; } $Limit = $_GET['lim']; $Limit = $Limit * 1; //type cast, integers are safe $Order = $_GET['sort']; $Order=whitelist($Order,Array("ID","Username","Password")); This is very important. If you think you're tired and you rather blacklist than whitelist, you're bound to fail. Dynamic Queries Now this is a highly delicate situation. Whenever hackers fail to injection SQL in your common application scenarios, they go for Advanced Search features or similars, because those features rely on dynamic queries and dynamic queries are almost always insecurely implemented. When you're building a dynamic query, the only way is whitelisting. Whitelist every field name, every boolean operator (it should be OR or AND, nothing else) and after building your query, use prepared statements: $Query="SELECT * FROM table WHERE "; foreach ($_GET['fields'] as $g) $Query.=whitelist($g,Array("list","of","possible","fields","here"))."=?"; $Values=$_GET['values']; array_unshift($Query); //add to the beginning $res=call_user_func_array(SQL, $Values); ORM ORMs are good security practice. If you're using an ORM (like Doctrine) in your PHP project, you're mostly prone to SQL attacks. Although injecting queries in ORM's is much harder, keep in mind that concatenating ORM queries makes for the same flaws that concatenating SQL queries, so NEVER concatenate strings sent to a database. ORM's support prepared statements as well. Other Injection Cheat Sheet SQL aside, there are a few more injections possible and common in PHP: Shell Injection A few PHP functions namely - shell_exec - exec - passthru - system - backtick operator ( ` ) run a string as shell scripts and commands. Input provided to these functions (specially backtick operator that is not like a function). 
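If one of these functions really must receive user-influenced data, the only defensible control is the same whitelisting idea shown earlier, applied to the whole argument. A minimal sketch, assuming the user may only pick one of a few known log files (the file names and path are made up):
function whitelisted_log($name)
{
    $allowed = array("access.log", "error.log", "debug.log");
    if (!in_array($name, $allowed, true))
        trigger_error("Rejected shell argument", E_USER_ERROR);
    return $name;
}
$file = whitelisted_log($_GET['file']); // tainted input reduced to one of the known values
$lines = shell_exec('wc -l /var/log/myapp/' . $file); // the path prefix is hard-coded, not user data
echo htmlspecialchars($lines, ENT_QUOTES, 'UTF-8');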
Depending on your configuration, shell script injection can cause your application settings and configuration to leak, or your whole server to be hijacked. This is a very dangerous injection and is somehow considered the haven of an attacker. Never pass tainted input to these functions - that is input somehow manipulated by the user - unless you're absolutely sure there's no way for it to be dangerous (which you never are without whitelisting). Escaping and any other countermeasures are ineffective, there are plenty of vectors for bypassing each and every one of them; don't believe what novice developers tell you. Code Injection All interpreted languages such as PHP, have some function that accepts a string and runs that in that language. It is usually named Eval. PHP also has Eval. Using Eval is a very bad practice, not just for security. If you're absolutely sure you have no other way but eval, use it without any tainted input. Reflection also could have code injection flaws. Refer to the appropriate reflection documentations, since it is an advanced topic. Other Injections LDAP, XPath and any other third party application that runs a string, is vulnerable to injection. Always keep in mind that some strings are not data, but commands and thus should be secure before passing to third party libraries. XSS Cheat Sheet There are two scenarios when it comes to XSS, each one to be mitigated accordingly: No Tags Most of the time, output needs no HTML tags. For example when you're about to dump a textbox value, or output user data in a cell. In this scenarios, you can mitigate XSS by simply using the function below. Keep in mind that this scenario won't mitigate XSS when you use user input in dangerous elements (style, script, image's src, a, etc.), but mostly you don't. Also keep in mind that every output that is not intended to contain HTML tags should be sent to the browser filtered with the following function. //xss mitigation functions function xssafe($data,$encoding='UTF-8') { return htmlspecialchars($data,ENT_QUOTES | ENT_HTML401,$encoding); } function xecho($data) { echo xssafe($data); } //usage example <input type='text' name='test' value='<?php xecho ("' onclick='alert(1)"); ?>' /> Yes Tags When you need tags in your output, such as rich blog comments, forum posts, blog posts and etc., you have to use a Secure Encoding library. This is usually hard and slow, and that's why most applications have XSS vulnerabilities in them. OWASP ESAPI has a bunch of codecs for encoding different sections of data. There's also OWASP AntiSammy and HTMLPurifier for PHP. Each of these require lots of configuration and learning to perform well, but you need them when you want that good of an application. Other Tips - We don't have a trusted section in any web application. Many developers tend to leave admin areas out of XSS mitigation, but most intruders are interested in admin cookies and XSS. Every output should be cleared by the functions provided above, if it has a variable in it. Remove every instance of echo, print, and printf from your application and replace them with the above statement when you see a variable is included, no harm comes with that. - HTTP-Only cookies are a very good practice, for a near future when every browser is compatible. Start using them now. (See PHP.ini configuration for best practice) - The function declared above, only works for valid HTML syntax. If you put your Element Attributes without quotation, you're doomed. Go for valid HTML. 
- Reflected XSS is as dangerous as normal XSS, and usually comes at the most dusty corners of an application. Seek it and mitigate it. CSRF Cheat Sheet CSRF mitigation is easy in theory, but hard to implement correctly. First, a few tips about CSRF: - Every request that does something noteworthy, should be CSRF mitigated. Noteworthy things are changes to the system, and reads that take a long time. - CSRF mostly happens on GET, but is easy to happen on POST. Don't ever think that post is secure. The OWASP PHP CSRFGuard is a code snippet that shows how to mitigate CSRF. Only copy pasting it is not enough. In the near future, a copy-pasteable version would be available (hopefully). For now, mix that with the following tips: - Use re-authentication for critical operations (change password, recovery email, etc.) - If you're not sure whether your operation is CSRF proof, consider adding CAPTCHAs (however CAPTCHAs are inconvenience for users) - If you're performing operations based on other parts of a request (neither GET nor POST) e.g Cookies or HTTP Headers, you might need to add CSRF tokens there as well. - AJAX powered forms need to re-create their CSRF tokens. Use the function provided above (in code snippet) for that and never rely on Javascript. - CSRF on GET or Cookies will lead to inconvenience, consider your design and architecture for best practices. Authentication and Session Management Cheat Sheet PHP doesn't ship with a readily available authentication module, you need to implement your own or use a PHP framework, unfortunately most PHP frameworks are far from perfect in this manner, due to the fact that they are developed by open source developer community rather than security experts. A few instructive and useful tips are listed below: Session Management PHP's default session facilites are considered safe, the generated PHPSessionID is random enough, but the storage is not necessarily safe: - Session files are stored in temp (/tmp) folder and are world writable unless suPHP installed, so any LFI or other leak might end-up manipulating them. - Sessions are stored in files in default configuration, which is terribly slow for highly visited websites. You can store them on a memory folder (if UNIX). - You can implement your own session mechanism, without ever relying on PHP for it. If you did that, store session data in a database. You could use all, some or none of the PHP functionality for session handling if you go with that. Session Hijacking Prevention It is good practice to bind sessions to IP addresses, that would prevent most session hijacking scenarios (but not all), however some users might use anonymity tools (such as TOR) and they would have problems with your service. To implement this, simply store the client IP in the session first time it is created, and enforce it to be the same afterwards. The code snippet below returns client IP address: $IP = (getenv ( "HTTP_X_FORWARDED_FOR" )) ? getenv ( "HTTP_X_FORWARDED_FOR" ) : getenv ( "REMOTE_ADDR" ); Keep in mind that in local environments, a valid IP is not returned, and usually the string :::1 or :::127 might pop up, thus adapt your IP checking logic. Invalidate Session ID You should invalidate (unset cookie, unset session storage, remove traces) of a session whenever a violation occurs (e.g 2 IP addresses are observed). A log event would prove useful. Many applications also notify the logged in user (e.g GMail). 
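A minimal sketch tying the two points above together - binding the session to the first client IP seen and invalidating it on a mismatch. It reuses the IP lookup from the snippet above; adapt it for local addresses as noted.
session_start();
$currentIP = (getenv("HTTP_X_FORWARDED_FOR")) ? getenv("HTTP_X_FORWARDED_FOR") : getenv("REMOTE_ADDR");
if (!isset($_SESSION['client_ip'])) {
    $_SESSION['client_ip'] = $currentIP; // first request of this session
} elseif ($_SESSION['client_ip'] !== $currentIP) {
    // possible hijack: log it and remove every trace of the session
    error_log("Session IP mismatch for session " . session_id());
    $_SESSION = array();
    setcookie(session_name(), "", 1);
    session_destroy();
    exit("Your session has been terminated.");
}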
Rolling of Session ID You should roll session ID whenever elevation occurs, e.g when a user logs in, the session ID of the session should be changed, since it's importance is changed. Exposed Session ID Session IDs are considered confidential, your application should not expose them anywhere (specially when bound to a logged in user). Try not to use URLs as session ID medium. Transfer session ID over TLS whenever session holds confidential information, otherwise a passive attacker would be able to perform session hijacking. Session Fixation Session IDs are to be generated by your application only. Never create a session only because you receive the session ID from the client, the only source of creating a session should be a secure random generator. Session Expiration A session should expire after a certain amount of inactivity, and after a certain time of activity as well. The expiration process means invalidating and removing a session, and creating a new one when another request is met. Also keep the log out button close, and unset all traces of the session on log out. Inactivity Timeout Expire a session if current request is X seconds later than the last request. For this you should update session data with time of the request each time a request is made. The common practice time is 30 minutes, but highly depends on application criteria. This expiration helps when a user is logged in on a publicly accessible machine, but forgets to log out. It also helps with session hijacking. General Timeout Expire a session if current session has been active for a certain amount of time, even if active. This helps keeping track of things. The amount differs but something between a day and a week is usually good. To implement this you need to store start time of a session. Cookies Handling cookies in a PHP script has some tricks to it: Never Serialize Never serialize data stored in a cookie. It can easily be manipulated, resulting in adding variables to your scope. Proper Deletion To delete a cookie safely, use the following snippet: setcookie ($name, "", 1); setcookie ($name, false); unset($_COOKIE[$name]); The first line ensures that cookie expires in browser, the second line is the standard way of removing a cookie (thus you can't store false in a cookie). The third line removes the cookie from your script. Many guides tell developers to use time() - 3600 for expiry, but it might not work if browser time is not correct. You can also use session_name() to retrieve the name default PHP session cookie. HTTP Only Most modern browsers support HTTP-only cookies. These cookies are only accessible via HTTP(s) requests and not Javascript, so XSS snippets can not access them. They are very good practice, but are not satisfactory since there are many flaws discovered in major browsers that lead to exposure of HTTP only cookies to javascript. To use HTTP-only cookies in PHP (5.2+), you should perform session cookie setting manually (not using session_start): #prototype bool setcookie ( string $name [, string $value [, int $expire = 0 [, string $path [, string $domain [, bool $secure = false [, bool $httponly = false ]]]]]] ) #usage if (!setcookie("MySessionID", $secureRandomSessionID, $generalTimeout, $applicationRootURLwithoutHost, NULL, NULL,true)) echo ("could not set HTTP-only cookie"); The path parameter sets the path which cookie is valid for, e.g if you have your website at example.com/some/folder the path should be /some/folder or other applications residing at example.com could also see your cookie. 
If you're on a whole domain, don't mind it. Domain parameter enforces the domain, if you're accessible on multiple domains or IPs ignore this, otherwise set it accordingly. If secure parameter is set, cookie can only be transmitted over HTTPS. See the example below: $r=setcookie("SECSESSID","1203j01j0s1209jw0s21jxd01h029y779g724jahsa9opk123973",time()+60*60*24*7 /*a week*/,"/","owasp.org",true,true); if (!$r) die("Could not set session cookie."); Internet Explorer issues Many version of Internet Explorer tend to have problems with cookies. Mostly setting Expire time to 0 fixes their issues. Authentication Many websites are vulnerable on remember me features. The correct practice is to generate a one-time token for a user and store it in the cookie. The token should also reside in data store of the application to be validated and assigned to user. This token should have no relevance to username and/or password of the user, a secure long-enough random number is a good practice. It is better if you imply locking and prevent brute-force on remember me tokens, and make them long enough, otherwise an attacker could brute-force remember me tokens until he gets access to a logged in user without credentials. - Never store username/password or any relevant information in the cookie. file upload handling file_uploads = Off upload_tmp_dir = /path/PHP-uploads/ upload_max_filesize = 1M # NOTE: more or less useless as first handled by the web server max_file_uploads = 2 PHP General Guidelines for Secure Web Applications PHP Version Use PHP 5.3.8. Stable versions are always safer then the beta ones. Framework Use a framework like Zend or Symfony. Try not to re-write the code again and again. Also avoid dead codes. Directory Code with most of your code outside of the webroot. This is automatic for Symfony and Zend. Stick to these frameworks. Hashing Extension Not every PHP installation has a working mhash extension, so if you need to do hashing, check it before using it. Otherwise you can't do SHA-256 Cryptographic Extension Not every PHP installation has a working mcrypt extension, and without it you can't do AES. Do check if you need it. Authentication and Authorization There is no authentication or authorization classes in native PHP. Use ZF or Symfony instead. Input nput validation Use $_dirty['foo'] = $_GET['foo'] and then $foo = validate_foo($dirty['foo']); Use PDO or ORM Use PDO with prepared statements or an ORM like Doctrine Use PHP Unit and Jenkins When developing PHP code, make sure you develop with PHP Unit and Jenkins - see for more details. Use Stefan Esser's Hardened PHP Patch Consider using Stefan Esser's Hardened PHP patch - (not maintained now, but the concepts are very powerful) Avoid Global Variables In terms of secure coding with PHP, do not use globals unless absolutely necessary Check your php.ini to ensure register_globals is off Do not run at all with this setting enabled It's extremely dangerous (register_globals has been disabled since 5.0 / 2006, but .... most PHP 4 code needs it, so many hosters have it turned on) Protection against RFI Ensure allow_url_fopen and allow_url_include are both disabled to protect against RFI But don't cause issues by using the pattern include $user_supplied_data or require "base" + $user_supplied_data - it's just unsafe as you can input /etc/passwd and PHP will try to include it Regexes (!) Watch for executable regexes (!) Session Rotation Session rotation is very easy - just after authentication, plonk in session_regenerate_id() and you're done. 
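Putting the session rotation guideline together with the expiration advice from the session management section, a login handler could look roughly like the sketch below. checkCredentials() is a hypothetical stand-in for your own password check, and the 30-minute / one-day limits are just the example values mentioned earlier.
session_start();
$now = time();
// inactivity timeout (30 minutes) and general timeout (one day)
if ((isset($_SESSION['last_activity']) && $now - $_SESSION['last_activity'] > 1800) ||
    (isset($_SESSION['created']) && $now - $_SESSION['created'] > 86400)) {
    $_SESSION = array();
    session_destroy();
    session_start(); // continue with a fresh, empty session
}
$_SESSION['last_activity'] = $now;
if (!isset($_SESSION['created']))
    $_SESSION['created'] = $now;
// session rotation: regenerate the ID before elevating the session
if (isset($_POST['username']) && checkCredentials($_POST['username'], $_POST['password'])) {
    session_regenerate_id(true); // true discards the old session storage
    $_SESSION['user'] = $_POST['username'];
}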
Be aware of PHP filters PHP filters can be tricky and complex. Be extra-conscious when using them. Logging Set display_errors to 0, and set up logging to go to a file you control, or at least syslog. This is the most commonly neglected area of PHP configuration Output encoding Output encoding is entirely up to you. Just do it, ESAPI for PHP is ready for this job. These are transparent to you and you need to know about them. php://input: takes input from the console gzip: takes compressed input and might bypass input validation Abbas Naderi Afooshteh - abbas.naderi@owasp.org (owasp@abiusx.com) Achim - Achim at owasp.org Legacy Author: Andrew van der Stock --Abbas Naderi 00:34, 10 July 2012 (UT
https://www.owasp.org/index.php?title=PHP_Security_Cheat_Sheet&diff=134759&oldid=118675
CC-MAIN-2014-23
en
refinedweb
Introduction This series has concentrated on new features in PHP V5.3, such as namespaces, closures, object handling, object-oriented programming, and Phar. While these flashy new features are a welcome addition to the language, PHP V5.3 was also designed to further streamline PHP. It builds upon the popular and stable PHP V5.2 and enhances the language to make it more powerful. In this article, learn about changes and considerations when upgrading from PHP V5.2. Syntax changes Additions to the language, with namespaces and closures (discussed in Part 2 and Part 3), have added more reserved words. Starting in PHP V5.3, namespace can no longer be used as an identifier. The closure class is now a reserved class, but it is still a valid identifier. Listing 1 shows examples of statements that no longer work in PHP V5.3 because of the additional reserved words. Listing 1. Invalid PHP statements // the function definition below will throw a fatal error in PHP 5.3, but is perfectly // valid in 5.2 function namespace() { .... } // same with this class definition class Closure { .... } Support for the goto statement was also added in PHP V5.3. Now goto is a reserved word. goto statements are not common in modern languages (you might remember using them in BASIC), but there are occasionally use cases where they are handy. Listing 2 has an example of how they work. Listing 2. goto statements in PHP echo "This text will get outputted"; goto a; echo "This text will get skipped"; a: echo "This text will get outputted"; One possible use case for gotos is for breaking out of deeply nested loops and if statements. This will make the code much clearer to read. Changes to functions and methods Though there are no major changes to functions and methods in PHP V5.3, there are a few enhancements to help with outstanding issues in PHP and to improve performance. This section discusses a few of the more notable changes. In previous versions of PHP, the array functions atsort, natcasesort, usort, uasort, uksort, array_flip, and array_unique let you pass objects instead of arrays as parameters. The functions then treat the properties of the objects as the array keys and values. This is no longer available in PHP V5.3, so you need to cast the objects to arrays first. Listing 3 shows how to change your code. Listing 3. Changing code to cast objects to arrays for certain functions $obj = new stdClass; $obj->a = '1'; $obj->b = '2'; $obj->c = '3'; print_r(array_flip($obj)); // will NOT work in PHP 5.3, but will in PHP 5.2 print_r(array_flip((array) $obj)); // will work in PHP 5.3 and 5.2 The magic class methods are now much more strictly enforced. The following methods must have public visibility: __get __set __isset __unset __call You can use the new __callStatic() magic method in cases where __call was used in a static context as a workaround for this change. The required arguments for these methods are enforced and must be present with the exception of the __isString() magic method, which accepts no parameters. Listing 4 shows how to use these methods and the required parameters for them. Listing 4. Using the magic methods class Foo { public function __get($key) {} // must be public and have one parameter public function __set($key,$val) {} // must be public and have two parameters public function __toString() {} must be public and have no parameters } Several functions that previously were not supported on PHP with Windows are now supported in PHP V5.3. 
For example, the getopt() function is designed to parse the options for calling a PHP script from the command line. inet_ntop() and inet_pton(), the functions for encoding and decoding Internet addresses, now work under Windows® as well. There are several math functions, such as asinh(), acosh(), atanh(), log1p(), and expm1(), which now have Windows support. Extension changes The PHP Extension C Library (PECL), has been the breeding ground for new extensions in PHP. Once an extension is mature and stable and is viewed as a useful function for part of the core distribution, it is often added during major version changes. In this spirit, starting in PHP V5.3, the following extensions are part of the core PHP distribution. - FileInfo - Provides functions that help detect the content type and encoding of a file by looking at certain magic byte character sequences in the file. - intl - A wrapper for the International Components for Unicode (ICU) library, providing functions for unicode and globalization support. - Phar - A PHP archiving tool discussed in Part 4. - mysqlnd - A native PHP driver for MySQL database access that's a replacement for the earlier MySQL and MySQLi extension which leveraged the libmysql library. - SQLite3 - A library for using SQLite V3 databases. When an extension is no longer actively maintained, or is deemed unworthy of distribution with the core PHP distribution, it is often moved to PECL. As part of the shuffling in PHP V5.3, the following extensions have been removed from the core PHP distribution and are maintained as part of PECL. - ncurses - An emulation of curses, which is used to display graphical output on the command line. - fpdf - Handles building and using forms and form data within PDF documents. - dbase - Provides support for reading and writing dbase compatible files. - fbsql - Supports database access for Frontbase database servers. - ming - An open source library that allows you to create Flash 4 animations. The Sybase extension has been removed entirely and is superseded by the sybase_ct extension. The sybase_ct extension is fully compatible with the former and should be a drop-in replacement. The newer function will use the Sybase client libraries you need to install on your Web server. Build changes With the strong focus on refining the build process in PHP V5.3, it's easier to build PHP on all platforms. To maintain consistency between PHP builds and to provide a guaranteed set of components in PHP, the PCRE, Reflection, and SPL extensions can no longer be disabled in the build. You can build distributable PHP applications that use these extensions and are guaranteed that they will be available for use. A new team took over the PHP Windows build in the last year. Starting in PHP V5.3, the team will provide several improvements for users on Windows. The new builds will target the 586 architecture (Intel® Pentium® or later) and will require Windows 2000/XP or later, removing support for Windows 98/NT and earlier. PHP builds built with Microsoft® Visual Studio® 2008 and builds targeting the x86-64 architecture will be built. They offer improved performance when working with FastCGI on the Microsoft IIS Web server or with Apache built with the same compiler and architecture. The Windows installer is also being improved to better configure PHP with the Microsoft IIS Web server. The team launched a Web site specific to PHP on Windows (see Resources). .ini changes An important feature of PHP is that its behavior can be configured using an .ini file. 
In PHP V5.3, several problematic directives for this file have been removed, such as the zend.ze1_compatibility_mode setting. You now have tremendously improved flexibility when using this file. There are two major improvements to the php.ini file: - You can have variables within the php.ini file. This is very handy for removing redundancies within the file, and it's easier to update the file if changes are needed. Listing 5 shows an example. Listing 5. Variables in php.ini filefoo and newfoo have the same value. foo = bar [section] newfoo = ${foo} - You can make per-directory and per-site PHP ini settings, similar to making those same settings with the Apache configuration files. The advantage here is that the syntax becomes consistent across all of the various SAPIs PHP can run under. Listing 6 shows how this works. Listing 6. Per-site and per-directory .ini settings [PATH=/var/www/site1] ; directives here only apply to PHP files in the /var/www/site1 directory [HOST=] ; directives here only apply to PHP files requested from the site. You can also have these .ini directives created in user-specified .ini files, located in the file system itself, in the same way that .htaccess files work under the Apache HTTP Web server. The default name for this file is specified by the user_ini.filename directive. The feature can be disabled by setting this directive to an empty value. Any per-site and per-directory directives cannot be overridden in a user-specified .ini file. Deprecated items PHP V5.3 starts officially deprecating older functions that will no longer be available in future versions of PHP. When you use these functions, an E_DEPRECATED error will be emitted. The following functions are deprecated for PHP V5.3: - Ticks ( declare(ticks=N)and register_tick_function()), which were designed to have a function call for every n statements executed by the parser within the declare()block. They're being removed because of numerous breaks in their function and because the feature isn't used very often. define_syslog_variables(), which initializes all syslog related variables. This function isn't required because the constants it defines are already defined globally. Simply removing this function call should be all that is necessary. - The eregregular-expression functions. It's recommended that you use the PCRE regular-expression functions instead, since they are much faster and more consistent with regular expressions used in other languages and applications. Support for the eregfunctions is being removed so PHP can standardize with one regular-expression engine. It is recommended that you migrate away from these features with PHP V5.3. Future major PHP releases will drop support for the above items. Summary PHP V5.3 has numerous new features and has "cleaned up" several items. There are some backward-compatibility issues. This article provides some guidance for migrating your Web application to work with PHP V5.3. For the latest details regarding PHP V5.3 see the PHP wiki, which has notes on any other changes that might affect your applications. Resources Learn - Learn more about closures from Wikipedia. - PHP For Windows is dedicated to supporting PHP on Microsoft Windows. It also supports ports of PHP extensions or features, and provides special builds for the various Windows architectures. - Visit the PHP wiki to learn all about changes to PHP V5.3. - on learning to program with PHP, see "Learning PHP, Part 1," Part 2, and Part 3. - Planet PHP is the PHP developer community news source. 
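As a quick illustration of the ereg-to-PCRE migration, the same check written both ways (the pattern is only an example):
// deprecated in PHP V5.3 - emits E_DEPRECATED
if (ereg('^[0-9]+$', $input)) { /* ... */ }
// PCRE equivalent: add delimiters, behavior is otherwise the same here
if (preg_match('/^[0-9]+$/', $input)) { /* ... */ }
// case-insensitive variant: eregi() becomes the /i modifier
if (preg_match('/^[a-z]+$/i', $input)) { /* ... */ }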
- The PHP Manual has information about PHP data objects and their capabilities. - Visit Safari Books Online for a wealth of resources for open source technologies. - PHPMyAdmin is a popular PHP application that has been packaged in Phar as an example of how easy Phar archives are to use. - Get PHP V5.2.
http://www.ibm.com/developerworks/opensource/library/os-php-5.3new5/index.html?ca=dgr-lnxw64Migrate2PHP5.3&S_TACT=105AGY46&S_CMP=grsitejw64
CC-MAIN-2014-23
en
refinedweb
import X between submodules in a package
Discussion in 'Python' started by Donn Ingle, Dec 19, 2007.
47 - TG - Jul 20, 2006
Python submodules and name imports - Frank Aune, Aug 23, 2007, in forum: Python - Replies: 1 - Views: 797 - Pádraig - Aug 23, 2007
Noddy with submodules? - Torsten Mohr, Sep 7, 2009, in forum: Python - Replies: 1 - Views: 284 - Gabriel Genellina - Sep 8, 2009
Automatic import of submodules - Massi, Nov 25, 2011, in forum: Python - Replies: 4 - Views: 237 - Shambhu Rajak - Nov 28, 2011
Move modules to submodules question, Jan 11, 2013, in forum: Python - Replies: 1 - Views: 102 - Peter Otten - Jan 11, 2013
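For context on the thread title, intra-package imports usually take one of two forms. A minimal sketch with hypothetical module names (core.py defines helper(), gui.py wants to call it, both inside mypkg):
# mypkg/gui.py
from mypkg.core import helper   # absolute import, works everywhere
# from .core import helper      # explicit relative import, available since Python 2.5

def draw():
    return helper()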
http://www.thecodingforums.com/threads/import-x-between-submodules-in-a-package.562215/
CC-MAIN-2014-23
en
refinedweb
Communication between two phones using NFC Note: This is a community entry in the Windows Phone 8 Wiki Competition 2012Q4. What is NFC ?. NFC was present in a single Windows Phone 7 device, the Nokia Lumia 610 Quiksilver, exclusive for the French market. However, developers didn't have access to this feature . NFC and Windows Phone 8 NFC communications are provided by the ProximityDevice class (namespace Windows.Networking.Proximity). To use NFC, you need to add a specific capability: open your WMAppManifest.xml and add the capability ID_CAP_PROXIMITY. If you don't do this, an exception will be launched when you will try to access the ProximityDevice. Test if NFC is available NFC chip-sets aren't mandatory on Windows Phone, so before we use NFC, it's necessary to test if it is present. if (ProximityDevice.GetDefault() != null) MessageBox.Show("NFC present"); else MessageBox.Show("Your phone has no NFC or NFC is disabled"); Restrict your application to NFC-enabled devices If NFC is essential for your application, you can specify in your WMAppManifest.xml file that your application must be only visible to phones with an NFC chip. To enable this, check the ID_REQ_NFC option in the Requirements tab. Protocols When you want to publish or subscribe to messages, you need to specify the type of messages you want. Message type values are case-sensitive strings consisted of one part or two parts: - <protocol>.<subtype> - <protocol> The following table shows the supported values for the protocol part of the message type: The subtype is a string of alphanumeric characters and any of the valid URI characters as defined by RFC 3986: - . _~ : / ? # [ ] @ ! $ & ‘ ( ) * + , ; = %. The subtype cannot exceed a length of 250 characters. Example: Windows.NokiaSample Subcribe to a message To subscribe messages, simply call the SubcribeMessage method and pass the message type that you want to listen to. In our example, we will ask to listen to messages of type Windows and as type NokiaExample. var proximitydevice = ProximityDevice.GetDefault(); if (proximitydevice == null) MessageBox.Show("NFC not present"); var subscriptionId = proximitydevice.SubscribeForMessage("Windows.NokiaExample", (device, message) => { Deployment.Current.Dispatcher.BeginInvoke(() =>{ MessageBox.Show(message.DataAsString); }); }); Be careful, the callback is not called in the UI thread ! Stop subscribing The phone will continue to listen to NFC tags as long as your application will be executed. To stop the broadcast, call StopSubscribingForMessage method with the subscription ID returned from the SubscribeForMessage method. proximitydevice.StopSubscribingForMessage(_subscriptionId.Value); As soon as Windows Phone reads a NFC tag of the given type, the callback passed as a parameter will be executed. Publish messages We have seen how to subscribe to a message type, we will now see how to post a message. 
As previously, we will use the ProximityDevice object, but this time, we will use the PublishMessage method that takes two parameters: - type var proximitydevice = ProximityDevice.GetDefault(); if (proximitydevice == null) MessageBox.Show("NFC not present"); var subscriptionId = proximitydevice.PublishMessage("Windows.NokiaExample", "Hello you!"); A third parameter can be added allowing you to have a callback indicating that the message has been transmitted: var proximitydevice = ProximityDevice.GetDefault(); if (proximitydevice == null) MessageBox.Show("NFC not present"); var publishId=proximitydevice.PublishMessage("Windows.NokiaExample", "Hello you!",(device, messageId)=>{ Deployment.Current.Dispatcher.BeginInvoke(() =>{ MessageBox.Show("Message transmitted !"); }); }); The method returns a long value, that will serve you later to call the StopPublishingMessage method. Take in mind that the phone will continue to publish the message as long as you haven't stopped the publication. Publish Uri or Binary messages You can also use the PublishBinaryMessage or PublishUriMessage methods to publish binary or Uri message: Note: PublishUriMessage doesn't need type. Stop publishing messages To stop the broadcast of a message, call the StopPublishingMessage method using a publication ID returned from the PublishMessage, PublishBinaryMessage, and PublishUriMessage methods. var proximitydevice = ProximityDevice.GetDefault(); proximitydevice.StopPublishingMessage(_publishId); Publishing multiple messages When you call the PublishMessage method many times, be aware that your new message will not overwrite the previous but will complete it. It's essential to call StopPublishingMessage before calling PublishMessage if you want to replace the previous message. Sample code Here's a Windows Phone 8 C# project that allows to demonstrate how to communicate between two phones: File:NFC WP8 sample app.zip
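The PublishUriMessage and PublishBinaryMessage methods mentioned above are not shown in the article; a minimal sketch follows. The URI and payload are placeholders, the custom message type reuses the Windows.NokiaExample convention from the earlier snippets, and the types come from Windows.Networking.Proximity and Windows.Storage.Streams.
var proximitydevice = ProximityDevice.GetDefault();
if (proximitydevice == null) MessageBox.Show("NFC not present");

// publish a URI; the receiving phone is prompted to open it
long uriPublishId = proximitydevice.PublishUriMessage(new Uri("http://developer.nokia.com"));

// publish arbitrary bytes under a custom message type
var writer = new DataWriter(); // writes strings as UTF-8 by default
writer.WriteString("Hello from binary!");
long binaryPublishId = proximitydevice.PublishBinaryMessage("Windows.NokiaExample", writer.DetachBuffer());

// stop both publications when they are no longer needed
proximitydevice.StopPublishingMessage(uriPublishId);
proximitydevice.StopPublishingMessage(binaryPublishId);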
http://developer.nokia.com/community/wiki/index.php?title=Communication_between_two_phones_using_NFC&oldid=179306
CC-MAIN-2014-23
en
refinedweb
URIs rather than using memory streams whenever you can. The XAML framework can associate the same media resources. APIs from the Windows.Graphics.Imaging namespace. You might need these APIs if your app scenario involves image file format conversions, or manipulation of an image where the user can save the result as a file. The encoding APIs are also supported by the Guidelines for scaling to pixel density. Exposing basic information about UI elements. Windows 8 behavior For Windows 8, resources can use a resource qualifier pattern to load different resources depending on device-specific scaling. However, resources aren't automatically reloaded if the scaling factor changes while the app is running. In this case apps would have to take care of reloading resources, by handling the DpiChanged event (or the deprecated LogicalDpiChanged event) and using ResourceManager APIs to manually reload the resource that's appropriate for the new scaling factor. Starting with Windows 8.1, any resource that was originally retrieved for your app is automatically re-evaluated if the scaling factor changes while the app is running. In addition, when that resource is the image source for an Image object, then one of the source-load events (ImageOpened or ImageFailed) is fired as a result of the system's action of requesting the new resource and then applying it to the Image. The scenario where a run-time scale change might happen is if the user moves your app to a different monitor when more than one is available. If you migrate your app code from Windows 8 to Windows 8.1 you may want to account for this behavior change, because it results in ImageOpened or ImageFailed events that happen at run-time when the scale change is handled, even in cases where the Source is set in XAML. Also, if you did have code that handled DpiChanged/LogicalDpiChanged and reset the resources, you should examine whether that code is still needed given the new Windows 8.1 automatic reload behavior. Apps that were compiled for Windows 8 but running on Windows 8.1 continue to use the Windows 8 behavior. Requirements See also - FrameworkElement - Quickstart: Image and ImageBrush - XAML images sample - Optimize media resources - BitmapSource - FlowDirection - Windows.Graphics.Imaging - Source
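A minimal sketch of the URI-based pattern described in the remarks, with the source-load events wired up; the asset name is hypothetical and the types come from Windows.UI.Xaml.Controls and Windows.UI.Xaml.Media.Imaging.
// the ms-appx URI lets the resource system pick the scale-qualified variant
// (for example Assets/Logo.scale-140.png) and re-request it if the scale factor changes
var image = new Image();
image.ImageOpened += (s, e) => { /* the (possibly rescaled) resource was applied */ };
image.ImageFailed += (s, e) => { /* fall back to a default image or log the failure */ };
image.Source = new BitmapImage(new Uri("ms-appx:///Assets/Logo.png"));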
http://msdn.microsoft.com/en-us/library/windows/apps/br242752.aspx?cs-save-lang=1&cs-lang=cpp
CC-MAIN-2014-23
en
refinedweb
Ok, so I want to start off by saying that I am finishing my Associates in Computer Information Systems, and I AM NOT A PROGRAMMER!! I am not good at it and it is extremely difficult for me to wrap my head around it (which is odd for me because I love to learning; I am learning to speak Russian just for fun). So, my assignment is to fill in the blanks of code with my own to get the outcome of a tic tac toe game. Here is all of the code: Code Java: /* * */ package tictactoe; /** * * @author Gorgo */ public class TicTacToe { // #2. variable and constant declarations - given by book static int [][] gameboard; static final int EMPTY = 0; static final int NOUGHT = -1; //PLAYER O static final int CROSS = 1; //PLAYER X // #3. utility methods - given by book static void set(int val, int row, int col) throws IllegalArgumentException { if (gameboard[row][col] == EMPTY) gameboard[row][col] = val; else throw new IllegalArgumentException ("Player already there!"); } static void displayboard() { for (int r = 0; r < gameboard.length; r++) { System.out.print ("|"); for (int c = 0; c < gameboard[r].length; c++) { switch (gameboard[r][c]) { case NOUGHT: System.out.print("O"); break; case CROSS: System.out.print("X"); break; default: //Empty System.out.print(" "); } System.out.print("|"); } System.out.println("\n--------\n"); } } // #5. define createBoard method - MY CODE static void createBoard(int rows, int cols) { //#6. Initialize gameboard int[][] createBoard = new int[rows][cols]; for (int r = 0; r < 3; r++) { for (int c = 0; c < 3; c++) { createBoard[rows][cols] = -1; } } } //#6. define winOrTie method - MY CODE static boolean winOrTie() { // #6. determine whether X or O won or there is a tie - MY CODE int winner = -2; for (int r = 0; r < 3; r++) if ((gameboard[r][0] == winner) && (gameboard[r][1] == winner) && (gameboard[r][2] == winner)) { return true; } for (int j = 0; j < 3; j++) if ((gameboard[0][j] == winner) && (gameboard[1][j] == winner) && (gameboard[2][j] == winner)) { return true; } if ((gameboard[0][0] == winner) && (gameboard[1][1] == winner) && (gameboard[2][2] == winner)) { return true; } if ((gameboard[0][2] == winner) && (gameboard[1][1] == winner) && (gameboard[2][0] == winner)) { return true; } return false; } public static void main(String[] args) { //#8. contents of main() - given by book displayboard(); int turn = 0; int playerVal; int outcome; java.util.Scanner scan = new java.util.Scanner(System.in); do { playerVal = (turn % 2 == 0) ? NOUGHT: CROSS; if (playerVal == NOUGHT) System.out.println ("\n-O's turn-"); else System.out.println ("\n-X's turn-"); System.out.print ("Enter row and column:"); try { set (playerVal, scan.nextInt(), scan.nextInt()); } catch (Exception ex) {System.err.println(ex);} turn++; outcome = winOrTie(); } while ( outcome == -2 ); displayboard(); switch (outcome) { case NOUGHT: System.out.println("O wins!"); break; case CROSS: System.out.println("X wins!"); break; case 0: System.out.println("Tie."); break; } } } So, the errors that I am getting are: #1. "Exception in thread "main" java.lang.NullPointerException" this is pointing to the 'displayboard' method: and here:and here:Code Java: Code Java: My problem that I am having with these errors is: 1. these particular lines of code were given to me; I did not write them; 2. I don't understand why it is pointing at the first bit. Everything that I have read in my text book says the exact same code. Therefore, my question is why is this one line (the 'displayboard method') an error. Thank you all in advance :)
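A likely cause, sketched below rather than asserted: the static gameboard array is never initialized, because the posted createBoard() fills a local array (and indexes it out of bounds with [rows][cols]) and is never called from main(), so displayboard() dereferences null. A minimal fix, assuming the rest of the class stays as posted, would be along these lines; the boolean return of winOrTie() versus the int outcome variable would still need reconciling separately.
// initialize the shared static board instead of a local array
static void createBoard(int rows, int cols) {
    gameboard = new int[rows][cols]; // every cell defaults to EMPTY (0)
}

// and call it before the board is first displayed
public static void main(String[] args) {
    createBoard(3, 3); // without this, gameboard stays null -> NullPointerException
    displayboard();
    // ... rest of the game loop as posted
}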
http://www.javaprogrammingforums.com/%20object-oriented-programming/31104-methods-booleans-initializing-oh-my-printingthethread.html
CC-MAIN-2014-23
en
refinedweb
Composite Input Components in JSF Composite components are a great feature of JSF 2.0. The canonical example is a login component with fields for the username and password: <mylib:login This has been well explained elsewhere. But here is what always baffled me. I want to have a composite date component, with three menus for day, month, and year. But I want it to have a single value of type java.util.Date, so I can use it like this: <mylib:date and not <mylib:date Why do I care? - My classes aren't all made up of strings and numbers. I use objects when I can. My Userclass has a property birthDayof type java.util.Date. I don't want to write boring code to take dates apart and put them back together. - I want to use bean validation for the date property, with a @Pastor @Futureannotation. I asked around and people told me that this couldn't be done with composite components—I'd have to write an actual custom component. But, as I discovered, it is not so. With a small dose of knowledge of the JSF lifecycle, and the poorly documented technique of backing components, this is actually pretty easy. Here goes. When you make a composite component, it is normally turned into a UINamingContainer that contains the child components in the implementation. But you can also force a different component to be used, provided - it implements the NamingContainermarker interface - its “family” is "javax.faces.NamingContainer"(don't ask...) The easiest way of using your own component is to make a class whose name is libraryName.compositeComponentName, such as mylib.date. (It's a bit weird to have a lowercase class name, but that's the price to pay for “convention over configuration”.) package mylib; public class date extends UIInput implements NamingContainer { public String getFamily() { return "javax.faces.NamingContainer"; } ... } Note that I extend UIInput and not UINamingContainer. In the world of JSF, UIInput is an “editable value holder”, a class that holds a value of an arbitrary type (not necessarily numbers or strings), and to which you can attach validators. The JSF lifecycle starts out like this: - The HTTP request (doesn't actually have to be HTTP—in JSF, everything is pluggable) delivers name/value pairs - Each component can pick from the request what it wants in the decodemethod and sets its submitted value - The submitted value is converted to the desired type (integer, date, whatever), becoming the converted value - If the converted value passes validation, it becomes the value of the component For a composite component, the submitted value is a combination of the submitted values of the children. You could combine them by putting them into a map, but I simply say that the submitted value is the composite component: public class date extends UIInput implements NamingContainer { ... public Object getSubmittedValue() { return this; } ... } (If you don't override this method, the submitted value is null, and that gets into a murky corner of processing that you want to avoid.) The conversion from a bunch of values to a date happens in getConvertedValue: public class date extends UIInput implements NamingContainer { ... 
    protected Object getConvertedValue(FacesContext context, Object newSubmittedValue) {
        UIInput dayComponent = (UIInput) findComponent("day");
        UIInput monthComponent = (UIInput) findComponent("month");
        UIInput yearComponent = (UIInput) findComponent("year");
        int day = (Integer) dayComponent.getValue();
        int month = (Integer) monthComponent.getValue();
        int year = (Integer) yearComponent.getValue();
        if (isValidDate(day, month, year)) // helper method that checks for month lengths, leap years
            return new Date(year - 1900, month - 1, day);
        else
            throw new ConverterException(new FacesMessage(...));
    }
    ...
}

This is very similar to the usual conversion action, except that I combine the values from multiple child components. (I attached a javax.faces.Integer converter to each of the children so I don't have to convert the submitted strings to integers myself.)

That takes care of processing the input. On the rendering side, I just populate the children before rendering them:

public class date extends UIInput implements NamingContainer {
    ...
    public void encodeBegin(FacesContext context) throws IOException {
        Date date = (Date) getValue();
        UIInput dayComponent = (UIInput) findComponent("day");
        UIInput monthComponent = (UIInput) findComponent("month");
        UIInput yearComponent = (UIInput) findComponent("year");
        dayComponent.setValue(date.getDate());
        monthComponent.setValue(date.getMonth() + 1);
        yearComponent.setValue(date.getYear() + 1900);
        super.encodeBegin(context);
    }
}

That's all. The same recipe works for any composite component that collects input for a complex data type. Here is the code of a sample application that works out of the box in GlassFish 3 (but not in Tomcat). Note that the sample application uses the composite component as an input for java.util.Date. It works with bean validation without any effort on the developer's part.

The moral of this is:
- The much maligned JSF lifecycle is actually pretty good. The decode/convert/validate order is what you need anyway, so why not have the framework manage it for you?
- The much maligned generality of JSF is pretty good too. They could have said “With HTTP, what comes in is strings, so why not just work with strings?” But here we take advantage that the source and target of the conversion can be any type.
- The declarative composite components that everyone raves about are great, but sometimes you've got to be able to add code. This blog shows you how to do it.

Respect by etf - 2010-08-17 06:34
I just want to express my respect. Very useful post for me. Thank you very much.

Convention over configuration by gleenn - 2010-02-14 16:01
Hi Professor Horstmann. I'm just taking a ride reading about all this JSF stuff (I'm a Rails guy now). So why on earth would they have you using lowercase class names by convention?

They just didn't want to go by cayhorstmann - 2010-02-15 06:58
They just didn't want to go through the trouble of uppercasing the name, perhaps because they had bad memories of...

Another Backing Component Approach by jdlee - 2010-02-01 08:01
Cay, I used backing components quite a bit in Mojarra Scales, so I thought I'd share a tip to get you around your lower case class issue. Here's the composite component snippet:

<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:composite="http://java.sun.com/jsf/composite">
    <composite:interface componentType="com.example.Foo">
        ...
    </composite:interface>
</html>

Note the addition of the componentType attribute.
My backing component would then look something like this:

@FacesComponent("com.example.Foo")
public class Bar extends UIInput implements NamingContainer {
    @Override
    public String getFamily() {
        return UINamingContainer.COMPONENT_FAMILY;
    }
    // ...
}

Using @FacesComponent, I set the type of the component to that required by the composite component. Just for grins, I made the class name of the component something completely different, just to highlight the fact that the class name has no necessary bearing on the type or family of the component. I do still implement NamingContainer and return UINamingContainer.COMPONENT_FAMILY as the family, but everything else can be whatever I want. I hope that helps. And that you'll be able to read this after java.net gets finished with it. :|

Re: Another Backing Component Approach by jdlee - 2010-02-01 08:02
Nope. It seems java.net hates source code. :) In short, add a componentType attribute to composite:interface, then attach @FacesComponent to the backing component, using the value of componentType as the value of the annotation, and you're set.

I am somewhat perplexed. by varan - 2010-02-01 00:31
If you are writing as a developer who wants to explore how to expand JSF, this is great. However, if you want to use a better Web GUI framework, it seems to me that this is somewhat of a wasted effort, as ZK has matured quite a bit and provides all the things that you seek. I have used it in a substantially large (large in terms of the size of the codebase and functionality) deployed project, and ZK has proved to be very robust. It has quite a few advantages over JSF, including the fact that if you want to write 99% of your web application in pure Java without having to deal with tagged files etc. you can do that as well. Perhaps I am missing some intrinsic value in JSF that leads you to ignore something like ZK altogether. (I have no connection with ZK, except as a very satisfied user.)

Re: I am somewhat perplexed. by jdlee - 2010-02-01 08:33
What I'm guessing you and the Wicket people are missing is that not everyone likes to write user interfaces in pure Java code. Some of us like to use a DSL, say, a markup language like XHTML, to write our UIs. Even on the thick client side you can see the same sort of thinking with the invention of declarative languages like JavaFX. It's simply a different preference. While you may claim the programmatic, pure-Java approach is superior, I don't think it's a claim that can be proven. While you may be more productive and enjoy your time more working in 100% Java code, some of us are not. For some of us in the latter camp, we find JSF's use of XHTML to be a very nice and capable abstraction for defining the view layer. For what it's worth, I DO have a connection to JSF, as I'm on the Expert Group and a member of the Mojarra Dev team, but I'm in both of those roles because I started off as a very satisfied user. In a nutshell, different strokes for different folks. There's no need to hijack every JSF thread with $OTHER_FRAMEWORK advocacy at every turn.

Stockholm syndrome by ronaldtm - 2010-02-01 17:30
"What I'm guessing you and the Wicket people are missing is that not everyone likes to write user interfaces in pure Java code." Actually, with Wicket you write all the visual part (layout, style, etc.) in XHTML/CSS/JavaScript (plain old web standards), and just the behavior in Java.
Yes, you have to build the component tree in Java, but it contains only the dynamic and action components (inputs, buttons, dynamic labels, ajax-refreshed panels, etc.), not the whole layout, like Swing or GWT (1.x), as you imply. The advantage is, you use a UI language to describe the UI (html, css), and an object-oriented, fully refactorable, statically typed, tool-supported language to describe behavior (Java). JSF is not refactorable at all, the tools available are little more than plain HTML editors (no static analysis, for example), and it pulls too much complexity and logic into XML templates and XML configurations (the switch to annotations is not much better). Sweet. Well, some people do prefer to live in this kind of environment. What do they call it?... oh yes, the Stockholm syndrome! ;)

Re: I am somewhat perplexed. by varan - 2010-02-01 11:05
I think that one of the objectives of this site should be (if it is not already) to educate developers on various alternatives and provide some definitive opinion on the more desirable ways to develop real applications. My post was offered in that spirit, and I have no wish, malicious or otherwise, to hijack anyone's platform. Having said that, even if you like to use tags (XHTML) you can use the more mature ZK, which already has many of the features that the good professor is groping for in JSF.

Re: wicket by peat_hal - 2010-01-31 10:29
Oops, forgot: how to finally use the component:

private Date date;

public HomePage(final PageParameters pp) {
    add(new MyDatePicker("datepicker", new PropertyModel(this, "date")));
}

so no need for separate properties (year, month, day) and here is the project's home: with a lot of examples and built-in components ... such a date picker is already built-in as yui datepicker or datechooser ... but here is what you need...

wicket by peat_hal - 2010-01-31 10:19
Hi, don't get me wrong, I really appreciate your work, because I worked with JSF for some time. I know how hard it is to create custom components with JSF ... But recently I gave Wicket a try and, besides the better UI and code separation, it is a pleasure (+fast!) to create custom components. This date picker component which you created is as easy as something like:

public MyDatePicker(String id, IModel model) {
    super(id, model);
    add(new DropDownChoice("days", createDays()));
    add(new DropDownChoice("months", createMonths()));
    add(new DropDownChoice("years", createYears()));
}

and on the html side:

[wicket:panel]
day
[select wicket:id="days"] ... [option value="2"]2[/option] ... [/select]
month
[select wicket:id="months"] ... [option value="12"]12[/option] ... [/select]
year
[select wicket:id="years"] [option selected="selected" value="2009"]preview 2009[/option] [option value="2010"]2010[/option] [/select]
[/wicket:panel]

Afterwards you can easily add ajax behaviour to change the provided days depending on the month. All in all I guess this component will take me < 30min. After that it will work with all the major browsers and wicket even provides non-ajax fallback etc etc. But the best thing is that I can embed this component everywhere I like. So, give wicket a try :-) !

wicket by caroljmcdonald - 2010-02-01 07:50
I'm learning and working with wicket now. I don't find it so easy (maybe it will get easier). One big disadvantage that I see with wicket compared to JSF is that you have to write a LOT more code, because in wicket you have to write java code to build the page's component tree, whereas in jsf you only have this in html.
Also the way you build your tree in java has to match the html exactly or boom, meaning that if you change one you have to change the other. Also it means a lot of code changes if you change the arrangement of your components on the page, since this changes the component hierarchy or tree. Also you have to override a lot of methods, and I find it confusing to know what method to override. In JSF there are only get, set, or action methods. One more thing: wicket does not have enough documentation, only one good book. I think JSF is easier to learn and the code is easier to maintain.

hmmh, okay. maybe this is a by peat_hal - 2010-02-01 13:33
hmmh, okay, maybe this is a personal taste :-). the wicket style is simpler in many ways for me and it was a great experience how I changed a normal form into an ajaxified one - within 10 minutes or so. I didn't feel that it is a drawback for me to build the html tree and the java tree side by side, because wicket always explains in detail what's going wrong. And if you develop with jetty you fix those problems in under 5 seconds :-) And once you set up your components you can arrange them as you like. So no, I don't think that JSF is easier to maintain; IMHO it leads to more copy-and-paste actions ...
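Relating this back to the article's date component: a minimal composite page for it might look roughly like the sketch below. This is not taken from the original post; the item-supplying bean (dateBean) and the resource path are assumptions, while the child ids (day, month, year) and the javax.faces.Integer converters come from the article.

<!-- resources/mylib/date.xhtml : hypothetical sketch -->
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:composite="http://java.sun.com/jsf/composite"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://java.sun.com/jsf/core">
  <composite:interface>
    <composite:attribute name="value" type="java.util.Date"/>
  </composite:interface>
  <composite:implementation>
    <!-- child ids must match what the backing class looks up with findComponent() -->
    <h:selectOneMenu
      <f:converter
      <f:selectItems
    </h:selectOneMenu>
    <h:selectOneMenu
      <f:converter
      <f:selectItems
    </h:selectOneMenu>
    <h:selectOneMenu
      <f:converter
      <f:selectItems
    </h:selectOneMenu>
  </composite:implementation>
</html>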
https://weblogs.java.net/blog/cayhorstmann/archive/2010/01/30/composite-input-components-jsf
CC-MAIN-2014-23
en
refinedweb
Scout/Tutorial/3.8/webservices/Create WsLogForm

Create Ws Log Form

On the client node, go to 'Forms'. Right click on the node to create the WsLogForm [1]. As the name of the form, enter Ws Log and choose to create the Form ID, which is the primary key of the log entry [2]. Click next to choose the artifacts which should also be created by the Scout SDK [3]. Uncheck all permissions, as authorization is not part of this tutorial. As the WS log is read-only, also uncheck NewHandler. Because the form only displays read-only data, change the ModifyHandler as follows:

public class ModifyHandler extends AbstractFormHandler {
    @Override
    public void execLoad() throws ProcessingException {
        IWsLogProcessService service = SERVICES.getService(IWsLogProcessService.class);
        WsLogFormData formData = new WsLogFormData();
        exportFormData(formData);
        formData = service.load(formData);
        importFormData(formData);
        // disable whole form
        setEnabledGranted(false);
    }
}

Create Form Fields

On the WsLogForm node, go to the 'MainBox'. Right click on the MainBox to create the following 4 Form Fields:

DateField
Type: Date Field
Name: Date
Class name: DateField

ServiceField
Type: String Field
Name: Service
Class name: ServiceField

PortField
Type: String Field
Name: Port
Class name: PortField

OperationField
Type: String Field
Name: Operation
Class name: OperationField

To display the SOAP message for request and response, we create a TabBox that contains the two tabs 'Request' and 'Response', respectively.

SoapMessageBox
Type: Tab Box
Name: <leave empty as no label>
Class name: SoapMessageBox

Because the tab box SoapMessageBox should not have a label, go to that node and uncheck the Label Visible property in the 'Advanced Properties' section of the Scout Property View. Next, we will create the two tabs. Therefore, right click on SoapMessageBox [4] and create the following two boxes:

RequestBox
Name: Request
Class name: RequestBox

ResponseBox
Name: Response
Class name: ResponseBox

Finally, add two String fields to hold request and response to the boxes. Right click on RequestBox to create the 'Request' String field [5]:

RequestField
Type: String Field
Name: <leave empty as no label>
Class name: RequestField

Right click on ResponseBox to create the 'Response' String field [6]:

ResponseField
Type: String Field
Name: <leave empty as no label>
Class name: ResponseField

In both fields, adjust their properties in the Scout Property View as follows [7]:
- Set Grid H to 5
- Set Grid W to 0 (FULL_WIDTH)
- Set Label Visible to false
- Set Max Length to inf (Integer.MAX_VALUE)
- Set Multiline Text to true
- Set Wrap Text to true

Associate WsLogForm with WsLogTablePage

To view a log record, you have to add a 'VIEW' menu to the WsLogTablePage. On the client node, go to the node 'Desktop' | 'Outlines' | 'StandardOutline' | 'Child Pages' | 'WsLogTablePage' | 'Table' | 'Menus'. Right-click on the menu node to create the following menu:

VIEW menu [8]
Name: View WS Log...
Class Name: ViewWsLogMenu
Super Type: AbstractMenu
Form to start: WsLogForm
Form handler: ModifyHandler

We also have to provide the WsLogNr primary key as argument to the WsLogForm.
For that reason, double click the ViewWsLogMenu to modify the code in execAction() as follows:

@Override
protected void execAction() throws ProcessingException {
    WsLogForm form = new WsLogForm();
    // Add the following line to set the primary key of the selected log record to the form
    form.setWSLogNr(getWsLogNrColumn().getSelectedValue());
    form.startModify();
    form.waitFor();
    if (form.isFormStored()) {
        reloadPage();
    }
}

Load WS Log data

The Scout SDK already created WsLogProcessService in order to load WS log data. Because we are only reading but not updating log entries, you can remove all operations except load. Please implement the load method stub as follows:

public class WsLogProcessService extends AbstractService implements IWsLogProcessService {
    @Override
    public WsLogFormData load(WsLogFormData formData) throws ProcessingException {
        SQL.selectInto("" +
            "SELECT EVT_DATE, " +
            "       SERVICE, " +
            "       PORT, " +
            "       OPERATION, " +
            "       REQUEST, " +
            "       RESPONSE " +
            "FROM   WS_LOG " +
            "WHERE  WS_LOG_NR = :wSLogNr " +
            "INTO   :date, " +
            "       :service, " +
            "       :port, " +
            "       :operation, " +
            "       :request, " +
            "       :response"
            , formData);
        return formData;
    }
}
http://wiki.eclipse.org/Scout/Tutorial/3.8/webservices/Create_WsLogForm
CC-MAIN-2014-23
en
refinedweb
Project Description:

Module scope: to trigger third-party software based on data queried from a MongoDB. The module should check the MongoDB at a regular interval (this should be configurable) for orders which are in a status of "PDF Ready" and not currently being processed by another thread. If any order is found in that state, the PDF name and preflight profile name should be retrieved for that particular item and passed to the Linux CLI (please note that if no preflight profile name is found against that particular item, then the profile information should be retrieved from the client config). On successful retrieval of this data, the module should create a command line for the Linux CLI which contains the static path to the relevant profile file, the static path to the PDF file to be preflighted, and the destination folder path for the processed file. Once the command has been passed to the Linux CLI and the pdfToolbox engine has returned a report XML, the status of the item is updated in the MongoDB as "print ready". The report XML file will need to be processed and certain namespaces within the XML ingested back into the Mongo database so that we can enable some reporting services for the end user.

Sample of a Linux CLI command to be triggered:

./pdfToolbox /"root/preflightprofiles/DigitalPrintingHigh.kfpx" /"root/dropbox/PDF/Test.pdf" --report --outputfolder /"root/dropbox/preflighted/site1" --suffix "pref"

This should be written in JavaScript and use Node.js; we need the ability to run up to 8 instances of the pdfToolbox engine concurrently.
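A rough sketch of how the polling and spawning step of this brief could look in Node.js. This is illustrative only: the field names on the order object, the pdfToolbox install path, and the interval value are assumptions; the argument order and flags mirror the sample command above, and the MongoDB query/update steps are left as comments.

// Hypothetical worker sketch: poll for "PDF Ready" orders and run pdfToolbox on them.
var execFile = require('child_process').execFile;

var POLL_INTERVAL_MS = 30000;              // configurable polling interval (assumed value)
var PDFTOOLBOX = '/root/pdfToolbox';       // assumed install path
var MAX_WORKERS = 8;                       // up to 8 concurrent pdfToolbox instances
var running = 0;

function processOrder(order) {             // order: { profilePath, pdfPath, outputFolder, ... } (assumed shape)
  running++;
  var args = [
    order.profilePath,                     // e.g. /root/preflightprofiles/DigitalPrintingHigh.kfpx
    order.pdfPath,                         // e.g. /root/dropbox/PDF/Test.pdf
    '--report',
    '--outputfolder', order.outputFolder,  // e.g. /root/dropbox/preflighted/site1
    '--suffix', 'pref'
  ];
  execFile(PDFTOOLBOX, args, function (err, stdout, stderr) {
    running--;
    if (err) {
      // mark the order as failed / retryable in MongoDB here
      return;
    }
    // 1. parse the report XML written by pdfToolbox (path derived from outputFolder)
    // 2. ingest the relevant report fields back into MongoDB for reporting
    // 3. update the order status to "print ready"
  });
}

setInterval(function () {
  if (running >= MAX_WORKERS) return;
  // query MongoDB for orders with status "PDF Ready" that are not already being
  // processed, atomically flag them as in-progress, then call processOrder(order)
}, POLL_INTERVAL_MS);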
http://www.freelancer.com/projects/Javascript-Linux/Node-module-trigger-Linux-Commands.html
CC-MAIN-2014-23
en
refinedweb
Is there any way, short of using unsafePerformIO, to implement a combinator

    f :: [IO a] -> IO [a]

in a way that the result is produced lazily? The problem with

    sequence :: (Monad m) => [m a] -> m [a]
    sequence = foldr (liftM2 (:)) (return [])
      where liftM2 f a b = do { a' <- a; b' <- b; return (f a' b') }

is that it consumes the entire input before producing a result.

Thanks for any advice,

--Joe English

  jenglish@flightlab.com
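One possible answer, not part of the original message: System.IO.Unsafe.unsafeInterleaveIO can defer each action until its element is demanded, which makes the result list lazy (with the usual caveats about lazy IO and unpredictable effect ordering). A sketch:

import System.IO.Unsafe (unsafeInterleaveIO)

-- Produce the result list lazily: each remaining action runs only when its element is demanded.
lazySequence :: [IO a] -> IO [a]
lazySequence []     = return []
lazySequence (m:ms) = do
    x  <- m
    xs <- unsafeInterleaveIO (lazySequence ms)
    return (x : xs)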
http://www.haskell.org/pipermail/haskell/2001-November/008349.html
CC-MAIN-2014-23
en
refinedweb
Bioinformatics is an interdisciplinary field joining the fields of Biology and Computer Science. Bioinformatics is concerned with the organization and analysis of biological data; for molecular biologists, it provides computational tools for storing, searching, and analyzing sequence data. My current major goal is to develop and provide software running under Windows that offers useful tools for the analysis of biological data, especially DNA sequence data, and that at the same time can be loaded on a personal computer. In this project, I focused on using C# and SQL in bioinformatics applications and on developing bioinformatics algorithms. All C# code in this work is collected in a single program called "AzharDNA", which can be loaded on any personal computer running Microsoft Windows. The results of this program are exported in different data types such as images, text, or tables. You need a good background in the fields linked to bioinformatics, such as molecular biology, algorithms, and mathematics. I developed a simple algorithm for every operation in this program, defining how the operation begins and ends, and most algorithms are also demonstrated in this project using flowcharts.

DNA is a double-stranded molecule; each type of base on one strand forms a bond with just one type of base on the other strand, according to a specific rule called the base-pair rule. Purines form hydrogen bonds to pyrimidines: adenine (A) forms a base pair with thymine (T), and guanine (G) forms a base pair with cytosine (C). This method takes the first strand's sequence and exports the second strand's sequence, i.e. the complementary sequence.

public static string DNA_complementry(string Seq)
{
    string DNA_Comp = "";
    char[] d = Seq.ToLower().ToCharArray();
    for (int n = 0; n < d.Length; ++n)
    {
        switch (d[n])
        {
            case ('t'): d[n] = 'a'; break;
            case ('a'): d[n] = 't'; break;
            case ('c'): d[n] = 'g'; break;
            case ('g'): d[n] = 'c'; break;
        }
        DNA_Comp += Convert.ToString(d[n]);
    }
    return (string)DNA_Comp;
}

// or this code, which has been suggested by Jaime Olivares
public static string DNA_complementry(string Seq)
{
    string ret = Seq.Replace('t', '*');
    ret = ret.Replace('a', 't');
    ret = ret.Replace('*', 'a');
    ret = ret.Replace('c', '*');
    ret = ret.Replace('g', 'c');
    return ret.Replace('*', 'g');
}

During transcription, a complementary, antiparallel RNA strand is produced. As opposed to DNA replication, transcription results in an RNA complement that includes uracil (U) in all instances where thymine (T) would have occurred in a DNA complement.

public static string DNA_To_RNA(string Seq)
{
    string RNA = Seq.ToLower().Replace('t', 'u');
    return (string)RNA;
}

Reverse transcriptase creates single-stranded DNA from an RNA template.

public static string RNA_To_DNA(string Seq)
{
    // suggested by Pete O'Hanlon
    string DNA = Seq.ToLower().Replace('u', 't');
    return DNA;
}

It's like the DNA complementary method, but there is U here instead of T.
public static string RNA_complementry(string Seq)
{
    string RNA_Comp = Seq.Replace('u', '*');
    RNA_Comp = RNA_Comp.Replace('a', 'u');
    RNA_Comp = RNA_Comp.Replace('*', 'a');
    RNA_Comp = RNA_Comp.Replace('c', '*');
    RNA_Comp = RNA_Comp.Replace('g', 'c');
    return RNA_Comp.Replace('*', 'g');
}

This method returns the reversed sequence:

public static string Reversion(string Seq)
{
    string Rev_Seq = "";
    char[] d = Seq.ToLower().ToCharArray();
    Array.Reverse(d);
    for (int i = 0; i < d.Length; i++)
    {
        Rev_Seq += d[i];
    }
    return (string)Rev_Seq;
}

This method calculates the percentage of each nucleotide type against the total length of the sequence. The actual percentages vary between species and organisms. The specific ratio that you as a human have is part of who you are, though order, of course, also matters. First, we will count the occurrences of every nucleotide.

public static void G_C_A_T_Content(string Seq, out int A, out int C, out int G, out int T)
{
    int g = 0;
    int a = 0;
    int c = 0;
    int t = 0;
    for (int i = 0; i < Seq.Length; i++)
    {
        if (Seq[i] == 'a') a++;
        else if (Seq[i] == 't') t++;
        else if (Seq[i] == 'c') c++;
        else if (Seq[i] == 'g') g++;
    }
    G = g;
    C = c;
    T = t;
    A = a;
}

Then we will use this method in the percentage method.

public static void Nu_Percentage(string Seq, out float Apr, out float Cpr, out float Tpr, out float Gpr)
{
    int an, cn, gn, tn;
    G_C_A_T_Content(Seq.ToLower(), out an, out cn, out gn, out tn);
    Apr = (float)an / Seq.Length * 100;
    Tpr = (float)tn / Seq.Length * 100;
    Gpr = (float)gn / Seq.Length * 100;
    Cpr = (float)cn / Seq.Length * 100;
}

In molecular biology, GC-content is the percentage of bases in a DNA molecule that are either guanine (G) or cytosine (C). In PCR experiments, the GC-content of primers is used to predict their annealing temperature to the template DNA. A higher GC-content level indicates a higher melting temperature.

public static void GC_AT_Content(string Seq, out int GC_Content, out int AT_Content)
{
    int gc = 0;
    int at = 0;
    string s = Seq.ToUpper(); // normalize case so lower-case input is counted as well
    for (int i = 0; i < s.Length; i++)
    {
        if (s[i] == 'C' || s[i] == 'G') gc++;
        if (s[i] == 'A' || s[i] == 'T') at++;
    }
    GC_Content = gc;
    AT_Content = at;
}

The molecular weight of a DNA sequence can be estimated from its base composition. This will be used in PCR primer design equations and many tools in upcoming articles.

public static double DNA_MW(string Seq)
{
    int a = 0;
    int c = 0;
    int g = 0;
    int t = 0;
    G_C_A_T_Content(Seq, out a, out c, out g, out t);
    double MW = 329.2 * g + 313.2 * a + 304.2 * t + 289.2 * c;
    return MW;
}

DNA denaturation, also called DNA melting, is the process by which double-stranded deoxyribonucleic acid unwinds and separates into single strands through the breaking of the hydrogen bonds between the bases.

public static int DNA_Melting_Temp(string Sequence)
{
    int GC_Content;
    int AT_Content;
    GC_AT_Content(Sequence, out GC_Content, out AT_Content);
    // Wallace rule for short oligonucleotides: Tm = 4(G+C) + 2(A+T)
    int Melt = 4 * GC_Content + 2 * AT_Content;
    return Melt;
}

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).
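A short usage sketch, not from the original article, showing how the helpers above fit together; it assumes the calls are made from inside the same class, and the example sequence is arbitrary.

// Hypothetical driver illustrating the helper methods above.
string seq = "atgcgctaa";

string complement = DNA_complementry(seq);      // "tacgcgatt"
string mRNA       = DNA_To_RNA(seq);            // "augcgcuaa"
string reversed   = Reversion(seq);             // "aatcgcgta"

int gc, at;
GC_AT_Content(seq, out gc, out at);             // gc = 4, at = 5

Console.WriteLine("GC count: " + gc);
Console.WriteLine("Melting temp (Wallace rule): " + DNA_Melting_Temp(seq) + " C");   // 4*4 + 2*5 = 26
Console.WriteLine("Approx. molecular weight: " + DNA_MW(seq) + " g/mol");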
http://www.codeproject.com/Articles/226888/AzharDNA-New-Bioinformatics-Program-Basic-tools-fo
CC-MAIN-2014-23
en
refinedweb